Blog

Papers accepted at IEEE-ICSC

We are very pleased to announce that our group got four papers accepted for presentation at IEEE-ICSC 2020.

The 14th IEEE International Conference on Semantic Computing (ICSC2020) addresses the derivation, description, integration, and use of semantics (“meaning”, “context”, “intention”) for all types of resources, including data, documents, tools, devices, processes, and people. The scope of ICSC2020 includes, but is not limited to, analytics, semantics description languages and integration (of data and services), interfaces, and applications.

Here are the pre-prints of the accepted papers with their abstracts:

  • “DISE: A Distributed in-Memory SPARQL Processing Engine over Tensor Data” by Hajira Jabeen, Eskender Haziiev, Gezim Sejdiu, and Jens Lehmann.
    Abstract: SPARQL is a W3C standard for querying data stored as Resource Description Framework (RDF). SPARQL queries are represented using triple patterns and are tailored to search for these patterns in RDF. Most of the existing SPARQL evaluators provide centralized, DBMS-inspired solutions that consume many resources and offer limited flexibility. In order to deal with the increasing amount of RDF data, it is important to develop scalable and efficient solutions for distributed SPARQL query evaluators. In this paper we present DISE, an open-source implementation of a distributed in-memory SPARQL engine that can scale out to a cluster of machines. DISE represents an RDF graph as a three-way distributed tensor for querying large-scale RDF datasets. This distributed tensor representation offers opportunities for novel distributed applications. DISE relies on translating SPARQL queries into Spark tensor operations by exploiting information about the query complexity and creating a dynamic execution plan. We have tested the scalability and efficiency of DISE on different datasets, and the results show that it is scalable and efficient while exploiting this relatively new representation format. (A small illustrative sketch of the tensor encoding follows this list.)
  • “Let’s build Bridges, not Walls – SPARQL Querying of TinkerPop Graph Databases with sparql-gremlin” by Harsh Thakkar, Renzo Angles, Marko Rodriguez, Stephen Mallette, and Jens Lehmann.
    Abstract: This article presents sparql-gremlin, a tool to translate SPARQL queries to Gremlin pattern matching traversals. Currently, sparql-gremlin is a plugin of the Apache TinkerPop graph computing framework, so users can run queries expressed in the W3C SPARQL query language over a wide variety of graph data management systems, including both OLTP graph databases and OLAP graph processing frameworks. With sparql-gremlin, we perform the first step to bridge the query interoperability gap between the Semantic Web and Graph database communities. The plugin has received adoption from both academia and industry research in its short timespan.

  • “VoColReg: A Registry for Supporting Distributed Ontology Development using Version Control Systems” by Abderrahmane Khiat, Lavdim Halilaj, Ahmad Hemid, and Steffen Lohmann (ICSC Resource Track).
    Abstract: The number of ontologies used for different purposes, such as data integration, information retrieval or search optimization, is constantly increasing. Therefore, it is crucial that ontologies can be developed and explored in an easy way by humans, and are accessible by intelligent agents. To this end, we created VoColReg on top of the VoCol platform. VoColReg provides an integrated registry that hosts VoCol instances, allowing the community to access, browse, reuse, and improve ontologies in a collaborative fashion. VoColReg integrates several improved features, such as RDF-Doctor, which is able to simultaneously identify a comprehensive list of syntax errors and automatically correct a subset of them. Currently, the VoColReg platform hosts more than 21 ontologies from various domains, where nine of them are publicly available. We analyzed those nine ontologies to discover different facts about them, such as the hosting platforms used, the expressivity of the ontologies, and the number of triples and modules.

  • “Learning a Lightweight Representation: First Step Towards Automatic Detection of Multidimensional Relationships between Ideas” by Abderrahmane Khiat (ICSC Research Track, Concise Paper).
    Abstract: Moving ideation from a closed paradigm (companies) to an open one (crowd) yields several benefits: (1) the crowd allows the generation of a large number of ideas, and (2) its heterogeneity increases the potential for obtaining creative ideas. In practice, however, the crowd often fails at generating innovative solutions, leading to duplicates or ideas that reuse each other’s descriptions. Thus, it is practically and economically unfeasible to sift through this large number of ideas to select valuable ones. One promising solution to overcome this issue is finding relationships between idea texts, such as duplicate, generalize, disjoint, alternative solution, etc. Existing approaches either rely on human judgment, which is expensive and requires domain experts, or on automatic approaches that compute similarity only, i.e., a single dimension, and do not consider other relations. The proposed solution is based on sequence-to-sequence learning, which allows the machine to learn a lightweight structural representation that is used next to establish complex relations between ideas. This lightweight structural representation is obtained based on our investigation. We found that ideas contain the following patterns: what the idea is about (e.g. window with heat-sensitive material), how it works (e.g. it lights up) and when it works (e.g. in case of fire). Those extracted patterns are then compared with the corresponding patterns of other ideas to establish relations. Our preliminary investigation shows promising results to learn and leverage such lightweight structural representation in identifying the complex relationship between ideas.
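
To make the tensor idea from the DISE abstract above concrete: an RDF graph can be viewed as a sparse three-way tensor indexed by (subject, predicate, object), and a triple pattern then amounts to selecting coordinates along its modes. The following minimal Python sketch illustrates that encoding on invented data; it is not DISE's code, which works on distributed Spark tensors rather than an in-memory set.

```python
# Illustrative only: encode a tiny RDF graph as a sparse 3-way tensor
# over (subject, predicate, object) indices and evaluate one triple
# pattern against it. The data and dictionaries are invented.

triples = [
    ("ex:Alice", "ex:knows",   "ex:Bob"),
    ("ex:Bob",   "ex:knows",   "ex:Carol"),
    ("ex:Alice", "ex:worksAt", "ex:ACME"),
]

# One integer index per term and tensor mode.
subjects   = {s: i for i, s in enumerate(sorted({t[0] for t in triples}))}
predicates = {p: i for i, p in enumerate(sorted({t[1] for t in triples}))}
objects    = {o: i for i, o in enumerate(sorted({t[2] for t in triples}))}

# Sparse tensor: the set of (i, j, k) coordinates whose value is 1.
tensor = {(subjects[s], predicates[p], objects[o]) for s, p, o in triples}

def match(s=None, p=None, o=None):
    """Evaluate a triple pattern; None plays the role of a variable."""
    return [
        (si, pi, oi)
        for (si, pi, oi) in tensor
        if (s is None or subjects[s] == si)
        and (p is None or predicates[p] == pi)
        and (o is None or objects[o] == oi)
    ]

# ?x ex:knows ?y  ->  all coordinates along the ex:knows slice
inv_s = {i: s for s, i in subjects.items()}
inv_o = {i: o for o, i in objects.items()}
for si, _, oi in match(p="ex:knows"):
    print(inv_s[si], "ex:knows", inv_o[oi])
```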

Paper accepted at ESWA

We are very pleased to announce that our group got a paper accepted for publication in ESWA (Expert Systems with Applications, an international journal). With an impact factor of 4.3, the journal is one of the major venues for intelligent systems and information exchange. The focus of the journal is on exchanging information relating to expert and intelligent systems applied in industry, government, and universities worldwide.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Open budget data are among the most frequently published datasets of the open data ecosystem, intended to improve public administrations and government transparency. Unfortunately, the prospects of analysis across different open budget data remain limited due to schematic and linguistic differences. Budget and spending datasets are published together with descriptive classifications. Various public administrations typically publish the classifications and concepts in their regional languages. These classifications can be exploited to perform a more in-depth analysis, such as comparing similar items across different, cross-lingual datasets. However, in order to enable such analysis, a mapping across the multilingual classifications of datasets is required. In this paper, we present the framework for Interlinking of Heterogeneous Multilingual Open Fiscal DaTA (IOTA). IOTA makes use of machine translation followed by string similarities to map concepts across different datasets. To the best of our knowledge, IOTA is the first framework to offer scalable implementation of string similarity using distributed computing. The results demonstrate the applicability of the proposed multilingual matching, the scalability of the proposed framework, and an in-depth comparison of string similarity measures.
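
As a rough, single-machine illustration of IOTA's "translate, then compare strings" step (the real framework distributes this computation and evaluates several similarity measures), here is a sketch that pairs already-translated concept labels by normalized edit similarity; the labels and threshold are invented:

```python
# Sketch of the matching step: assume concept labels from two fiscal
# datasets were already machine-translated into English, then pair
# concepts whose normalized edit similarity clears a threshold.
from difflib import SequenceMatcher

# Hypothetical, already-translated classification labels.
dataset_a = ["public transport", "primary education", "waste management"]
dataset_b = ["public transportation", "elementary education", "waste disposal"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.7  # invented; IOTA itself compares several measures
for label_a in dataset_a:
    best = max(dataset_b, key=lambda label_b: similarity(label_a, label_b))
    if similarity(label_a, best) >= THRESHOLD:
        print(f"{label_a!r} <-> {best!r} ({similarity(label_a, best):.2f})")
```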

Paper accepted at ICEGOV

We are very pleased to announce that our group got a paper accepted for presentation at ICEGOV (International Conference on Theory and Practice of Electronic Governance). Established in 2007, the conference runs annually and is coordinated by the United Nations University Operating Unit on Policy-Driven Electronic Governance (UNU-EGOV). Part of the United Nations University and headquartered in Guimarães in northern Portugal, UNU-EGOV is a think tank dedicated to electronic governance: a core centre of research, advisory services and training; a bridge between research and public policies; an innovation enhancer; and a solid partner within the UN system and its Member States, with a particular focus on sustainable development, social inclusion and active citizenship.

Here is the pre-print of the accepted paper with its abstract:

Abstract: To improve governance accountability, public administrations are increasingly publishing their open data, which includes budget and spending data. Analyzing these datasets requires both domain and technical expertise. In civil communities, such technical and domain expertise is often not available. Hence, despite the increasing size of the open fiscal datasets being published, the level of analytics done on top of these datasets is still limited. Fortunately, developments in the computer science community enable further progress in data analysis in different domains, such as performing a comparative analysis of open budget and spending data (open fiscal data). This is done by adopting and applying semantics on open fiscal data. In this paper, we demonstrate the feasibility of comparative analysis over linked open fiscal data and devise an approach to perform comparative analysis across datasets from different public administrations. Open fiscal data are cleaned, analyzed, transformed (i.e., semantically lifted), and have their related concept labels connected across different public administrations, so that budget/spending items from related concepts can be queried. Additionally, the growing information on linked open data (e.g., DBpedia) can also be used to provide additional context to the analysis and the query.
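
Once concept labels are linked across administrations, a single SPARQL query can compare budget items from different datasets. The sketch below shows the general idea with rdflib on a toy in-memory graph; all URIs, properties (e.g., ex:amount), and amounts are invented for illustration:

```python
# Toy demonstration: once concepts are linked via skos:exactMatch,
# one SPARQL query compares budget items across two administrations.
# All URIs, properties, and amounts are invented.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .

ex:cityA_education ex:amount 120000 ; skos:exactMatch ex:cityB_education .
ex:cityB_education ex:amount 95000 .
""", format="turtle")

query = """
PREFIX ex:   <http://example.org/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?a ?b ?amountA ?amountB WHERE {
    ?a skos:exactMatch ?b .
    ?a ex:amount ?amountA .
    ?b ex:amount ?amountB .
}
"""
for row in g.query(query):
    print(f"{row.a}: {row.amountA}  vs  {row.b}: {row.amountB}")
```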

Update: The paper has received a best paper award nomination.

Paper accepted at K-CAP 2019

We are very pleased to announce that our group got a paper accepted at K-CAP 2019, the 10th International Conference on Knowledge Capture, which will be held on 19-21 November 2019 in Marina del Rey, California, United States.

The 10th International Conference on Knowledge Capture aims at attracting researchers from diverse areas of Artificial Intelligence, including knowledge representation, knowledge acquisition, the Semantic Web and World Wide Web, intelligent user interfaces for knowledge acquisition and retrieval, innovative query processing and question answering over heterogeneous knowledge bases, novel evaluation paradigms, problem-solving and reasoning, planning, agents, information extraction from text, metadata, tables and other heterogeneous data such as images and videos, machine learning and representation learning, information enrichment and visualization, as well as researchers interested in cyber-infrastructures to foster the publication, retrieval, reuse, and integration of data.

Here is the pre-print of the accepted paper with its abstract:

  • “GizMO — A Customizable Representation Model for Graph-Based Visualizations of Ontologies” by Vitalis Wiens, Steffen Lohmann, and Sören Auer.
    Abstract: Visualizations can support the development, exploration, communication, and sense-making of ontologies. Suitable visualizations, however, are highly dependent on individual use cases and targeted user groups. In this article, we present a methodology that enables customizable definitions for the visual representation of ontologies. The methodology describes visual representations using the OWL annotation mechanisms and separates the visual abstraction into two information layers. The first layer describes the graphical appearance of OWL constructs. The second layer addresses visual properties for conceptual elements from the ontology. Annotation ontologies and a modular architecture enable separation of concerns for individual information layers. Furthermore, the methodology ensures the separation between the ontology and its visualization. We showcase the applicability of the methodology by introducing GizMO, a representation model for graph-based visualizations in the form of node-link diagrams. The graph visualization meta ontology (GizMO) provides five annotation object types that address various aspects of the visualization (e.g., spatial positions, viewport zoom factor, and canvas background color). The practical use of the methodology and GizMO is shown using two applications that indicate the variety of achievable ontology visualizations.
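
As a loose illustration of the annotation mechanism described in the abstract, the snippet below attaches visual properties to an ontology class via annotation-style triples using rdflib. The gizmo: terms are placeholders, not GizMO's actual vocabulary:

```python
# Sketch of the two-layer annotation idea: visual properties attached
# to an ontology element as plain RDF annotations. The gizmo: terms
# are placeholders, not GizMO's actual vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF

EX = Namespace("http://example.org/onto#")
GIZMO = Namespace("http://example.org/gizmo#")  # hypothetical namespace

g = Graph()
g.add((EX.Person, RDF.type, OWL.Class))

# Visual properties for one conceptual element (layer two); a separate
# annotation set could describe the appearance of OWL constructs in
# general (layer one).
g.add((EX.Person, GIZMO.positionX, Literal(120)))
g.add((EX.Person, GIZMO.positionY, Literal(80)))
g.add((EX.Person, GIZMO.backgroundColor, Literal("#ffcc00")))

print(g.serialize(format="turtle"))
```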

Acknowledgment

This work is co-funded by the European Research Council project ScienceGRAPH (Grant agreement #819536). In addition, parts of it evolved in the context of the Fraunhofer Cluster of Excellence “Cognitive Internet Technologies”.


Looking forward to seeing you at K-CAP 2019.

Paper accepted at ODBASE 2019

We are very pleased to announce that our group got a paper accepted at ODBASE 2019, the 18th International Conference on Ontologies, DataBases, and Applications of Semantics, which will be held on 22-23 October 2019 in Rhodes, Greece.

The conference on Ontologies, DataBases, and Applications of Semantics for Large Scale Information Systems (ODBASE’19) provides a forum on the use of ontologies, rules and data semantics in novel applications. Of particular relevance to ODBASE are papers that bridge traditional boundaries between disciplines such as artificial intelligence and the Semantic Web, databases, data science, data analytics and machine learning, human-computer interaction, social networks, distributed and mobile systems, data and information retrieval, knowledge discovery, and computational linguistics.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Question answering systems often have a pipeline architecture that consists of multiple components. A key component in the pipeline is the query generator, which aims to generate a formal query that corresponds to the input natural language question. Even if the entities and relations linked to an underlying knowledge graph are given, finding the corresponding query that captures the true intention of the input question remains a challenging task, due to the complexity of sentence structure or the features that need to be extracted. In this work, we focus on the query generation component and introduce techniques to support a wider range of questions that are currently less represented in the question answering community.
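
The query generation step can be pictured as assembling a formal query from entities and relations that have already been linked to the knowledge graph. The following toy sketch handles only the simplest question shape; the paper's techniques cover far more complex sentence structures, and the URIs are illustrative:

```python
# Minimal sketch of the query generation step: the entities and
# relations are assumed to be linked already; we only assemble a
# query for the simplest question shape. URIs are illustrative.

def generate_query(entity: str, relation: str) -> str:
    return (
        "SELECT ?answer WHERE { "
        f"<{entity}> <{relation}> ?answer . "
        "}"
    )

# "Who is the mayor of Bonn?", after entity and relation linking:
print(generate_query(
    "http://dbpedia.org/resource/Bonn",
    "http://dbpedia.org/ontology/mayor",
))
```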

Acknowledgment

This research was supported by the European Union H2020 project CLEOPATRA (ITN, GA. 812997) as well as by the German Federal Ministry of Education and Research (BMBF) funding for the project SOLIDE (no. 13N14456).


Looking forward to seeing you at ODBASE 2019.

Paper accepted at iiWAS 2019

We are very happy to announce that our group got one paper accepted at iiWAS 2019: The 21st International Conference on Information Integration and Web-based Applications & Services, which will be held on December 2 – 4 in Munich, Germany.

The 21st International Conference on Information Integration and Web-based Applications & Services (iiWAS2019) is a leading international conference for researchers and industry practitioners to share their new ideas, original research results and practical development experiences from all information integration and web-based applications & services related areas.

iiWAS2019 is endorsed by the International Organization for Information Integration and Web-based Applications & Services (@WAS) and will be held from 2-4 December 2019 in Munich, Germany, a city of innovation, technology, art and culture, in conjunction with the 17th International Conference on Advances in Mobile Computing & Multimedia (MoMM2019).

Here is the pre-print of the accepted paper with its abstract: 

  • “Uniform Access to Multiform Data Lakes using Semantic Technologies” by Mohamed Nadjib Mami, Damien Graux, Simon Scerri, Hajira Jabeen, Sören Auer, and Jens Lehmann.
    Abstract: Increasing data volumes have extensively increased application possibilities. However, accessing this data in an ad hoc manner remains an unsolved problem due to the diversity of data management approaches, formats and storage frameworks, resulting in the need to effectively access and process distributed heterogeneous data at scale. For years, Semantic Web techniques have addressed data integration challenges with practical knowledge representation models and ontology-based mappings. Leveraging these techniques, we provide a solution enabling uniform access to large, heterogeneous data sources, without enforcing centralization; thus realizing the vision of a Semantic Data Lake. In this paper, we define the core concepts underlying this vision and the architectural requirements that systems implementing it need to fulfill. Squerall, an example of such a system, is an extensible framework built on top of state-of-the-art Big Data technologies. We focus on Squerall’s distributed query execution techniques and strategies, empirically evaluating its performance throughout its various sub-phases.
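
Stripped of Squerall's ontology-based mapping layer, the underlying "query heterogeneous sources uniformly" idea can be sketched with Spark alone. In the toy example below, two in-memory DataFrames stand in for external heterogeneous sources; the data, schemas, and names are invented:

```python
# Sketch (not Squerall itself): treat two differently-shaped sources
# as Spark DataFrames and query them uniformly with one SQL statement.
from pyspark.sql import Row, SparkSession

spark = SparkSession.builder.appName("data-lake-sketch").getOrCreate()

# Stand-ins for heterogeneous sources (e.g., a CSV file and a
# document store); Squerall would reach them through connectors
# and ontology-based mappings instead.
products = spark.createDataFrame(
    [Row(id=1, name="widget"), Row(id=2, name="gadget")])
reviews = spark.createDataFrame(
    [Row(id=10, product_id=1, stars=4), Row(id=11, product_id=1, stars=5)])

products.createOrReplaceTempView("products")
reviews.createOrReplaceTempView("reviews")

# One uniform query over both sources.
spark.sql("""
    SELECT p.name, COUNT(r.id) AS n_reviews
    FROM products p JOIN reviews r ON p.id = r.product_id
    GROUP BY p.name
""").show()
```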

Acknowledgement
This work is partly supported by the EU H2020 projects BETTER (GA 776280) and QualiChain (GA 822404), and by the ADAPT Centre for Digital Content Technology funded under the SFI Research Centres Programme (Grant 13/RC/2106) and co-funded under the European Regional Development Fund.


Looking forward to seeing you at iiWAS 2019.

Demo and Poster Papers accepted at ISWC 2019

We are very pleased to announce that our group got seven demo/poster papers accepted for presentation at ISWC 2019: the 18th International Semantic Web Conference, which will be held on October 26-30, 2019 in Auckland, New Zealand.

The International Semantic Web Conference (ISWC) is the premier international forum where Semantic Web / Linked Data researchers, practitioners, and industry specialists come together to discuss, advance, and shape the future of semantic technologies on the web, within enterprises, and in the context of public institutions.

Here is the list of the accepted papers with their abstracts:

  • “Querying large-scale RDF datasets using the SANSA framework” by Claus Stadler, Gezim Sejdiu, Damien Graux, and Jens Lehmann.
    Abstract: In this paper, we present Sparklify: a scalable software component for efficient evaluation of SPARQL queries over distributed RDF datasets. In particular, we demonstrate a W3C SPARQL endpoint powered by our SANSA framework’s RDF partitioning system and Apache Spark for querying the DBpedia knowledge base. This work is motivated by the lack of Big Data SPARQL systems that are capable of exposing large-scale heterogeneous RDF datasets via a Web SPARQL endpoint.
  • “How to feed the Squerall with RDF and other data nuts?” by Mohamed Nadjib Mami, Damien Graux, Simon Scerri, Hajira Jabeen, Sören Auer, and Jens Lehmann.
    Abstract: Advances in Data Management methods have resulted in a wide array of storage solutions having varying query capabilities and supporting different data formats. Traditionally, heterogeneous data was transformed off-line into a unique format and migrated to a unique data management system, before being uniformly queried. However, with the increasing amount of heterogeneous data sources, many of which are dynamic, modern applications prefer to access the original fresh data directly. Addressing this requirement, we designed and developed Squerall, a software framework that enables the querying of original large and heterogeneous data on-the-fly without prior data transformation. Squerall is built from the ground up with extensibility in mind, e.g., supporting more data sources. Here, we explain Squerall’s extensibility aspect and demonstrate step-by-step how to add support for RDF data, a new extension to the previously supported range of data sources.
  • “Towards Semantically Structuring GitHub” by Dennis Oliver Kubitza, Matthias Böckmann, and Damien Graux.
    Abstract: With the recent increase of open-source projects, tools have emerged to help developers collaborate. Among these, git has received lots of attention, and various online platforms have been created around this tool, hosting millions of projects. Recently, some of these platforms opened APIs to allow users to query their public databases of open-source projects. Despite the common protocol core, there is currently no common structure one could use to link these sources of information. To tackle this, we propose here the first ontology dedicated to the git protocol and also describe GitHub’s features within it to show how it is extendable to encompass more git-based data sources.
  • “Microbenchmarks for Question Answering Systems Using QaldGen” by Qaiser Mehmood, Abhishek Nadgeri, Muhammad Saleem, Kuldeep Singh, Axel-Cyrille Ngonga Ngomo and Jens Lehmann.
    Abstract: Microbenchmarks are used to test the individual components of given systems. Thus, such benchmarks can provide a more detailed analysis pertaining to the different components of the systems. We present a demo of QaldGen, a framework for generating question samples for micro-benchmarking of Question Answering (QA) systems over Knowledge Graphs (KGs). QaldGen is able to select customized question samples from existing QA datasets. The sampling of questions is carried out by using different clustering techniques. It is flexible enough to select benchmarks of varying sizes and complexities according to user-defined criteria on the most important features to be considered for QA benchmarking. We evaluate the usability of the interface by using the standard system usability scale questionnaire. Our overall usability score of 77.25 (ranked B+) suggests that the online interface is recommendable, easy to use, and well-integrated.
  • “FALCON: An Entity and Relation Linking framework over DBpedia” by Ahmad Sakor, Kuldeep Singh, and Maria Esther Vidal.
    Abstract: We tackle the problem of entity and relation linking and present FALCON, a rule-based tool able to accurately map entities and relations in short texts to resources in a knowledge graph. FALCON resorts to fundamental principles of English morphology (e.g., compounding and headword identification) and performs joint entity and relation linking against a short text. We demonstrate the benefits of the rule-based approach implemented in FALCON on short texts composed of various types of entities. The attendees will observe the behavior of FALCON on the observed limitations of Entity Linking (EL) and Relation Linking (RL) tools. The demo is available at https://labs.tib.eu/falcon/.
  • Demonstration of a Customizable Representation Model for Graph-Based Visualizations of Ontologies – GizMO” by Vitalis Wiens, Mikhail Galkin, Steffen Lohmann, and Sören Auer
    Abstract: Visualizations can facilitate the development, exploration, communication, and sense-making of ontologies. Suitable visualizations, however, are highly dependent on individual use cases and targeted user groups. In this demo, we present a methodology that enables customizable definitions for ontology visualizations. We showcase its applicability by introducing GizMO, a representation model for graph-based visualizations in the form of node-link diagrams. Additionally, we present two applications that operate on the GizMO representation model and enable individual customizations for ontology visualizations.
  • “Predict Missing Links Using PyKEEN” by Mehdi Ali, Charles Tapley Hoyt, Daniel Domingo-Fernandez, and Jens Lehmann.
    Abstract: PyKEEN is a framework that integrates several approaches to compute knowledge graph embeddings (KGEs). We demonstrate the usage of PyKEEN in a biomedical use case, i.e. we trained and evaluated several KGE models on a biological knowledge graph containing genes’ annotations to pathways and pathway hierarchies from well-known databases. We used the best performing model to predict new links and present an evaluation in collaboration with a domain expert.
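
As a toy illustration of the knowledge graph embedding idea behind the PyKEEN demo above (this is not PyKEEN's API), a TransE-style model scores a triple (h, r, t) by how close the vector h + r lands to t:

```python
# Toy TransE-style scoring (illustration only, not PyKEEN's API):
# a triple (h, r, t) is considered plausible when h + r lands close
# to t in the embedding space. Entities and vectors are invented.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
entities = {name: rng.normal(size=dim)
            for name in ["gene_a", "gene_b", "pathway_x"]}
relations = {"part_of": rng.normal(size=dim)}

def score(h: str, r: str, t: str) -> float:
    """Negative L2 distance: higher means more plausible."""
    return -float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

# Rank candidate tails for the query (gene_a, part_of, ?):
for tail in ["pathway_x", "gene_b"]:
    print(tail, round(score("gene_a", "part_of", tail), 3))
```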

Acknowledgement
This work has received funding from the EU Horizon 2020 projects BigDataOcean (GA no. 732310), Boost4.0 (GA no. 780732), SLIPO (GA no. 731581) and QROWD (GA no. 723088).


Looking forward to seeing you at ISWC 2019.

Workshop papers accepted at ECML-PKDD/SoGood 2019

We are very pleased to announce that our group got two papers accepted at the 4th Workshop on Data Science for Social Good.

SoGood is a peer-reviewed workshop that focuses on how Data Science can and does contribute to social good in its widest sense. The workshop has been held annually since 2016 together with the ECML PKDD conference; this year it takes place on 20 September in Würzburg, Germany.

Here are the pre-prints of the accepted papers with their abstracts:

  • “Linking Physicians to Medical Research Results via Knowledge Graph Embeddings and Twitter” by Afshin Sadeghi and Jens Lehmann.
    Abstract: Informing professionals about the latest research results in their field is a particularly important task in health care, since any development in this field directly improves the health status of patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information, and thus it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as research results on Twitter. Our study shows that, using this method, physicians can be informed about new findings in their field, given that they have an account dedicated to their profession.
  • “Improving Access to Science for Social Good” by Mehdi Ali, Sahar Vahdati, Shruti Singh, Sourish Dasgupta, and Jens Lehmann.
    Abstract: One of the major goals of science is to make the world socially a good place to live. The old paradigm of scholarly communication through publishing has generated an enormous amount of heterogeneous data and metadata. However, most scientific results are not easy to discover, in particular those which benefit social good and are also targeted at non-scientific people. In this paper, we showcase a knowledge graph embedding (KGE) based recommendation system to be used by students involved in activities aiming at social good. The recommendation system has been trained on a scholarly knowledge graph, which we constructed. The obtained results highlight that the KGEs successfully encoded the structure of the KG, and therefore, our system could provide valuable recommendations.

Acknowledgement

This study is partially supported by the project MLwin (Maschinelles Lernen mit Wissensgraphen, grant no. 01IS18050F), Cleopatra (grant no. 812997), EPSRC grant EP/M025268/1, the WWTF grant VRG18-013, and LAMBDA (GA no. 809965). The authors gratefully acknowledge financial support from the Federal Ministry of Education and Research of Germany (BMBF), which is funding MLwin, and from the European Union Marie Curie ITN that funds Cleopatra, as well as from Fraunhofer IAIS.

Paper accepted at TPDL 2019

We are very pleased to announce that our group got a paper accepted at TPDL 2019 (23rd International Conference on Theory and Practice of Digital Libraries), which will be held on September 9-12, 2019 at OsloMet – Oslo Metropolitan University in Oslo, Norway.

TPDL is a well-established scientific and technical forum on the broad topic of digital libraries, bringing together researchers, developers, content providers and users of digital libraries and digital content management.

TPDL 2019 attempts to facilitate establishing connections and convergences between diverse research communities, such as Digital Humanities and Information Sciences, that could benefit from (and contribute to) the ecosystems offered by digital libraries and repositories. To become especially useful to these diverse research and practitioner communities, digital libraries need to consider their special needs and requirements for effective data utilization, management, and exploitation.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Recently, semantic data have become more distributed. Available datasets increasingly serve non-technical as well as technical audiences. This is also the case with our EVENTSKG dataset, a comprehensive knowledge graph about scientific events, which serves the entire scientific and library community. A common way to query such data is via SPARQL queries. Non-technical users, however, have difficulties with writing SPARQL queries, because it is a time-consuming and error-prone task, and it requires some expert knowledge. This opens the way for natural language interfaces to tackle this problem by making semantic data more accessible to a wider audience, i.e., not restricted to experts. In this work, we present SPARQL-AG, a front-end that automatically generates and executes SPARQL queries for querying EVENTSKG. SPARQL-AG helps potential semantic data consumers, including non-experts and experts, by generating SPARQL queries, ranging from simple to complex ones, using an interactive web interface. The eminent feature of SPARQL-AG is that users neither need to know the schema of the knowledge graph being queried nor to learn the SPARQL syntax, as SPARQL-AG offers them a familiar and intuitive interface for query generation and execution. It maintains separate clients to query three public SPARQL endpoints when asking for particular entities. The service is publicly available online and has been extensively tested.

Furthermore, we got a poster paper accepted at the Poster & Demo Track.

Here is the accepted poster paper with its abstract:

Abstract: In this work, we tackle the problem of generating comprehensive overviews of research findings in a structured and comparable way. To bring structure to such information, and thus to enable researchers to, e.g., explore domain overviews, we present Aurora, an approach for the automatic unveiling of realm overviews for research artifacts, which generates overviews of research domains and their relevant artifacts. Aurora is a semi-automatic crowd-sourcing workflow that captures such information in the OpenResearch.org semantic wiki. Our evaluation confirms that Aurora, when compared to the current manual approach, reduces the effort for researchers to compile and read survey papers.

Acknowledgment

This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536).

Looking forward to seeing you at TPDL 2019.

Papers accepted at SEMANTiCS 2019

We are very pleased to announce that our group got five papers accepted for presentation at SEMANTiCS 2019: the 15th International Conference on Semantic Systems, which will be held on September 9-12, 2019 in Karlsruhe, Germany.

SEMANTiCS is an established knowledge hub where technology professionals, industry experts, researchers and decision-makers can learn about new technologies, innovations and enterprise implementations in the fields of Linked Data and Semantic AI. Since 2005, the conference series has focused on semantic technologies, which today, together with other methodologies such as NLP and machine learning, form the core of intelligent systems. The conference highlights the benefits of standards-based approaches.

Here is the list of the accepted papers with their abstracts:

Abstract: Over the last two decades, the amount of data which has been created, published and managed using Semantic Web standards, especially via the Resource Description Framework (RDF), has been increasing. As a result, efficient processing of such big RDF datasets has become challenging. Indeed, these processes require both efficient storage strategies and query-processing engines to be able to scale in terms of data size. In this study, we propose a scalable approach to evaluate SPARQL queries over distributed RDF datasets using semantic-based partitioning, implemented inside the state-of-the-art RDF processing framework SANSA. An evaluation of the performance of our approach in processing large-scale RDF datasets is also presented. The preliminary results of the conducted experiments show that our approach can scale horizontally and performs well compared with the previous Hadoop-based system. It is also comparable with the in-memory SPARQL query evaluators when there is less shuffling involved.
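
For intuition about semantic-based partitioning, one common variant groups triples by predicate so that each predicate becomes a narrow two-column table, letting triple patterns with fixed predicates touch only their own partition. A rough PySpark sketch on invented data (not the SANSA implementation itself) follows:

```python
# Rough sketch of one semantic partitioning flavor: group triples by
# predicate so each predicate becomes a two-column table, then answer
# a query by joining only the partitions it touches. Data is invented;
# this is not the SANSA implementation.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("semantic-partitioning").getOrCreate()

triples = spark.createDataFrame(
    [("ex:Alice", "ex:knows",   "ex:Bob"),
     ("ex:Alice", "ex:worksAt", "ex:ACME"),
     ("ex:Bob",   "ex:worksAt", "ex:ACME")],
    ["s", "p", "o"],
)

# One view per predicate: "ex:knows" -> knows(s, o), etc.
for pred in [row.p for row in triples.select("p").distinct().collect()]:
    view_name = pred.split(":")[-1]
    triples.filter(triples.p == pred).select("s", "o") \
           .createOrReplaceTempView(view_name)

# SPARQL-like pattern { ?x ex:knows ?y . ?y ex:worksAt ?z } as a join:
spark.sql("""
    SELECT k.s AS person, w.o AS employer
    FROM knows k JOIN worksAt w ON k.o = w.s
""").show()
```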

Abstract: While the multilingual data on the Semantic Web grows rapidly, the building of multilingual ontologies from monolingual ones is still cumbersome and hampered due to the lack of techniques for cross-lingual ontology enrichment. Cross-lingual ontology enrichment greatly facilitates the semantic interoperability between different ontologies in different natural languages. Achieving such enrichment by human labor is very costly and error-prone. Thus, in this paper, we propose a fully automated ontology enrichment approach (OECM), which builds a multilingual ontology by enriching a monolingual ontology from another one in a different natural language, using a cross-lingual matching technique. OECM selects the best translation among all available translations of ontology concepts based on their semantic similarity with the target ontology concepts. We present a use case of our approach for enriching English Scholarly Communication Ontologies using German and Arabic ontologies from the MultiFarm benchmark. We have compared our results with the results from the Ontology Alignment Evaluation Initiative (OAEI 2018). Our approach has higher precision and recall in comparison to five state-of-the-art approaches. Additionally, we recommend some linguistic corrections in the Arabic ontologies in MultiFarm, which have enhanced our cross-lingual matching results.

Abstract: The disruptive potential of the upcoming digital transformations for the industrial manufacturing domain has led to several reference frameworks and numerous standardization approaches. On the other hand, the Semantic Web community has produced a remarkable amount of work, for instance on data and service descriptions, integration of heterogeneous sources and devices, and AI techniques in distributed systems. These two work streams are, however, mostly unrelated and only briefly regard each other’s requirements, practices and terminology. We contribute to closing this gap by providing the Semantic Asset Administration Shell, an RDF-based representation of the Industrie 4.0 Component. We provide an ontology for the latest data model specification, create an RML mapping, supply resources to validate the RDF entities, and introduce basic reasoning on the Asset Administration Shell data model. Furthermore, we discuss the different assumptions and presentation patterns, and analyze the implications of a semantic representation on the original data. We evaluate the overheads thereby created, and conclude that the semantic lifting is manageable also for restricted or embedded devices and therefore meets the conditions of Industrie 4.0 scenarios.

Abstract: Increasing digitization leads to a constantly growing amount of data in a wide variety of application domains. Data analytics, including in particular machine learning, plays the key role to gain actionable insights from this data in a variety of domains and real-world applications. However, configuration of data analytics workflows that include heterogeneous data sources requires significant data science expertise, which hinders wide adoption of existing data analytics frameworks by non-experts. In this paper we present the Simple-ML framework that adopts semantic technologies, including in particular domain-specific semantic data models and dataset profiles, to support efficient configuration, robustness and reusability of data analytics workflows. We present semantic data models that lay the foundation for the framework development and discuss the data analytics workflows based on these models. Furthermore, we present an example instantiation of the Simple-ML data models for a real-world use case in the mobility application domain and discuss the emerging challenges.

Abstract: In the Big Data era, the amount of digital data is increasing exponentially. Knowledge graphs are gaining attention to handle the variety dimension of Big Data, allowing machines to understand the semantics present in data. For example, knowledge graphs such as STITCH, SIDER, and DrugBank have been developed in the biomedical domain. As the amount of data increases, it is critical to perform data analytics. Interaction network analysis is especially important in knowledge graphs, e.g., to detect drug-target interactions. Having a good target identification approach helps in accelerating and reducing the cost of discovering new medicines. In this work, we propose a machine learning-based approach that combines two inputs: (1) interactions and similarities among entities, and (2) a translation-to-embeddings technique. We focus on the problem of discovering missing links in the data, called link prediction. Our approach, named SimTransE, is able to analyze the drug-target interactions and similarities. Based on this analysis, SimTransE is able to predict new drug-target interactions. We empirically evaluate SimTransE using existing benchmarks and evaluation protocols defined by existing state-of-the-art approaches. Our results demonstrate the good performance of SimTransE in the task of link prediction.

Furthermore, we got 2 demo/poster papers accepted at the Poster & Demo Track.

Here is the list of the accepted poster/demo papers with their abstracts:

Abstract: With the recent trend on blockchain, many users want to know more about the important players of the chain. In this study, we investigate and analyze the Ethereum blockchain network in order to identify the major entities across the transaction network. By leveraging the rich data available through Alethio’s platform in the form of RDF triples, we learn about the Hubs and Authorities of the Ethereum transaction network. Alethio uses SANSA for efficient reading and processing of such large-scale RDF data (transactions on the Ethereum blockchain) in order to perform analytics, e.g., finding top accounts or typical behavior patterns of exchanges’ deposit wallets, and more.

Abstract: Open Data portals often struggle to provide release features (i.e., stable versioning, up-to-date download links, rich metadata descriptions) for their datasets. By this means, wide adoption of publicly available data collections is hindered, since consuming applications cannot access fresh data sources or might break due to data quality issues. While there exists a variety of tools to efficiently control release processes in software development, the management of dataset releases is not as clear. This paper proposes a deployment pipeline for efficient dataset releases that is based on automated enrichment of DCAT/DataID metadata and is a first step towards efficient deployment pipelining for Open Data publishing.  
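
The metadata-enrichment step of such a pipeline can be pictured as programmatically stamping each release with fresh DCAT descriptions. Below is a small rdflib sketch with invented identifiers and values; DataID would add further detail beyond plain DCAT:

```python
# Sketch of the enrichment step: stamp a dataset release with DCAT
# metadata as an automated pipeline might. Identifiers and values
# are invented for this example.
from datetime import date
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
DCT = Namespace("http://purl.org/dc/terms/")
EX = Namespace("http://example.org/releases/")

g = Graph()
release = EX["my-dataset-2019-09-12"]
download = EX["my-dataset-2019-09-12-dist"]

g.add((release, RDF.type, DCAT.Dataset))
g.add((release, DCT.title, Literal("My Dataset, release 2019-09-12")))
g.add((release, DCT.issued, Literal(date(2019, 9, 12))))
g.add((release, DCAT.distribution, download))
g.add((download, RDF.type, DCAT.Distribution))
g.add((download, DCAT.downloadURL,
       URIRef("http://example.org/dl/my-dataset-2019-09-12.nt")))

print(g.serialize(format="turtle"))
```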

Acknowledgment

This work was partially funded by the EU Horizon 2020 projects Boost4.0 (GA no. 780732), BigDataOcean (GA no. 732310), SLIPO (GA no. 731581) and QROWD (GA no. 723088), by the Federal Ministry of Transport and Digital Infrastructure (BMVI) for the LIMBO project (GA nos. 19F2029A and 19F2029G), and by the Simple-ML project.

Looking forward to seeing you at SEMANTiCS 2019.