PhD Viva (Gezim Sejdiu): Efficient Distributed In-Memory Processing of RDF Datasets

Last week, on Tuesday, 29 September 2020, I successfully defended my PhD thesis entitled “Efficient Distributed In-Memory Processing of RDF Datasets”. The main objective of this thesis is to lay foundations for efficient algorithms performing analytics, i.e. exploration, quality assessment, and querying over semantic knowledge graphs at a scale that has not been possible before.

Slides

Below is the thesis abstract, with references to the main papers the work is based on (see https://gezimsejdiu.github.io//publications/ for the complete list of publications).

Abstract

Over the past decade, vast amounts of machine-readable structured information have become available through the automation of research processes as well as the increasing popularity of knowledge graphs and semantic technologies. Today, we count more than 10,000 datasets made available online following Semantic Web standards. A major and yet unsolved challenge that research faces today is to perform scalable analysis of large-scale knowledge graphs in order to facilitate applications in various domains including life sciences, publishing, and the internet of things.
The main objective of this thesis is to lay foundations for efficient algorithms performing analytics, i.e. exploration, quality assessment, and querying over semantic knowledge graphs at a scale that has not been possible before.
First, we propose a novel approach for statistical calculations of large RDF datasets [1], which scales out to clusters of machines.
In particular, we describe the first distributed in-memory approach for computing 32 different statistical criteria for RDF datasets using Apache Spark.
Many applications, such as data integration, search, and interlinking, may take full advantage of the data when having a priori statistical information about its internal structure and coverage. However, such applications may suffer from low-quality results and may not be able to take full advantage of the data when its size goes beyond the capacity of the available resources.
Thus, we introduce a distributed approach of quality assessment of large RDF datasets [2]. It is the first distributed, in-memory approach for computing different quality metrics for large RDF datasets using Apache Spark. We also provide a quality assessment pattern that can be used to generate new scalable metrics that can be applied to big data.
Based on the knowledge of the internal statistics of a dataset and its quality, users typically want to query and retrieve large amounts of information.
As a result, it has become difficult to efficiently process these large RDF datasets.
Indeed, these processes require both efficient storage strategies and query-processing engines to be able to scale in terms of data size.
Therefore, we propose a scalable approach [3, 4] to evaluate SPARQL queries over distributed RDF datasets by translating SPARQL queries into Spark executable code.
We conducted several empirical evaluations to assess the scalability, effectiveness, and efficiency of our proposed approaches.
More importantly, various use cases, i.e. Ethereum analysis, Mining Big Data Logs, and Scalable Integration of POIs, have been developed and leverage our approach.
The empirical evaluations and concrete applications provide evidence that our methodology and techniques proposed during this thesis help to effectively analyze and process large-scale RDF datasets.
All the approaches proposed during this thesis are integrated into the larger SANSA framework [5].
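
As an illustration of the distributed statistics computation described in [1], here is a minimal sketch in Scala on Apache Spark. It is not the SANSA/DistLODStats implementation: the input path is hypothetical, the N-Triples parsing is a naive whitespace split, and only two of the 32 criteria (distinct subjects and per-predicate usage counts) are computed.

    import org.apache.spark.sql.SparkSession

    object RdfStatsSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("RDF statistics sketch").getOrCreate()

        // Naive line-based parse of an N-Triples file into (subject, predicate, object).
        // A real system such as SANSA would use a proper RDF parser instead.
        val triples = spark.sparkContext
          .textFile("hdfs:///data/dataset.nt") // hypothetical input path
          .filter(line => line.trim.nonEmpty && !line.startsWith("#"))
          .map { line =>
            val parts = line.split("\\s+", 3)
            (parts(0), parts(1), parts(2).stripSuffix(" ."))
          }

        // Statistic 1: number of distinct subjects.
        val distinctSubjects = triples.map(_._1).distinct().count()

        // Statistic 2: usage count per predicate, highest first.
        val predicateUsage = triples.map(t => (t._2, 1L)).reduceByKey(_ + _).sortBy(-_._2)

        println(s"Distinct subjects: $distinctSubjects")
        predicateUsage.take(10).foreach { case (p, n) => println(s"$p -> $n") }

        spark.stop()
      }
    }

Because both statistics reduce to standard map/reduce operations over the triple RDD, they scale out with the number of Spark executors, which is the core idea behind distributing the computation of the 32 criteria.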

References

[1]. Gezim Sejdiu; Ivan Ermilov; Jens Lehmann; and Mohamed Nadjib Mami, “DistLODStats: Distributed Computation of RDF Dataset Statistics,” in Proceedings of 17th International Semantic Web Conference (ISWC), 2018.
[2]. Gezim Sejdiu; Anisa Rula; Jens Lehmann; and Hajira Jabeen, “A Scalable Framework for Quality Assessment of RDF Datasets,” in Proceedings of 18th International Semantic Web Conference (ISWC), 2019.
[3]. Claus Stadler; Gezim Sejdiu; Damien Graux; and Jens Lehmann, “Sparklify: A Scalable Software Component for Efficient Evaluation of SPARQL Queries over Distributed RDF Datasets,” in Proceedings of 18th International Semantic Web Conference (ISWC), 2019.
[4]. Gezim Sejdiu; Damien Graux; Imran Khan; Ioanna Lytra; Hajira Jabeen; and Jens Lehmann, “Towards A Scalable Semantic-based Distributed Approach for SPARQL query evaluation,” 15th International Conference on Semantic Systems (SEMANTiCS), Research & Innovation, 2019.
[5]. Jens Lehmann; Gezim Sejdiu; Lorenz Bühmann; Patrick Westphal; Claus Stadler; Ivan Ermilov; Simon Bin; Nilesh Chakraborty; Muhammad Saleem; Axel-Cyrille Ngonga Ngomo; and Hajira Jabeen, “Distributed Semantic Analytics using the SANSA Stack,” in Proceedings of 16th International Semantic Web Conference – Resources Track (ISWC’2017), 2017.

Paper accepted at K-Cap 2019

We are very pleased to announce that our group got a paper accepted at K-CAP 2019: the 10th International Conference on Knowledge Capture, which will be held on 19-21 November 2019 in Marina del Rey, California, United States.

The 10th International Conference on Knowledge Capture aims to attract researchers from diverse areas of Artificial Intelligence, including knowledge representation, knowledge acquisition, Semantic and World Wide Web, intelligent user interfaces for knowledge acquisition and retrieval, innovative query processing and question answering over heterogeneous knowledge bases, novel evaluation paradigms, problem-solving and reasoning, planning, agents, information extraction from text, metadata, tables and other heterogeneous data such as images and videos, machine learning and representation learning, information enrichment and visualization, as well as researchers interested in cyber-infrastructures to foster the publication, retrieval, reuse, and integration of data.

Here is the pre-print of the accepted paper with its abstract:

  • “GizMO – A Customizable Representation Model for Graph-Based Visualizations of Ontologies” by Vitalis Wiens, Steffen Lohmann, and Sören Auer.
    Abstract: Visualizations can support the development, exploration, communication, and sense-making of ontologies. Suitable visualizations, however, are highly dependent on individual use cases and targeted user groups. In this article, we present a methodology that enables customizable definitions for the visual representation of ontologies. The methodology describes visual representations using the OWL annotation mechanisms and separates the visual abstraction into two information layers. The first layer describes the graphical appearance of OWL constructs. The second layer addresses visual properties for conceptual elements from the ontology. Annotation ontologies and a modular architecture enable separation of concerns for individual information layers. Furthermore, the methodology ensures the separation between the ontology and its visualization. We showcase the applicability of the methodology by introducing GizMO, a representation model for graph-based visualizations in the form of node-link diagrams. The graph visualization meta ontology (GizMO) provides five annotation object types that address various aspects of the visualization (e.g., spatial positions, viewport zoom factor, and canvas background color). The practical use of the methodology and GizMO is shown using two applications that indicate the variety of achievable ontology visualizations.

Acknowledgment

This work is co-funded by the European Research Council project ScienceGRAPH (Grant agreement #819536). In addition, parts of it evolved in the context of the Fraunhofer Cluster of Excellence “Cognitive Internet Technologies”.


Looking forward to seeing you at K-CAP 2019.

Paper accepted at ODBASE 2019

We are very pleased to announce that our group got a paper accepted at ODBASE 2019: the 18th International Conference on Ontologies, DataBases, and Applications of Semantics, which will be held on 22-23 October 2019 in Rhodes, Greece.

The conference on Ontologies, DataBases, and Applications of Semantics for Large Scale Information Systems (ODBASE’19) provides a forum on the use of ontologies, rules and data semantics in novel applications. Of particular relevance to ODBASE are papers that bridge traditional boundaries between disciplines such as artificial intelligence and the Semantic Web, databases, data science, data analytics and machine learning, human-computer interaction, social networks, distributed and mobile systems, data and information retrieval, knowledge discovery, and computational linguistics.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Question answering systems often have a pipeline architecture that consists of multiple components. A key component in the pipeline is the query generator, which aims to generate a formal query that corresponds to the input natural language question. Even if the linked entities and relations to an underlying knowledge graph are given, finding the corresponding query that captures the true intention of the input question still remains a challenging task, due to the complexity of sentence structure or the features that need to be extracted. In this work, we focus on the query generation component and introduce techniques to support a wider range of questions that are currently less represented in the community of question answering.

Acknowledgment

This research was supported by the European Union H2020 project CLEOPATRA (ITN, GA. 812997) as well as by the German Federal Ministry of Education and Research (BMBF) funding for the project SOLIDE (no. 13N14456).


Looking forward to seeing you at ODBASE 2019.

Paper accepted at iiWAS 2019

We are very happy to announce that our group got one paper accepted at iiWAS 2019: The 21st International Conference on Information Integration and Web-based Applications & Services, which will be held on December 2 – 4 in Munich, Germany.

The 21st International Conference on Information Integration and Web-based Applications & Services (iiWAS2019) is a leading international conference for researchers and industry practitioners to share their new ideas, original research results and practical development experiences from all information integration and web-based applications & services related areas.

iiWAS2019 is endorsed by the International Organization for Information Integration and Web-based Applications & Services (@WAS), and will be held from 2-4 December 2019 in Munich, Germany, the city of innovation, technology, art and culture, in conjunction with the 17th International Conference on Advances in Mobile Computing & Multimedia (MoMM2019).

Here is the pre-print of the accepted paper with its abstract: 

  • “Uniform Access to Multiform Data Lakes using Semantic Technologies” by Mohamed Nadjib Mami, Damien Graux, Simon Scerri, Hajira Jabeen, Sören Auer, and Jens Lehmann.
  • Abstract:  Increasing data volumes have extensively increased application possibilities. However, accessing this data in an ad hoc manner remains an unsolved problem due to the diversity of data management approaches, formats and storage frameworks, resulting in the need to effectively access and process distributed heterogeneous data at scale. For years, Semantic Web techniques have addressed data integration challenges with practical knowledge representation models and ontology-based mappings. Leveraging these techniques, we provide a solution enabling uniform access to large, heterogeneous data sources, without enforcing centralization; thus realizing the vision of a Semantic Data Lake. In this paper, we define the core concepts underlying this vision and the architectural requirements that systems implementing it need to fulfill. Squerall, an example of such a system, is an extensible framework built on top of state-of-the-art Big Data technologies. We focus on Squerall’s distributed query execution techniques and strategies, empirically evaluating its performance throughout its various sub-phases.

Acknowledgement
This work is partly supported by the EU H2020 projects BETTER (GA 776280) and QualiChain (GA 822404), and by the ADAPT Centre for Digital Content Technology funded under the SFI Research Centres Programme (Grant 13/RC/2106) and co-funded under the European Regional Development Fund.


Looking forward to seeing you at iiWAS 2019.

Demo and Poster Papers accepted at ISWC 2019

We are very pleased to announce that our group got 7 demo/poster papers accepted for presentation at ISWC 2019: the 18th International Semantic Web Conference, which will be held on October 26-30, 2019, in Auckland, New Zealand.

The International Semantic Web Conference (ISWC) is the premier international forum where Semantic Web / Linked Data researchers, practitioners, and industry specialists come together to discuss, advance, and shape the future of semantic technologies on the web, within enterprises and in the context of public institutions.

Here is the list of the accepted papers with their abstract:

  • “Querying large-scale RDF datasets using the SANSA framework” by Claus Stadler, Gezim Sejdiu, Damien Graux, and Jens Lehmann (a minimal client-side query sketch follows after this list).
    Abstract: In this paper, we present Sparklify: a scalable software component for efficient evaluation of SPARQL queries over distributed RDF datasets. In particular, we demonstrate a W3C SPARQL endpoint powered by our SANSA framework’s RDF partitioning system and Apache Spark for querying the DBpedia knowledge base. This work is motivated by the lack of Big Data SPARQL systems that are capable of exposing large-scale heterogeneous RDF datasets via a Web SPARQL endpoint.
  • “How to feed the Squerall with RDF and other data nuts?” by Mohamed Nadjib Mami, Damien Graux, Simon Scerri, Hajira Jabeen, Sören Auer, and Jens Lehmann.
    Abstract: Advances in Data Management methods have resulted in a wide array of storage solutions having varying query capabilities and supporting different data formats. Traditionally, heterogeneous data was transformed off-line into a unique format and migrated to a unique data management system, before being uniformly queried. However, with the increasing amount of heterogeneous data sources, many of which are dynamic,  modern applications prefer accessing directly the original fresh data. Addressing this requirement, we designed and developed Squerall, a software framework that enables the querying of original large and heterogeneous data on-the-fly without prior data transformation. Squerall is built from the ground up with extensibility in consideration, e.g., supporting more data sources. Here, we explain Squerall’s extensibility aspect and demonstrate step-by-step how to add support for RDF data, a new extension to the previously supported range of data sources.
  • “Towards Semantically Structuring GitHub” by Dennis Oliver Kubitza, Matthias Böckmann, and Damien Graux.
    Abstract: With the recent increase of open-source projects, tools have emerged to enable developers collaborating. Among these, git has received lots of attention and various on-line platforms have been created around this tool, hosting millions of projects. Recently, some of these platforms opened APIs to allow users questioning their public databases of open-source projects. Despite the common protocol core, there are for now no common structures someone could use to link those sources of information. To tackle this, we propose here the first ontology dedicated to the git protocol and also describe GitHub’s features within it to show how it is extendable to encompass more git-based data sources.
  • “Microbenchmarks for Question Answering Systems Using QaldGen” by Qaiser Mehmood, Abhishek Nadgeri, Muhammad Saleem, Kuldeep Singh, Axel-Cyrille Ngonga Ngomo, and Jens Lehmann.
    Abstract: Microbenchmarks are used to test the individual components of the given systems. Thus, such benchmarks can provide a more detailed analysis pertaining to the different components of the systems. We present a demo of QaldGen, a framework for generating question samples for micro-benchmarking of Question Answering (QA) systems over Knowledge Graphs (KGs). QaldGen is able to select customized question samples from existing QA datasets. The sampling of questions is carried out by using different clustering techniques. It is flexible enough to select benchmarks of varying sizes and complexities according to user-defined criteria on the most important features to be considered for QA benchmarking. We evaluate the usability of the interface by using the standard system usability scale questionnaire. Our overall usability score of 77.25 (ranked B+) suggests that the online interface is recommendable, easy to use, and well-integrated.
  • FALCON: An Entity and Relation Linking framework over DBpediaby Ahmad Sakor, Kuldeep Singh,  Maria Esther Vidal.
    Abstract: We tackle the problem of entity and relation linking and present FALCON, a rule-based tool able to accurately map entities and relations in short texts to resources in a knowledge graph. FALCON resorts to fundamental principles of the English morphology (e.g., compounding and headword identification) and performs joint entity and relation linking against a short text. We demonstrate the benefits of the rule-based approach implemented in FALCON on short texts composed of various types of entities. The attendees will observe the behavior of FALCON on the observed limitations of Entity Linking (EL) and Relation Linking (RL) tools. The demo is available at https://labs.tib.eu/falcon/.
  • “Demonstration of a Customizable Representation Model for Graph-Based Visualizations of Ontologies – GizMO” by Vitalis Wiens, Mikhail Galkin, Steffen Lohmann, and Sören Auer.
    Abstract: Visualizations can facilitate the development, exploration, communication, and sense-making of ontologies. Suitable visualizations, however, are highly dependent on individual use cases and targeted user groups. In this demo, we present a methodology that enables customizable definitions for ontology visualizations. We showcase its applicability by introducing GizMO, a representation model for graph-based visualizations in the form of node-link diagrams. Additionally, we present two applications that operate on the GizMO representation model and enable individual customizations for ontology visualizations.
  • “Predict Missing Links Using PyKEEN” by Mehdi Ali, Charles Tapley Hoyt, Daniel Domingo-Fernandez, and Jens Lehmann.
    Abstract: PyKEEN is a framework, which integrates several approaches to compute knowledge graph embeddings (KGEs). We demonstrate the usage of PyKEEN in a biomedical use case, i.e. we trained and evaluated several KGE models on a biological knowledge graph containing genes’ annotations to pathways and pathway hierarchies from well-known databases. We used the best performing model to predict new links and present an evaluation in collaboration with a domain expert.
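
As referenced in the Sparklify entry above, a Sparklify-backed endpoint is exposed as a standard W3C SPARQL endpoint, so any SPARQL-over-HTTP client can talk to it. The following is a minimal sketch in Scala using Apache Jena ARQ; the endpoint URL and the query are illustrative assumptions, not taken from the paper.

    import org.apache.jena.query.{QueryExecutionFactory, QueryFactory}

    object SparqlClientSketch {
      def main(args: Array[String]): Unit = {
        val endpoint = "http://localhost:7531/sparql" // hypothetical endpoint URL

        // Illustrative query: the ten most frequently used predicates.
        val query = QueryFactory.create(
          """SELECT ?p (COUNT(*) AS ?cnt)
            |WHERE { ?s ?p ?o }
            |GROUP BY ?p
            |ORDER BY DESC(?cnt)
            |LIMIT 10""".stripMargin)

        val exec = QueryExecutionFactory.sparqlService(endpoint, query)
        try {
          val results = exec.execSelect()
          while (results.hasNext) {
            val row = results.next()
            println(s"${row.get("p")} used ${row.get("cnt")} times")
          }
        } finally {
          exec.close()
        }
      }
    }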

Acknowledgement
This work has received funding from the EU Horizon 2020 projects BigDataOcean (GA no. 732310), Boost4.0 (GA no. 780732), SLIPO (GA no. 731581) and QROWD (GA no. 723088).


Looking forward to seeing you at ISWC 2019.

Workshop papers accepted at ECML-PKDD/SoGood 2019

We are very pleased to announce that our group got 2 papers accepted at the 4th Workshop on Data Science for Social Good.

SoGood is a peer-reviewed workshop that focuses on how Data Science can and does contribute to social good in its widest sense. The workshop has been held yearly since 2016 together with the ECML PKDD conference; this year it takes place on 20 September in Würzburg, Germany.

Here is the pre-print of the accepted papers with their abstract:

  • Linking Physicians to Medical Research Results via Knowledge Graph Embeddings and Twitter by Afshin Sadeghi and Jens Lehmann.
    Abstract: Informing professionals about the latest research results in their field is a particularly important task in the field of health care, since any development in this field directly improves the health status of the patients. Meanwhile, social media is an infrastructure that allows public instant sharing of information, thus it has recently become popular in medical applications. In this study, we apply Multi Distance Knowledge Graph Embeddings (MDE) to link physicians and surgeons to the latest medical breakthroughs that are shared as the research results on Twitter. Our study shows that using this method physicians can be informed about the new findings in their field given that they have an account dedicated to their profession. 
  • Improving Access to Science for Social Good by Mehdi Ali, Sahar Vahdati, Shruti Singh, Sourish Dasgupta, and Jens Lehmann.
    Abstract: One of the major goals of science is to make the world socially a good place to live. The old paradigm of scholarly communication through publishing has generated enormous amount of heterogeneous data and metadata. However, most scientific results are not easy to discover, in particular those results which benefit social good and are also targeted at non-scientific people. In this paper, we showcase a knowledge graph embedding (KGE) based recommendation system to be used by students involved in activities aiming at social good. The recommendation system has been trained on a scholarly knowledge graph, which we constructed. The obtained results highlight that the KGEs successfully encoded the structure of the KG, and therefore, our system could provide valuable recommendations.

Acknowledgement

This study is partially supported by the project MLwin (Maschinelles Lernen mit Wissensgraphen, grant no. 01IS18050F), Cleopatra (grant no. 812997), EPSRC grant EP/M025268/1, the WWTF grant VRG18-013, and LAMBDA (GA no. 809965). The authors gratefully acknowledge financial support from the Federal Ministry of Education and Research of Germany (BMBF), which is funding MLwin, and the European Union Marie Curie ITN that funds Cleopatra, as well as Fraunhofer IAIS.

Paper accepted at TPDL 2019

We are very pleased to announce that our group got a paper accepted at TPDL 2019 (23rd International Conference on Theory and Practice of Digital Libraries), which will be held on September 9-12, 2019, at OsloMet – Oslo Metropolitan University, Oslo, Norway.

TPDL is a well-established scientific and technical forum on the broad topic of digital libraries, bringing together researchers, developers, content providers and users in digital libraries and digital content management.

TPDL 2019 attempts to facilitate establishing connections and convergences between diverse research communities such as Digital Humanities, Information Sciences and others that could benefit from (and contribute to) ecosystems offered by digital libraries and repositories. To become especially useful to the diverse research and practitioner communities, digital libraries need to consider special needs and requirements for effective data utilization, management, and exploitation.

Here is the pre-print of the accepted paper with its abstract:

Abstract: Recently, semantic data have become more distributed. Available datasets increasingly serve non-technical as well as technical audiences. This is also the case with our EVENTSKG dataset, a comprehensive knowledge graph about scientific events, which serves the entire scientific and library community. A common way to query such data is via SPARQL queries. Non-technical users, however, have difficulties with writing SPARQL queries, because it is a time-consuming and error-prone task, and it requires some expert knowledge. This opens the way to natural language interfaces to tackle this problem by making semantic data more accessible to a wider audience, i.e., not restricted to experts. In this work, we present SPARQL-AG, a front-end that automatically generates and executes SPARQL queries for querying EVENTSKG. SPARQL-AG helps potential semantic data consumers, including non-experts and experts, by generating SPARQL queries, ranging from simple to complex ones, using an interactive web interface. The eminent feature of SPARQL-AG is that users neither need to know the schema of the knowledge graph being queried nor to learn the SPARQL syntax, as SPARQL-AG offers them a familiar and intuitive interface for query generation and execution. It maintains separate clients to query three public SPARQL endpoints when asking for particular entities. The service is publicly available online and has been extensively tested.

Furthermore, we got a poster paper accepted at the Poster & Demo Track.

Here is the accepted poster paper with its abstract:

Abstract: In this work, we tackle the problem of generating comprehensive overviews of research findings in a structured and comparable way. To bring structure to such information and thus to enable researchers to, e.g., explore domain overviews, we present an approach for automatic unveiling of realm overviews for research artifacts (Aurora), an approach to generate overviews of research domains and their relevant artifacts. Aurora is a semi-automatic crowd-sourcing workflow that captures such information into the OpenResearch.org semantic wiki. Our evaluation confirms that Aurora, when compared to the current manual approach, reduces the effort for researchers to compile and read survey papers.


Acknowledgment

This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536).

Looking forward to seeing you at TPDL 2019.

Papers accepted at SEMANTiCS 2019

We are very pleased to announce that our group got 5 papers accepted for presentation at SEMANTiCS 2019: the 15th International Conference on Semantic Systems, which will be held on September 9-12, 2019, in Karlsruhe, Germany.

SEMANTiCS is an established knowledge hub where technology professionals, industry experts, researchers and decision-makers can learn about new technologies, innovations and enterprise implementations in the fields of Linked Data and Semantic AI. Since 2005, the conference series has focused on semantic technologies, which today are, together with other methodologies such as NLP and machine learning, the core of intelligent systems. The conference highlights the benefits of standards-based approaches.

Here is the list of the accepted papers with their abstract:

Abstract: Over the last two decades, the amount of data which has been created, published and managed using Semantic Web standards and especially via Resource Description Framework (RDF) has been increasing. As a result, efficient processing of such big RDF datasets has become challenging. Indeed, these processes require both efficient storage strategies and query-processing engines to be able to scale in terms of data size. In this study, we propose a scalable approach to evaluate SPARQL queries over distributed RDF datasets using a semantic-based partitioning, implemented inside the state-of-the-art RDF processing framework SANSA. An evaluation of the performance of our approach in processing large-scale RDF datasets is also presented. The preliminary results of the conducted experiments show that our approach can scale horizontally and perform well as compared with the previous Hadoop-based system. It is also comparable with the in-memory SPARQL query evaluators when there is less shuffling involved.

Abstract: While the multilingual data on the Semantic Web grows rapidly, the building of multilingual ontologies from monolingual ones is still cumbersome and hampered due to the lack of techniques for cross-lingual ontology enrichment. Cross-lingual ontology enrichment greatly facilitates the semantic interoperability between different ontologies in different natural languages. Achieving such enrichment by human labor is very costly and error-prone. Thus, in this paper, we propose a fully automated ontology enrichment approach (OECM), which builds a multilingual ontology by enriching a monolingual ontology from another one in a different natural language, using a cross-lingual matching technique. OECM selects the best translation among all available translations of ontology concepts based on their semantic similarity with the target ontology concepts. We present a use case of our approach for enriching English Scholarly Communication Ontologies using German and Arabic ontologies from the MultiFarm benchmark. We have compared our results with the results from the Ontology Alignment Evaluation Initiative (OAEI 2018). Our approach has higher precision and recall in comparison to five state-of-the-art approaches. Additionally, we recommend some linguistic corrections in the Arabic ontologies in MultiFarm, which have enhanced our cross-lingual matching results.

Abstract: The disruptive potential of the upcoming digital transformations for the industrial manufacturing domain has led to several reference frameworks and numerous standardization approaches. On the other hand, the Semantic Web community has elaborated remarkable amounts of work for instance on data and service description, integration of heterogeneous sources and devices, and AI techniques in distributed systems. These two work streams are, however, mostly unrelated and only briefly regard the opposite requirements, practices and terminology. We contribute to this gap by providing the Semantic Asset Administration Shell, an RDF-based representation of the Industrie 4.0 Component. We provide an ontology for the latest data model specification, create an RML mapping, supply resources to validate the RDF entities and introduce basic reasoning on the Asset Administration Shell data model. Furthermore, we discuss the different assumptions and presentation patterns, and analyze the implications of a semantic representation on the original data. We evaluate the thereby created overheads, and conclude that the semantic lifting is manageable also for restricted or embedded devices and therefore meets the conditions of Industrie 4.0 scenarios.

Abstract: Increasing digitization leads to a constantly growing amount of data in a wide variety of application domains. Data analytics, including in particular machine learning, plays the key role to gain actionable insights from this data in a variety of domains and real-world applications. However, configuration of data analytics workflows that include heterogeneous data sources requires significant data science expertise, which hinders wide adoption of existing data analytics frameworks by non-experts. In this paper we present the Simple-ML framework that adopts semantic technologies, including in particular domain-specific semantic data models and dataset profiles, to support efficient configuration, robustness and reusability of data analytics workflows. We present semantic data models that lay the foundation for the framework development and discuss the data analytics workflows based on these models. Furthermore, we present an example instantiation of the Simple-ML data models for a real-world use case in the mobility application domain and discuss the emerging challenges.

Abstract: In the Big Data era, the amount of digital data is increasing exponentially. Knowledge graphs are gaining attention to handle the variety dimension of Big Data, allowing machines to understand the semantics present in data. For example, knowledge graphs such as STITCH, SIDER, and DrugBank have been developed in the Biomedical Domain. As the amount of data increases, it is critical to perform data analytics. Interaction network analysis is especially important in knowledge graphs, e.g., to detect drug-target interactions. Having a good target identification approach helps in accelerating and reducing the cost of discovering new medicines. In this work, we propose a machine learning-based approach that combines two inputs: (1) interactions and similarities among entities, and (2) translation to embeddings technique. We focus on the problem of discovering missing links in the data, called link prediction. Our approach, named SimTransE, is able to analyze the drug-target interactions and similarities. Based on this analysis, SimTransE is able to predict new drug-target interactions. We empirically evaluate SimTransE using existing benchmarks and evaluation protocols defined by existing state-of-the-art approaches. Our results demonstrate the good performance of SimTransE in the task of link prediction.
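
As a rough illustration of the translation-based scoring idea that SimTransE builds on, the sketch below computes plain TransE scores in Scala; this is not the SimTransE method itself, and the three-dimensional embeddings are made up for the example.

    // Plain Scala, no external dependencies.
    object TransEScoreSketch {
      type Vec = Array[Double]

      // TransE scores a triple (h, r, t) by how well h + r ≈ t holds:
      // a lower L2 distance means the link is more plausible.
      def score(h: Vec, r: Vec, t: Vec): Double =
        math.sqrt(h.indices.map(i => { val d = h(i) + r(i) - t(i); d * d }).sum)

      def main(args: Array[String]): Unit = {
        val drug      = Array(0.1, 0.4, -0.2)  // hypothetical drug embedding
        val interacts = Array(0.3, -0.1, 0.5)  // hypothetical relation embedding
        val targetA   = Array(0.4, 0.3, 0.3)   // close to drug + interacts
        val targetB   = Array(-0.9, 0.8, -0.7) // far away

        println(f"score(drug, interacts, targetA) = ${score(drug, interacts, targetA)}%.3f")
        println(f"score(drug, interacts, targetB) = ${score(drug, interacts, targetB)}%.3f")
      }
    }

In link prediction, candidate triples are ranked by such scores; as described in the abstract, SimTransE additionally combines entity similarities with the embedding-based analysis.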

Furthermore, we got 2 demo/poster papers accepted at the Poster & Demo Track.

Here is the list of the accepted poster/demo papers with their abstract:

Abstract: With the recent trend on blockchain, many users want to know more about the important players of the chain. In this study, we investigate and analyze the Ethereum blockchain network in order to identify the major entities across the transaction network. By leveraging the rich data available through Alethio’s platform in the form of RDF triples, we learn about the Hubs and Authorities of the Ethereum transaction network. Alethio uses SANSA for efficient reading and processing of such large-scale RDF data (transactions on the Ethereum blockchain) in order to perform analytics, e.g. finding top accounts, or typical behavior patterns of exchanges’ deposit wallets and more.
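
Hubs and authorities are the quantities computed by Kleinberg's HITS algorithm. The following self-contained Scala sketch runs a few HITS iterations over a toy graph of account-to-account transfers; the accounts and edges are made up, and this is unrelated to Alethio's or SANSA's actual code.

    object HitsSketch {
      def main(args: Array[String]): Unit = {
        // Directed edges: sender -> receiver (hypothetical accounts).
        val edges = Seq("a" -> "b", "a" -> "c", "b" -> "c", "d" -> "c", "c" -> "a")
        val nodes = (edges.map(_._1) ++ edges.map(_._2)).distinct

        var hub  = nodes.map(_ -> 1.0).toMap
        var auth = nodes.map(_ -> 1.0).toMap

        for (_ <- 1 to 20) {
          // Authority score: sum of hub scores of incoming neighbours.
          auth = nodes.map(n => n -> edges.filter(_._2 == n).map(e => hub(e._1)).sum).toMap
          // Hub score: sum of authority scores of outgoing neighbours.
          hub = nodes.map(n => n -> edges.filter(_._1 == n).map(e => auth(e._2)).sum).toMap
          // Normalise so the scores stay bounded.
          val aNorm = math.sqrt(auth.values.map(v => v * v).sum)
          val hNorm = math.sqrt(hub.values.map(v => v * v).sum)
          auth = auth.map { case (k, v) => k -> v / aNorm }
          hub  = hub.map { case (k, v) => k -> v / hNorm }
        }

        println("authorities: " + auth.toSeq.sortBy(-_._2).mkString(", "))
        println("hubs:        " + hub.toSeq.sortBy(-_._2).mkString(", "))
      }
    }

Accounts with high authority scores are those receiving transfers from many strong hubs, and vice versa; on the Ethereum transaction network, this kind of analysis surfaces the major entities mentioned in the abstract.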

Abstract: Open Data portals often struggle to provide release features (i.e., stable versioning, up-to-date download links, rich metadata descriptions) for their datasets. By this means, wide adoption of publicly available data collections is hindered, since consuming applications cannot access fresh data sources or might break due to data quality issues. While there exists a variety of tools to efficiently control release processes in software development, the management of dataset releases is not as clear. This paper proposes a deployment pipeline for efficient dataset releases that is based on automated enrichment of DCAT/DataID metadata and is a first step towards efficient deployment pipelining for Open Data publishing.  

Acknowledgment

This work was partially funded by the EU Horizon 2020 projects Boost4.0 (GA no. 780732), BigDataOcean (GA no. 732310), SLIPO (GA no. 731581), QROWD (GA no. 723088), the Federal Ministry of Transport and Digital Infrastructure (BMVI) for the LIMBO project (GA no. 19F2029A and 19F2029G), and the Simple-ML project.

Looking forward to seeing you at SEMANTiCS 2019.

Paper accepted at EPIA 2019

We are very pleased to announce that our group got a paper accepted for presentation at EPIA 2019: the 19th EPIA Conference on Artificial Intelligence, which will be held on September 3-6, 2019, in Vila Real, Portugal.

The EPIA Conference on Artificial Intelligence is a well-established European conference in the field of AI. The 19th edition, EPIA 2019, will take place at UTAD University, Vila Real, on September 3rd-6th, 2019. As in previous editions, this international conference is hosted with the patronage of the Portuguese Association for Artificial Intelligence (APPIA). The purpose of this conference is to promote research in all Artificial Intelligence (AI) areas, covering both theoretical/foundational issues and applications, as well as the scientific exchange among researchers, engineers and practitioners in related disciplines.

Here is the pre-print of the accepted paper with its abstract: 

Abstract: The information on the internet suffers from noise and corrupt knowledge that may arise due to human and mechanical errors. To further exacerbate this problem, an ever-increasing amount of fake news on social media, or the internet in general, has created another challenge to drawing correct information from the web. This huge sea of data makes it difficult for human fact checkers and journalists to assess all the information manually. In recent years, automated fact-checking has emerged as a branch of natural language processing devoted to achieving this feat. In this work, we give an overview of recent work, emphasizing the key challenges faced during the development of such frameworks. We benchmark existing solutions to perform claim classification and introduce a new model dubbed SimpleLSTM, which outperforms the baseline by 11%, 10.2% and 18.7% on the FEVER-Support, FEVER-Reject, and 3-Class datasets respectively. The data, metadata, and code are released as open-source and are available at https://github.com/DeFacto/SimpleLSTM.

Acknowledgment

This work was partially funded by the European Union Marie Curie ITN Cleopatra project (GA no. 812997). 

Looking forward to seeing you at EPIA 2019.

Paper accepted at DEXA 2019

We are very pleased to announce that our group got a paper accepted for presentation at DEXA 2019: the 30th International Conference on Database and Expert Systems Applications, which will be held on August 26-29, 2019, in Linz, Austria.

DEXA provides a forum to present research results and to examine advanced applications in the field. The conference and its associated workshops offer an opportunity for developers, scientists, and users to extensively discuss requirements, problems, and solutions in database, information, and knowledge systems. 

Here is the pre-print of the accepted paper with its abstract: 

Abstract: Context-specific description of entities, expressed in RDF, poses challenges during data-driven tasks, e.g., data integration, and context-aware entity matching represents a building-block for these tasks. However, existing approaches only consider inter-schema mapping of data sources, and are not able to manage several contexts during entity matching. We devise COMET, an entity matching technique that relies on both the knowledge stated in RDF vocabularies and context-based similarity metrics to match contextually equivalent entities. COMET executes a novel 1-1 perfect matching algorithm for matching contextually equivalent entities based on the combined scores of semantic similarity and context similarity. COMET employs the Formal Concept Analysis algorithm in order to compute the context similarity of RDF entities. We empirically evaluate the performance of COMET on a testbed from DBpedia. The experimental results suggest that COMET is able to accurately match equivalent RDF graphs in a context-dependent manner.

Acknowledgment
This work was partially funded by the European project QualiChain (GA 822404).

Looking forward to seeing you at DEXA 2019.