Papers accepted at KESW 2017

We are very pleased to announce that our group got 2 papers accepted for presentation at KESW 2017, which will be held on 08-10 November 2017 in Szczecin, Poland.
The International Conference on Knowledge Engineering and Semantic Web (KESW) is the top international event dedicated to discussing research results and directions in the areas related to Knowledge Representation and Reasoning, Semantic Web, and Linked Data. Its aim is to bring together researchers, practitioners, and educators, in particular from ex-USSR, Eastern and Northern Europe, to present and share ideas regarding Semantic Web, and popularize the area in these regions.

Here is the list of the accepted papers with their abstracts:

“Managing Lifecycle of Big Data Applications” by Ivan Ermilov, Axel-Cyrille Ngonga Ngomo, Aad Versteden, Hajira Jabeen, Gezim Sejdiu, Giorgos Argyriou, Luigi Selmi, Jürgen Jakobitsch and Jens Lehmann.

Abstract: The growing digitization and networking process within our society has a large influence on all aspects of everyday life. Large amounts of data are being produced continuously, and when these are analyzed and interlinked they have the potential to create new knowledge and intelligent solutions for economy and society. To process this data, we developed the Big Data Integrator (BDI) Platform with various Big Data components available out-of-the-box. The integration of the components inside the BDI Platform requires components homogenization, which leads to the standardization of the development process. To support these activities we created the BDI Stack Lifecycle (SL), which consists of development, packaging, composition, enhancement, deployment and monitoring steps. In this paper, we show how we support the BDI SL with the enhancement applications developed in the BDE project. As an evaluation, we demonstrate the applicability of the BDI SL on three pilots in the domains of transport, social sciences and security.


“Ontology-based Representation of Learner Profiles for Accessible OpenCourseWare Systems” by Mirette Elias, Steffen Lohmann, Sören Auer.

Abstract: The development of accessible web applications has gained significant attention over the past couple of years due to the widespread use of the Internet and the equality laws enforced by governments. Particularly in e-learning contexts, web accessibility plays an important role, as e-learning often requires to be inclusive, addressing all types of learners, including those with disabilities. However, there is still no comprehensive formal representation of learners with disabilities and their particular accessibility needs in e-learning contexts. We propose the use of ontologies to represent accessibility needs and preferences of learners in order to structure the knowledge and to access the information for recommendations and adaptations in e-learning contexts. In particular, we reused the concepts of the ACCESSIBLE ontology and extended them with concepts defined by the IMS Global Learning Consortium. We show how OpenCourseWare systems can be adapted based on this ontology to improve accessibility.


Acknowledgments
This work was supported by the European Union’s H2020 research and innovation program BigDataEurope (GA no. 644564) and the European Union’s H2020 project SlideWiki (grant no. 688095).


Looking forward to seeing you at KESW 2017.

Demo and Poster papers accepted at ISWC 2017

We are very pleased to announce that our group got 6 demo/poster papers accepted for presentation at ISWC 2017, which will be held on 21-24 October in Vienna, Austria.
The International Semantic Web Conference (ISWC) is the premier international forum where Semantic Web / Linked Data researchers, practitioners, and industry specialists come together to discuss, advance, and shape the future of semantic technologies on the web, within enterprises, and in the context of public institutions.

Here is the list of the accepted demo/poster papers with their abstracts:

“How to Revert Question Answering on Knowledge Graphs” by Gaurav Maheshwari, Mohnish Dubey, Priyansh Trivedi and Jens Lehmann.

Abstract: A large-scale question answering dataset has the potential to enable the development of robust and more accurate question answering systems. In this direction, we introduce a framework for creating such datasets which decreases the manual intervention and domain expertise traditionally needed. We describe in detail the architecture and the design decisions we made while creating the framework.


“The Tale of Sansa Spark” by Ivan Ermilov, Jens Lehmann, Gezim Sejdiu, Lorenz Bühmann, Patrick Westphal, Claus Stadler, Simon Bin, Nilesh Chakraborty, Henning Petzka, Muhammad Saleem, Axel-Cyrille Ngonga Ngomo and Hajira Jabeen.

Abstract: We demonstrate the open-source Semantic Analytics Stack (SANSA), which can perform scalable analysis of large-scale knowledge graphs to facilitate applications such as link prediction, knowledge base completion and reasoning. The motivation behind this work lies in the lack of scalability of analytics methods which exploit expressive structures underlying semantically structured knowledge bases. The demonstration is based on the BigDataEurope technical platform, which utilizes Docker technology. We present various examples of using SANSA in a form of interactive Spark notebooks, which are executed using Apache Zeppelin. The technical platform and the notebooks are available on SANSA Github and can be easily deployed on any Docker-enabled host, locally or in a Docker Swarm cluster.


“A Vocabulary Independent Generation Framework for DBpedia and beyond” by Ben De Meester, Anastasia Dimou, Dimitris Kontokostas, Ruben Verborgh, Jens Lehmann, Erik Mannens, Sebastian Hellmann.

Abstract: The DBpedia Extraction Framework, the generation framework behind one of the Linked Open Data cloud’s central hubs, has limitations which lead to quality issues with the DBpedia dataset. Therefore, we provide a new take on its Extraction Framework that allows for a sustainable and general-purpose Linked Data generation framework by adopting a semantic-driven approach. The proposed approach decouples, in a declarative manner, the extraction, transformation, and mapping rules execution. This way, among others, interchanging different schema annotations is supported, instead of being coupled to a certain ontology as it is now, because the current DBpedia Extraction Framework only allows generating a dataset with a single semantic representation. In this paper, we shed more light on the added value that this aspect brings. We provide an extracted DBpedia dataset using a different vocabulary, and give users the opportunity to generate a new DBpedia dataset using a custom combination of vocabularies.


“Benchmarking RDF Storage Solutions with IGUANA” by Felix Conrads, Jens Lehmann, Muhammad Saleem and Axel-Cyrille Ngonga Ngomo.

Abstract: Choosing the right RDF storage solution is of central importance when developing any data-driven Semantic Web solution. In this demonstration paper, we present the configuration and use of the IGUANA benchmarking framework. This framework addresses a crucial drawback of state-of-the-art benchmarks: while several benchmarks have been proposed that assess the performance of triple stores, an integrated benchmark-independent execution framework for these benchmarks was not yet available. IGUANA addresses this gap by providing an integrated and highly configurable environment for the execution of SPARQL benchmarks. Our framework complements benchmarks by providing an execution environment which can measure the performance of triple stores during data loading, data updates, as well as under different loads and parallel requests. Moreover, it allows a uniform comparison of results on different benchmarks. During the demonstration, we will execute the DBPSB benchmark using the IGUANA framework and show how our framework measures the performance of popular triple stores under updates and parallel user requests. IGUANA is open-source and can be found at http://iguana-benchmark.eu/.
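The core idea of such an execution environment (issuing a benchmark's queries from several parallel workers while recording per-query latencies) can be sketched in a few lines. The function names and result format below are illustrative stand-ins, not IGUANA's actual API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_workload(execute_query, queries, workers=4):
    """Issue queries from parallel workers and record per-query latency.

    execute_query: a callable taking one query string; in a real setup
    this would send a SPARQL request to the triple store under test.
    queries: the list of benchmark query strings to execute.
    """
    def timed(query):
        # Measure wall-clock latency of a single query execution
        start = time.perf_counter()
        execute_query(query)
        return time.perf_counter() - start

    # Workers pull queries concurrently, simulating parallel user requests
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed, queries))

    return {
        "queries": len(latencies),
        "avg_latency": sum(latencies) / len(latencies),
    }
```

A benchmark-independent harness like this can wrap any query mix (FEASIBLE, DBPSB, ...) simply by swapping the `queries` list and the endpoint callable.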


“BatWAn – A Binary and Multi-way Query Plan Analyzer” by Mikhail Galkin, Maria-Esther Vidal.

Abstract: The majority of existing SPARQL query engines generate query plans composed of binary join operators. Albeit effective, binary joins can drastically impact the performance of query processing whenever source answers need to be passed through multiple operators in a query plan. Multi-way joins have been proposed to overcome this problem; they are able to propagate and generate results in a single step during query execution. We demonstrate the benefits of query plans with multi-way operators with BatWAn, a binary and multi-way query plan analyzer. Attendees will observe the behavior of multi-way joins on queries of different selectivity, as well as the impact on total execution time, time for the first answer, and the continuous yield of results over time.
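To illustrate the difference to a chain of binary joins, here is a minimal sketch (our own toy example, not BatWAn's implementation) of a multi-way hash join that combines solution mappings from several sources sharing one join variable in a single n-ary step:

```python
from collections import defaultdict
from itertools import product

def multiway_join(sources, var):
    """Join n sources of solution mappings on a shared variable in one step.

    sources: list of lists of dicts (solution mappings);
    var: the name of the shared join variable.
    """
    # Bucket each source's mappings by the join variable's value
    buckets = [defaultdict(list) for _ in sources]
    for bucket, source in zip(buckets, sources):
        for mapping in source:
            bucket[mapping[var]].append(mapping)

    # A value can produce results only if it appears in every source
    keys = set(buckets[0])
    for bucket in buckets[1:]:
        keys &= set(bucket)

    results = []
    for key in keys:
        # Combine one mapping from each source per result, in a single
        # n-ary step rather than through a cascade of binary operators
        for combo in product(*(bucket[key] for bucket in buckets)):
            merged = {}
            for mapping in combo:
                merged.update(mapping)
            results.append(merged)
    return results
```

Because non-joinable values are discarded once against all sources, intermediate results never have to be materialized and re-probed operator by operator, which is where the first-answer-time benefit comes from.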


“QAESTRO – Semantic Composition of QA Pipelines” by Kuldeep Singh, Ioanna Lytra, Kunwar Abhinav Aditya, Maria-Esther Vidal.

Abstract: Many question answering systems and related components have been developed in recent years. Since question answering involves several tasks and subtasks, common in many systems, existing components can be combined in various ways to build tailored question answering pipelines. QAESTRO provides the tools to semantically describe question answering components and automatically generate possible pipelines given developer requirements. We demonstrate the functionality of QAESTRO for building question answering pipelines including different tasks and components. Attendees will be able to semantically describe question answering pipelines and integrate them in existing frameworks.


Acknowledgments
This work was supported by the European Union’s H2020 research and innovation action HOBBIT (GA no. 688227), the European Union’s H2020 research and innovation program BigDataEurope (GA no. 644564), a DAAD scholarship, and the WDAqua Marie Skłodowska-Curie Innovative Training Network.


Looking forward to seeing you at ISWC 2017.

Papers accepted at ISWC 2017

We are very pleased to announce that our group got 6 papers accepted for presentation at ISWC 2017, which will be held on 21-24 October in Vienna, Austria.
The International Semantic Web Conference (ISWC) is the premier international forum where Semantic Web / Linked Data researchers, practitioners, and industry specialists come together to discuss, advance, and shape the future of semantic technologies on the web, within enterprises, and in the context of public institutions.

Here is the list of the accepted papers with their abstracts:

“Distributed Semantic Analytics using the SANSA Stack” by Jens Lehmann, Gezim Sejdiu, Lorenz Bühmann, Patrick Westphal, Claus Stadler, Ivan Ermilov, Simon Bin, Muhammad Saleem, Axel-Cyrille Ngonga Ngomo and Hajira Jabeen.

Abstract: A major research challenge is to perform scalable analysis of large-scale knowledge graphs to facilitate applications like link prediction, knowledge base completion and reasoning. Analytics methods which exploit expressive structures usually do not scale well to very large knowledge bases, and most analytics approaches which do scale horizontally (i.e., can be executed in a distributed environment) work on simple feature-vector-based input. This software framework paper describes the ongoing Semantic Analytics Stack (SANSA) project, which supports expressive and scalable semantic analytics by providing functionality for distributed computing on RDF data.


“A Corpus for Complex Question Answering over Knowledge Graphs” by Priyansh Trivedi, Gaurav Maheshwari, Mohnish Dubey and Jens Lehmann.

Abstract: Being able to access knowledge bases in an intuitive way has been an active area of research over the past years. In particular, several question answering (QA) approaches which allow querying RDF datasets in natural language have been developed, as they allow end users to access knowledge without needing to learn the schema of a knowledge base or a formal query language. To foster this research area, several training datasets have been created, e.g. in the QALD (Question Answering over Linked Data) initiative. However, existing datasets are insufficient in terms of size, variety or complexity to apply and evaluate a range of machine learning based QA approaches for learning complex SPARQL queries. With the provision of the Large-Scale Complex Question Answering Dataset (LC-QuAD), we close this gap by providing a dataset with 5000 questions and their corresponding SPARQL queries over the DBpedia dataset. In this article, we describe the dataset creation process and how we ensure a high variety of questions, which should make it possible to assess the robustness and accuracy of the next generation of QA systems for knowledge graphs.


“Iguana: A Generic Framework for Benchmarking the Read-Write Performance of Triple Stores” by Felix Conrads, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, Muhammad Saleem and Mohamed Morsey.

Abstract: The performance of triple stores is crucial for applications which rely on RDF data. Several benchmarks have been proposed that assess the performance of triple stores. However, no integrated benchmark-independent execution framework for these benchmarks has been provided so far. We propose a novel SPARQL benchmark execution framework called IGUANA. Our framework complements benchmarks by providing an execution environment which can measure the performance of triple stores during data loading, data updates as well as under different loads. Moreover, it allows a uniform comparison of results on different benchmarks. We execute the FEASIBLE and DBPSB benchmarks using the IGUANA framework and measure the performance of popular triple stores under updates and parallel user requests. We compare our results with state-of-the-art benchmarking results and show that our benchmark execution framework can unveil new insights pertaining to the performance of triple stores.


“Sustainable Linked Data generation: the case of DBpedia” by Wouter Maroy, Anastasia Dimou, Dimitris Kontokostas, Ben De Meester, Jens Lehmann, Erik Mannens and Sebastian Hellmann.

Abstract: The DBpedia Extraction Framework (EF), the generation framework behind one of the Linked Open Data cloud’s central interlinking hubs, has limitations regarding the quality, coverage and sustainability of the generated dataset. Hence, DBpedia can be further improved on both the schema and the data level. Errors and inconsistencies can be addressed by amending (i) the DBpedia EF; (ii) the DBpedia mapping rules; or (iii) Wikipedia itself. However, even though the DBpedia EF is continuously evolving and several changes were applied to both the DBpedia EF and the mapping rules, there have been no significant improvements to the DBpedia dataset since the identification of its limitations. To address these shortcomings, we propose adopting a different semantic-driven approach that decouples, in a declarative manner, the extraction, transformation and mapping rules execution. In this paper, we provide details regarding the new DBpedia EF, its architecture, technical implementation and extraction results. This way, we achieve an enhanced data generation process for DBpedia, which can be broadly adopted, and which improves its quality, coverage and sustainability.


“Realizing an RDF-based Information Model for a Manufacturing Company – A Case Study” by Niklas Petersen, Lavdim Halilaj, Irlán Grangel-González, Steffen Lohmann, Christoph Lange and Sören Auer.

Abstract: The digitization of the industry requires information models describing assets and information sources of companies to enable the semantic integration and interoperable exchange of data. We report on a case study in which we realized such an information model for a global manufacturing company using semantic technologies. The information model is centered around machine data and describes all relevant assets, key terms and relations in a structured way, making use of existing as well as newly developed RDF vocabularies. In addition, it comprises numerous RML mappings that link different data sources required for integrated data access and querying via SPARQL. The technical infrastructure and methodology used to develop and maintain the information model is based on a Git repository and utilizes the development environment VoCol as well as the Ontop framework for Ontology Based Data Access. Two use cases demonstrate the benefits and opportunities provided by the information model. We evaluated the approach with stakeholders and report on lessons learned from the case study.


“Diefficiency Metrics: Measuring the Continuous Efficiency of Query Processing Approaches” by Maribel Acosta, Maria-Esther Vidal, York Sure-Vetter.

Abstract: During empirical evaluations of query processing techniques, metrics like execution time, time for the first answer, and throughput are usually reported. Albeit informative, these metrics are unable to quantify and evaluate the efficiency of a query engine over a certain time period (or diefficiency), thus hampering the distinction of cutting-edge engines able to exhibit high performance gradually. We tackle this issue and devise two experimental metrics named dief@t and dief@k, which allow for measuring the diefficiency during an elapsed time period or while k answers are produced, respectively. The dief@t and dief@k measurement methods rely on the computation of the area under the curve of answer traces, thus capturing the answer concentration over a time interval. We report experimental results of evaluating the behavior of a generic SPARQL query engine using both metrics. The observed results suggest that dief@t and dief@k are able to measure the performance of SPARQL query engines based on both the amount of answers produced by an engine and the time required to generate these answers.
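To convey the intuition behind an AUC-based continuous-efficiency metric, here is a minimal sketch in the spirit of dief@t. The trace format and the linear interpolation between trace points are our assumptions for illustration, not the authors' exact definition:

```python
def dief_at_t(trace, t):
    """Area under the answer-trace curve up to time t.

    trace: list of (time, cumulative_answers) pairs, sorted by time.
    A larger area means more answers delivered earlier, i.e. higher
    continuous efficiency over the interval [0, t].
    """
    # Keep only the points observed before the time limit t
    points = [(tm, n) for tm, n in trace if tm <= t]
    if not points:
        return 0.0
    # Extend the trace to t: the answer count stays at its last value
    if points[-1][0] < t:
        points.append((t, points[-1][1]))
    # Trapezoidal rule over consecutive trace points
    area = 0.0
    for (t0, n0), (t1, n1) in zip(points, points[1:]):
        area += (t1 - t0) * (n0 + n1) / 2.0
    return area
```

Two engines with the same total execution time can thus still be distinguished: the one that concentrates its answers earlier in the interval accumulates a larger area.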


Acknowledgments
This work was supported by the European Union’s H2020 research and innovation action HOBBIT (GA no. 688227), the European Union’s H2020 research and innovation program BigDataEurope (GA no. 644564), the German Ministry BMWi under the SAKE project (grant no. 01MD15006E), the WDAqua Marie Skłodowska-Curie Innovative Training Network, and Industrial Data Space.


Looking forward to seeing you at ISWC 2017.

Paper accepted at WI 2017

We are very pleased to announce that our group got a paper accepted for presentation at the International Conference on Web Intelligence (WI), which will be held in Leipzig on 23-26 August. WI is an important international forum for research advances in theories and methods usually associated with Collective Intelligence, Data Science, Human-Centric Computing, Knowledge Management, and Network Science.

“LOG4MEX: A Library to Export Machine Learning Experiments” by Diego Esteves, Diego Moussallem, Tommaso Soru, Ciro Baron Neto, Jens Lehmann, Axel-Cyrille Ngonga Ngomo and Julio Cesar Duarte.

Abstract: The choice of the best computational solution for a particular task is increasingly reliant on experimentation. Even though experiments are often described through text, tables, and figures, their descriptions are often incomplete or confusing. Thus, researchers often have to perform lengthy web searches to reproduce and understand the results. In order to minimize this gap, vocabularies and ontologies have been proposed for representing data mining and machine learning (ML) experiments. However, we still lack proper tools to export this metadata. To this end, we present an open-source library dubbed LOG4MEX, which aims at supporting the scientific community in filling this gap.

Acknowledgments
This work is supported by the European Union’s H2020 research and innovation action HOBBIT (GA no. 688227) and the European Union’s H2020 research and innovation program BigDataEurope (GA no. 644564).

Looking forward to seeing you at WI 2017. More information on the program can be found here.

SDA @WWW2017

The WWW conference is an important international forum for the evolution of the web, technical standards, the impact of the web on society, and its future. Our members actively participated in the 26th International World Wide Web Conference (WWW 2017), which took place on the sunny shores of Perth, Western Australia, on 3-7 April 2017.

We are very pleased to report that:

A paper from our group was accepted for presentation as full research paper at WWW 2017:

The 10th Workshop on Linked Data on the Web (LDOW2017) was organized by Sören Auer, Sir Tim Berners-Lee, Christian Bizer, Sarven Capadisli, Tom Heath, Krzysztof Janowicz, Jens Lehmann and Hideaki Takeda.

The Web is developing from a medium for publishing textual documents into a medium for sharing structured data. This trend is fuelled by the adoption of Linked Data principles by a growing number of data providers and the increasing trend to include semantic markup of content of HTML pages. LDOW2017 aims to stimulate discussion and further research into the challenges of publishing, consuming, and integrating structured data from the Web as well as mining knowledge from the global Web of Data.

The audience showed high interest in the workshop.

The following discussion covered further challenges, such as Pioneering the Linked Open Research Cloud and The Future of Linked Data.

WWW2017 was a great venue to meet the community, create new connections, talk about current research challenges, share ideas, and establish new collaborations. We look forward to the next WWW conferences.

 

Until then, meet us at SDA!

Invited talk by Anisa Rula

On Tuesday the 28th, Dr. Anisa Rula, a postdoctoral researcher at the University of Milano-Bicocca, visited SDA and gave a talk entitled “Enriching Knowledge Bases through Quality Assessment”.

Anisa presented a talk on quality dimensions and their evolution, and on how the anatomy of data representation and quality assessment in Knowledge Bases (KBs) can lead to the improvement of existing KBs, i.e., to an enrichment of KBs. The trade-off between enrichment and quality of KBs was raised and discussed in detail. Several use cases were mentioned as well, with the main focus on Link Discovery: in particular, enriching KBs helps to achieve better interlinking by eliminating noise and reducing the search space.
During the talk, she also introduced ABSTAT, an ontology-driven linked data summarization framework that generates summaries of Linked Data datasets comprising a set of abstract knowledge patterns, statistics, and a subtype graph.

Prof. Dr. Jens Lehmann invited the speaker to the bi-weekly “SDA colloquium presentations”, so there was good representation from students and researchers of our group.
The slides of the talk were inspired by “Data Quality Issues in Linked Open Data”, a chapter of the book “Data and Information Quality” by Carlo Batini and Monica Scannapieco.

With this visit, we expect to strengthen our research collaboration networks with the Department of Computer Science, Systems and Communication at the University of Milano-Bicocca, mainly on combining quality assessment metrics and distributed frameworks applied to SANSA.

Paper accepted at WWW 2017

We are very pleased to announce that our group got a paper accepted for presentation at the 26th International World Wide Web Conference (WWW 2017), which will be held on the sunny shores of Perth, Western Australia, on 3-7 April 2017. WWW is an important international forum for the evolution of the web, technical standards, the impact of the web on society, and its future.

“Neural Network-based Question Answering over Knowledge Graphs on Word and Character Level” by Denis Lukovnikov, Asja Fischer, Sören Auer, and Jens Lehmann.

Abstract: Question Answering (QA) systems over Knowledge Graphs (KG) automatically answer natural language questions using facts contained in a knowledge graph. Simple questions, which can be answered by the extraction of a single fact, constitute a large part of questions asked on the web but still pose challenges to QA systems, especially when asked against a large knowledge resource. Existing QA systems usually rely on various components, each specialised in solving a different sub-task of the problem (such as segmentation, entity recognition, disambiguation, and relation classification). In this work, we follow a quite different approach: we train a neural network for answering simple questions in an end-to-end manner, leaving all decisions to the model. It learns to rank subject-predicate pairs to enable the retrieval of relevant facts given a question. The network contains a nested word/character-level question encoder which allows it to handle out-of-vocabulary and rare words while still being able to exploit word-level semantics. Our approach achieves results competitive with state-of-the-art end-to-end approaches that rely on an attention mechanism.

Acknowledgments
This work is supported in part by the European Union under the Horizon 2020 Framework Program for the project WDAqua (GA 642795).

Looking forward to seeing you at WWW.

Paper accepted at ICWE 2017

We are very pleased to announce that our group got a paper accepted for presentation at the 17th International Conference on Web Engineering (ICWE 2017), which will be held on 5-8 June 2017 in Rome, Italy. ICWE is an important international forum for the Web Engineering community.

“The BigDataEurope Platform – Supporting the Variety Dimension of Big Data” by Sören Auer, Simon Scerri, Aad Versteden, Erika Pauwels, Angelos Charalambidis, Stasinos Konstantopoulos, Jens Lehmann, Hajira Jabeen, Ivan Ermilov, Gezim Sejdiu, Andreas Ikonomopoulos, Spyros Andronopoulos, Mandy Vlachogiannis, Charalambos Pappas, Athanasios Davettas, Iraklis A. Klampanos, Efstathios Grigoropoulos, Vangelis Karkaletsis, Victor de Boer, Ronald Siebes, Mohamed Nadjib Mami, Sergio Albani, Michele Lazzarini, Paulo Nunes, Emanuele Angiuli, Nikiforos Pittaras, George Giannakopoulos, Giorgos Argyriou, George Stamoulis, George Papadakis, Manolis Koubarakis, Pythagoras Karampiperis, Axel-Cyrille Ngonga Ngomo, Maria-Esther Vidal.

Abstract: The management and analysis of large-scale datasets – described with the term Big Data – involves the three classic dimensions volume, velocity and variety. While the former two are well supported by a plethora of software components, the variety dimension is still rather neglected. We present the BDE platform – an easy-to-deploy, easy-to-use and adaptable (cluster-based and standalone) platform for the execution of big data components and tools like Hadoop, Spark, Flink, Flume and Cassandra. The BDE platform was designed based upon the requirements gathered from seven of the societal challenges put forward by the European Commission in the Horizon 2020 programme and targeted by the BigDataEurope pilots. As a result, the BDE platform allows performing a variety of Big Data flow tasks like message passing, storage, analysis or publishing. To facilitate the processing of heterogeneous data, a particular innovation of the platform is the Semantic Layer, which allows processing RDF data directly, and mapping and transforming arbitrary data into RDF. The advantages of the BDE platform are demonstrated through seven pilots, each focusing on a major societal challenge.

Acknowledgments
This work is supported by the European Union’s Horizon 2020 research and innovation program under grant agreement no. 644564 (BigDataEurope).

“AskNow: A Framework for Natural Language Query Formalization in SPARQL” elected as Paper of the month

We are very pleased to announce that our paper “AskNow: A Framework for Natural Language Query Formalization in SPARQL” by Mohnish Dubey, Sourish Dasgupta, Ankit Sharma, Konrad Höffner and Jens Lehmann has been elected as the Paper of the Month at Fraunhofer IAIS. This award is given to publications that have a high innovation impact in their research field, following a committee evaluation.

This research paper was accepted at the ESWC 2016 main conference, and its core work on Natural Language Query Formalization in SPARQL is based on the AskNow project.

Abstract: Natural Language Query Formalization involves semantically parsing queries in natural language and translating them into their corresponding formal representations. It is a key component for developing question answering (QA) systems on RDF data. The chosen formal representation language in this case is often SPARQL. In this paper, we propose a framework, called AskNow, where users can pose queries in English to a target RDF knowledge base (e.g. DBpedia), which are first normalized into an intermediary canonical syntactic form, called Normalized Query Structure (NQS), and then translated into SPARQL queries. NQS facilitates the identification of the desire (i.e., the expected output information) and the user-provided input information, and establishes their mutual semantic relationship. At the same time, it is sufficiently adaptive to query paraphrasing. We have empirically evaluated the framework with respect to the syntactic robustness of NQS and the semantic accuracy of the SPARQL translator on standard benchmark datasets.

The paper and authors were honored for this publication in a special event at Fraunhofer Schloss Birlinghoven, Sankt Augustin, Germany.

 

Invited talk by Paul Groth

On Tuesday, 7th February, Paul Groth from Elsevier Labs visited SDA and gave a talk entitled “Applying Knowledge Graphs”.

Paul presented a talk on building large knowledge graphs at Elsevier. He motivated the need for Knowledge Graph observatories in order to provide empirical evidence on how to deal with change in Knowledge Bases.

 

The talk was invited by Prof. Dr. Jens Lehmann as part of the “Knowledge Graph Analysis” lectures, so there was good representation from students and researchers of the SDA and EIS groups.

The slides of the talk of our invited speaker Paul Groth can be found here.

With this visit, we expect to strengthen our research collaboration networks with Elsevier Labs, mainly on combining semantics and distributed machine learning applied to SANSA.