Paper accepted at ICEGOV 2018


We are very pleased to announce that our group got a paper accepted for presentation at the 11th International Conference on Theory and Practice of Electronic Governance (ICEGOV 2018), which will be held on April 4–6, 2018, in Galway, Ireland.

The conference focuses on the use of technology to transform the working of government and its relationships with citizens, businesses, and other non-state actors in order to improve public governance and its contribution to public policy and development (EGOV). It also promotes the interaction and cooperation between universities, research centres, governments, industries, and non-governmental organizations needed to develop the EGOV community. It is supported by a rich program of keynote lectures, plenary sessions, paper presentations within the thematic sessions, invited sessions, and networking sessions.

Here is the accepted paper with its abstract:

“Classifying Data Heterogeneity within Budget and Spending Open Data” by Fathoni A. Musyaffa, Fabrizio Orlandi, Hajira Jabeen, and Maria-Esther Vidal.

Abstract: Heterogeneity problems within open budget and spending datasets hinder effective analysis and consumption of these datasets. To understand the detailed types of heterogeneity present in open budget and spending datasets, we analyzed more than 75 datasets from different levels of public administration. We classified and enumerated these heterogeneities and examined whether they can be represented using state-of-the-art data models designed for representing open budget and spending data. Finally, we provide lessons learned for public administrators as well as the technical and scientific communities.

Acknowledgments
This work was supported by DAAD and partially by the EU H2020 project no. 645833 (OpenBudgets.eu).


Looking forward to seeing you at ICEGOV 2018.

Papers accepted at ICSC 2018

We are very pleased to announce that we got 3 papers accepted at ICSC 2018 for presentation at the main conference, which will be held on January 31 – February 2, 2018, in California, United States.

The 12th IEEE International Conference on Semantic Computing (ICSC 2018) is devoted to Semantic Computing (SC), which addresses the derivation, description, integration, and use of semantics (“meaning”, “context”, “intention”) for all types of resources, including data, documents, tools, devices, processes, and people. The scope of SC includes, but is not limited to, analytics, semantics description languages and integration (of data and services), interfaces, and applications including biomed, IoT, cloud computing, software-defined networks, wearable computing, context awareness, mobile computing, search engines, question answering, big data, multimedia, and services.

Here is the list of accepted papers with their abstracts:

“SAANSET: Semi-Automated Acquisition of Scholarly Metadata using OpenResearch.org Platform” by Rebaz Omar, Sahar Vahdati, Christoph Lange, Maria-Esther Vidal, and Andreas Behrend

Abstract: Researchers spend a lot of time finding information about people, events, journals, and research areas related to topics of their interest. Digital libraries and digital scholarly repositories usually offer services to assist researchers in this task. However, every research community has its own way of distributing scholarly metadata.
Mailing lists provide an instantaneous channel and are often used for discussing topics of interest to a community of researchers, or to announce important information — albeit in an unstructured way. To bring structure specifically into the announcements of events and thus to enable researchers to, e.g., filter them by relevance, we present a semi-automatic crowd-sourcing workflow that captures metadata of events from call-for-papers emails into the OpenResearch.org semantic wiki. Evaluations confirm that our approach reduces the number of actions researchers would otherwise have to perform manually to keep track of calls for papers received via mailing lists.
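
To give a concrete feel for what the automatic part of such a workflow can look like, here is a minimal, hypothetical sketch (not the actual SAANSET implementation): it pulls a few metadata fields out of a call-for-papers e-mail with regular expressions and renders them as a semantic-wiki template of the kind OpenResearch.org uses. All field names, patterns, and the template format are illustrative assumptions.

# Hypothetical sketch: extract event metadata from a CfP e-mail body.
# Field names and regexes are illustrative, not SAANSET's actual rules.
import re

CFP = """Call for Papers: Example Conference on Semantic Systems (EXSEM 2018)
Abstract deadline: 2018-01-15
Submission deadline: 2018-01-22
Location: Bonn, Germany"""

PATTERNS = {
    "Acronym": r"\(([A-Z]+ ?\d{4})\)",
    "Abstract deadline": r"Abstract deadline:\s*([0-9-]+)",
    "Submission deadline": r"Submission deadline:\s*([0-9-]+)",
    "Has location city": r"Location:\s*(.+)",
}

def extract_metadata(text):
    """Return whatever fields the patterns can find; missing ones are skipped."""
    found = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:
            found[field] = match.group(1).strip()
    return found

def to_wiki_template(fields):
    """Render the extracted fields as a wiki template call (format is illustrative)."""
    return "\n".join(["{{Event"] + [f"| {k} = {v}" for k, v in fields.items()] + ["}}"])

print(to_wiki_template(extract_metadata(CFP)))

In the semi-automatic workflow described in the paper, a human curator reviews such pre-filled entries before they are stored in the wiki.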


“Semantic Enrichment of IoT Stream Data On-Demand” by Farah Karim, Ola Al Naameh, Ioanna Lytra, Christian Mader, Maria-Esther Vidal, and Sören Auer

Abstract: Connecting the physical world to the Internet of Things (IoT) allows for the development of a wide variety of applications. Things can be searched, managed, analyzed, and even included in collaborative games.
Industries, health care, and cities are exploiting IoT data-driven frameworks to make these organizations more efficient, thus improving the lives of citizens. To make IoT a reality, data produced by sensors, smartphones, watches, and other wearables need to be integrated; moreover, the meaning of IoT data should be explicitly represented. However, the Big Data nature of IoT data imposes challenges that need to be addressed in order to provide scalable and efficient IoT data-driven infrastructures. We tackle these issues and focus on the problems of describing the meaning of IoT streaming data using ontologies and integrating this data in a knowledge graph.
We devise DESERT, a SPARQL query engine able to on-Demand factorizE and Semantically Enrich stReam daTa in a knowledge graph.
Resulting knowledge graphs model the semantics or meaning of merged data in terms of entities that satisfy the SPARQL queries and relationships among those entities; thus, only data required for query answering is included in the knowledge graph.
We empirically evaluate the results of DESERT on SRBench, a benchmark of Streaming RDF data.
The experimental results suggest that DESERT allows for speeding up query execution while the size of the knowledge graphs remains relatively low.
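
As a rough, simplified illustration of what it means to semantically enrich stream data into a knowledge graph, the sketch below (not the DESERT engine itself) lifts two sensor observations to RDF with the W3C SOSA vocabulary and answers a SPARQL query over the resulting graph using rdflib; the sensor identifiers and values are made up.

# Simplified illustration (not DESERT): lift sensor observations to RDF
# using the SOSA vocabulary and query them with SPARQL via rdflib.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

SOSA = Namespace("http://www.w3.org/ns/sosa/")
EX = Namespace("http://example.org/iot/")  # hypothetical namespace for this demo

def add_observation(graph, obs_id, sensor_id, value):
    """Add one observation from the stream to the knowledge graph."""
    obs = EX[obs_id]
    graph.add((obs, RDF.type, SOSA.Observation))
    graph.add((obs, SOSA.madeBySensor, EX[sensor_id]))
    graph.add((obs, SOSA.hasSimpleResult, Literal(value, datatype=XSD.double)))

g = Graph()
add_observation(g, "obs1", "sensor42", 21.5)
add_observation(g, "obs2", "sensor42", 23.0)

# Only the data needed to answer the query has to end up in the graph.
QUERY = """
PREFIX sosa: <http://www.w3.org/ns/sosa/>
SELECT ?obs ?value WHERE {
  ?obs a sosa:Observation ;
       sosa:hasSimpleResult ?value .
  FILTER(?value > 22.0)
}"""
for row in g.query(QUERY):
    print(row.obs, row.value)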


 

“Shipping Knowledge Graph Management Capabilities to Data Providers and Consumers” by Omar Al-Safi, Christian Mader, Ioanna Lytra, Mikhail Galkin, Kemele Endris, Maria-Esther Vidal, and Sören Auer

Abstract: The amount of Linked Data, both open (made available on the Web) and private (exchanged across companies and organizations), has been increasing in recent years. This data can be distributed in the form of Knowledge Graphs (KGs), but maintaining these KGs is mainly the responsibility of data owners or providers. Moreover, building applications on top of KGs in order to provide, for instance, analytics, data access control, and privacy is left to the end users or data consumers. However, substantial resources in terms of development costs and equipment are required from both data providers and consumers, thus impeding the development of real-world applications over KGs. We propose to encapsulate KGs as well as data processing functionalities in a client-side system called Knowledge Graph Container, intended to be used by data providers or data consumers. Knowledge Graph Containers can be tailored to the target environments, ranging from Big Data to lightweight platforms. We empirically evaluate the performance and scalability of Knowledge Graph Containers with respect to state-of-the-art Linked Data management approaches. The observed results suggest that Knowledge Graph Containers increase the availability of Linked Data, as well as the efficiency and scalability of various Knowledge Graph management tasks.
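
As a loose analogy for the container idea (this is not the architecture evaluated in the paper, just a toy sketch under our own assumptions), one can picture the knowledge graph and a few processing capabilities shipped together to the client:

# Toy analogy (not the paper's system): a knowledge graph plus basic processing
# capabilities bundled into one client-side object.
from rdflib import Graph

class KnowledgeGraphContainer:
    """Bundles an RDF graph with querying and a crude access-control filter."""

    def __init__(self, rdf_file, fmt="turtle", allowed_predicates=None):
        self.graph = Graph()
        self.graph.parse(rdf_file, format=fmt)
        # Hypothetical access-control knob: limit which predicates are exposed.
        self.allowed_predicates = allowed_predicates

    def query(self, sparql):
        return list(self.graph.query(sparql))

    def triples(self):
        for s, p, o in self.graph:
            if self.allowed_predicates is None or p in self.allowed_predicates:
                yield s, p, o

# A data consumer would run such a container locally, e.g.:
#   kgc = KnowledgeGraphContainer("dataset.ttl")
#   print(kgc.query("SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }"))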

 


Acknowledgments
These works were supported by the European Union’s H2020 research and innovation programme BigDataEurope (GA no. 644564), the Marie Skłodowska-Curie Innovative Training Network WDAqua (GA no. 642795), InDaSpace, a research grant from the German Federal Ministry for Economic Affairs and Energy, a DAAD scholarship, the European Commission with grants for the H2020 projects OpenAIRE2020 (GA no. 643410) and OpenBudgets.eu (GA no. 645833), and the European Union’s Horizon 2020 IoT European Platform Initiative (IoT-EPI) project bIoTope (GA no. 688203).


Looking forward to seeing you at ICSC 2018.

Prof.dr.ir. Wil van der Aalst visits SDA


Prof.dr.ir. Wil van der Aalst from the Technische Universiteit Eindhoven (TU/e) visited the SDA group on the 29th of November 2017.

Prof.dr.ir. Wil van der Aalst is a distinguished university professor at the Technische Universiteit Eindhoven (TU/e), where he is also the scientific director of the Data Science Center Eindhoven (DSC/e). Since 2003 he has held a part-time position at Queensland University of Technology (QUT). Currently, he is also a visiting researcher at Fondazione Bruno Kessler (FBK) in Trento and a member of the Board of Governors of Tilburg University. His personal research interests include process mining, Petri nets, business process management, workflow management, process modeling, and process analysis. Wil van der Aalst has published over 200 journal papers, 20 books (as author or editor), 450 refereed conference/workshop publications, and 65 book chapters. Many of his papers are highly cited (he is one of the most cited computer scientists in the world; according to Google Scholar, he has an h-index of 135 and has been cited more than 80,000 times) and his ideas have influenced researchers, software developers, and standardization committees working on process support. In addition to serving on the editorial boards of over 10 scientific journals, he plays an advisory role for several companies, including Fluxicon, Celonis, and ProcessGold. Van der Aalst received honorary degrees from the Moscow Higher School of Economics (Prof. h.c.), Tsinghua University, and Hasselt University (Dr. h.c.). He is also an elected member of the Royal Netherlands Academy of Arts and Sciences, the Royal Holland Society of Sciences and Humanities, and the Academy of Europe. Recently, he was awarded a Humboldt Professorship, Germany’s most valuable research award (five million euros), and will move to RWTH Aachen University at the beginning of 2018.

Prof. Jens Lehmann invited the speaker to the bi-weekly “SDA colloquium presentations”, which was attended by 40–50 researchers and students from SDA. The goal of his visit was to exchange experience and ideas on semantic web techniques specialized for process mining, including process modeling, classification algorithms, and more. Apart from presenting various use cases where process mining has helped scientists gain useful insights from raw data, Prof.dr.ir. Wil van der Aalst shared with our group future research problems and challenges related to this research area and gave a talk on “Learning Hybrid Process Models from Events: Process Mining for the Real World” (Slides).

Abstract: Process mining provides new ways to utilize the abundance of event data in our society. This emerging scientific discipline can be viewed as a bridge between data science and process science: It is both data-driven and process-centric. Process mining provides a novel set of techniques to discover the real processes. These discovery techniques return process models that are either formal (precisely describing the possible behaviors) or informal (merely a “picture” not allowing for any form of formal reasoning). Formal models are able to classify traces (i.e., sequences of events) as fitting or non-fitting. Most process mining approaches described in the literature produce such models. This is in stark contrast with the over 25 available commercial process mining tools that only discover informal process models that remain deliberately vague on the precise set of possible traces. There are two main reasons why vendors resort to such models: scalability and simplicity. 

In this talk, Prof. Van der Aalst will propose to combine the best of both worlds: discovering hybrid process models that have formal and informal elements. The discovered models allow for formal reasoning, but also reveal information that cannot be captured in mainstream formal models. A novel discovery algorithm returning hybrid Petri nets has been implemented in ProM and will serve as an example for the next wave of commercial process mining tools. Prof. Van der Aalst will also elaborate on his collaboration with industry. His research group at TU/e has applied process mining in over 150 organizations, developed the open-source tool ProM, and influenced the 20+ commercial process mining tools available today.
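
For readers who want to try process discovery themselves, open-source tooling exists: ProM (mentioned in the talk) implements the hybrid discovery algorithm, and the pm4py Python library offers classical discovery algorithms through a simple API. The sketch below, assuming a recent pm4py version and an XES event log on disk (the file name is a placeholder), discovers an ordinary Petri net with the inductive miner and checks which traces fit it, illustrating the formal-model side of the talk.

# Sketch: classical process discovery with pm4py (pip install pm4py).
# The hybrid Petri net discovery discussed in the talk lives in ProM, not here.
import pm4py

log = pm4py.read_xes("event_log.xes")  # placeholder file name
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(net, initial_marking, final_marking)

# A formal model lets us classify traces as fitting or non-fitting:
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking)
print(sum(d["trace_is_fit"] for d in diagnostics), "of", len(diagnostics), "traces fit")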

During the meeting, SDA's core research topics and main research projects were presented, with the aim of identifying intersections for future collaborations with Prof. Van der Aalst and his research group.

As an outcome of this visit, we expect to strengthen our research collaboration networks with TU/e and in the future with RWTH Aachen University, mainly on combining semantic knowledge and distributed computing and analytics.

Paper accepted at IEEE BigData 2017

We are very pleased to announce that our group got a paper accepted for presentation at IEEE BigData 2017, which will be held on December 11–14, 2017, in Boston, MA, United States.

 
In recent years, “Big Data” has become a new ubiquitous term. Big Data is transforming science, engineering, medicine, healthcare, finance, business, and ultimately our society itself. The IEEE Big Data conference series, started in 2013, has established itself as a top-tier research conference in Big Data.
The 2017 IEEE International Conference on Big Data (IEEE Big Data 2017) will provide a leading forum for disseminating the latest results in Big Data Research, Development, and Applications.

“Implementing Scalable Structured Machine Learning for Big Data in the SAKE Project” by Simon Bin, Patrick Westphal, Jens Lehmann, and Axel-Cyrille Ngonga Ngomo.

Abstract: Exploration and analysis of large amounts of machine-generated data requires innovative approaches. We propose a combination of Semantic Web and Machine Learning techniques to facilitate the analysis. First, data is collected and converted to RDF according to a schema in the Web Ontology Language (OWL). Several components can then continue working with the data, to interlink, label, augment, or classify it. The size of the data poses new challenges to existing solutions, which we address in this contribution by transitioning from in-memory processing to a database-backed approach.
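
To illustrate the "convert to RDF according to an OWL schema" step mentioned in the abstract, here is a minimal, hypothetical sketch using rdflib; the vocabulary and the example records are invented for illustration and are not taken from the SAKE project.

# Illustrative sketch (not the SAKE pipeline): lift machine-generated records
# to RDF according to a small, invented vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/sake-demo#")  # hypothetical vocabulary
records = [
    {"machine": "press-01", "event": "overheat", "time": "2017-06-01T10:15:00"},
    {"machine": "press-02", "event": "restart",  "time": "2017-06-01T10:17:30"},
]

g = Graph()
for i, rec in enumerate(records):
    event = EX[f"event{i}"]
    g.add((event, RDF.type, EX.MachineEvent))
    g.add((event, EX.observedOn, EX[rec["machine"]]))
    g.add((event, EX.eventType, Literal(rec["event"])))
    g.add((event, EX.atTime, Literal(rec["time"], datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))

At the data volumes targeted by the paper, such triples would of course be written to a scalable store rather than kept in memory, which is the in-memory-to-database transition the abstract refers to.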


Acknowledgments
This work was supported in part by a research grant from the German Federal Ministry for Economic Affairs and Energy under the SAKE project (Grant agreement No. 01MD15006E) and by a research grant from the European Union’s Horizon 2020 research and innovation programme under the SLIPO project (Grant agreement No. 731581).


Looking forward to seeing you at IEEE BigData 2017.

“A Corpus for Complex Question Answering over Knowledge Graphs” elected as Paper of the Month at Fraunhofer IAIS

We are very pleased to announce that our paper “A Corpus for Complex Question Answering over Knowledge Graphs” by Priyansh Trivedi, Gaurav Maheshwari, Mohnish Dubey, and Jens Lehmann has been elected as the Paper of the Month at Fraunhofer IAIS. This award is given, after evaluation by a committee, to publications with a high innovation impact in their research field.

This research paper was accepted at the ISWC 2017 main conference. It presents a large gold-standard question answering dataset over DBpedia, together with the framework used to create it. With 5,000 questions and their corresponding SPARQL queries, it is the largest QA dataset of its kind. The paper was also nominated for the Best Student Paper Award in the Resources track.

Abstract: Being able to access knowledge bases in an intuitive way has been an active area of research over the past years. In particular, several question answering (QA) approaches which allow to query RDF datasets in natural language have been developed, as they allow end users to access knowledge without needing to learn the schema of a knowledge base or a formal query language. To foster this research area, several training datasets have been created, e.g., in the QALD (Question Answering over Linked Data) initiative. However, existing datasets are insufficient in terms of size, variety or complexity to apply and evaluate a range of machine learning based QA approaches for learning complex SPARQL queries. With the provision of the Large-Scale Complex Question Answering Dataset (LC-QuAD), we close this gap by providing a dataset with 5000 questions and their corresponding SPARQL queries over the DBpedia dataset. In this article, we describe the dataset creation process and how we ensure a high variety of questions, which should enable the assessment of the robustness and accuracy of the next generation of QA systems for knowledge graphs.
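
To show the kind of question/SPARQL pair the dataset contains, here is a made-up example in the same spirit (it is not taken from LC-QuAD itself), run against the public DBpedia endpoint with SPARQLWrapper: the natural-language question "Which films were directed by someone born in Berlin?" paired with a corresponding SPARQL query.

# Illustrative question/SPARQL pair in the spirit of LC-QuAD (not from the dataset).
# Question: "Which films were directed by someone born in Berlin?"
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT DISTINCT ?film WHERE {
  ?film a dbo:Film ;
        dbo:director ?person .
  ?person dbo:birthPlace dbr:Berlin .
} LIMIT 10
"""

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["film"]["value"])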

The paper and authors were honored for this publication in a special event at Fraunhofer Schloss Birlinghoven, Sankt Augustin, Germany.

 

SDA at ISWC 2017 – A Ten-Year Best Paper and a Demo Award


The International Semantic Web Conference (ISWC) is the premier international forum where Semantic Web / Linked Data researchers, practitioners, and industry specialists come together to discuss, advance, and shape the future of semantic technologies on the web, within enterprises, and in the context of public institutions.

We are very pleased to announce that we got 6 papers accepted at ISWC 2017 for presentation at the main conference. Additionally, we also had 6 poster/demo papers accepted.

Furthermore, we are happy to have won the SWSA Ten-Year Best Paper Award, which recognizes the highest-impact paper from the 6th International Semantic Web Conference, held in Busan, Korea, in 2007:
Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, Zachary G. Ives: “DBpedia: A Nucleus for a Web of Open Data”
Slides: https://www.slideshare.net/soeren1611/dbpedia-10-year-iswc-swsa-best-paper-award-presentation-81098293

 

In addition to this award, we are very happy to announce that we won the Best Demo Award for the SANSA Notebooks:
“The Tale of Sansa Spark” by Ivan Ermilov, Jens Lehmann, Gezim Sejdiu, Lorenz Bühmann, Patrick Westphal, Claus Stadler, Simon Bin, Nilesh Chakraborty, Henning Petzka, Muhammad Saleem, Axel-Cyrille Ngonga Ngomo, and Hajira Jabeen.

Here are some further pointers in case you want to know more about SANSA:

The audience showed great enthusiasm during the demonstration, appreciating the work and asking questions about the future of SANSA, technical details, and possible synergies with industrial partners and projects. Gezim Sejdiu and Jens Lehmann, who presented the demo, talked for more than three hours non-stop (without even time to eat 😉).

In addition, our colleagues gave the following presentations:

Workshops

ISWC 2017 was a great venue to meet the community, create new connections, talk about current research challenges, share ideas, and establish new collaborations. We look forward to the next ISWC conference.

Until then, meet us at SDA!

Dr. Maria Maleshkova visits SDA

Dr. Maria Maleshkova from the Karlsruhe Institute of Technology (KIT) visited the SDA group on the 11th of October 2017.

Maria Maleshkova is a postdoctoral researcher at the Karlsruhe Service Research Institute (KSRI) and the Institute of Applied Informatics and Formal Description Methods (AIFB) at the Karlsruhe Institute of Technology. Her research work covers the Web of Things (WoT) and semantics-based data integration topics as well as work in the area of the semantic description of Web APIs, RESTful services and their joint use with Linked Data. Prior to that, she was a Research Associate and a PhD student at the Knowledge Media Institute (KMi) at the Open University, where she worked on projects in the domain of SOA and Web Services.

Prof. Jens Lehmann invited the speaker to the bi-weekly “SDA colloquium presentations”, which was attended by 40–50 researchers and students from SDA. The goal of her visit was to exchange experience and ideas on semantic web techniques specialized for smart services, including the Internet of Things, Industry 4.0 technologies, and more. Apart from presenting various use cases where smart services have helped scientists gain useful insights from sensor data, Dr. Maleshkova shared with our group future research problems and challenges related to this research area and addressed the question of what is actually so smart about Smart Services.

 
As an outcome of this visit, we expect to strengthen our research collaboration networks with KIT, mainly on combining semantic knowledge and distributed analytics applied on SANSA.

Luís Garcia received the prize for the best PhD thesis

We are very pleased to announce that Dr. Luís Paulo Faina Garcia, a researcher at SDA, received the prize for the best PhD thesis in Computer Science in 2016 from the Brazilian government agency CAPES. The title of his thesis is “Noise Detection in Classification Problems”, supervised by Prof. Dr. André de Carvalho from the University of São Paulo. In 2017 his thesis was also selected among the best theses by the Brazilian Computer Science Society.

The main contributions of his work improved the accuracy of a Machine Learning system, based on noise detection, used to predict non-native species in protected areas of the Brazilian state of São Paulo. The work also resulted in several publications at good conferences and in high-quality journals.

 

Short Abstract: Large volumes of data have been produced in many application domains. Nonetheless, when data quality is low, the performance of Machine Learning techniques is harmed. Real data are frequently affected by noise; when such data are used to train Machine Learning techniques for predictive tasks, the result can be complex models with high induction time and low predictive performance. Identification and removal of noise can improve data quality and, as a result, the induced model. This thesis proposes new techniques for noise detection and the development of a recommendation system based on meta-learning to recommend the most suitable noise filter for new tasks. Experiments using artificial and real datasets show the relevance of this research.
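
As a generic point of reference for what a noise filter does (this is the classical classification-filter idea, not one of the new techniques proposed in the thesis), one can discard training instances that a cross-validated classifier misclassifies:

# Generic classification noise filter (for illustration only): drop training
# instances that a cross-validated classifier predicts incorrectly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

# Synthetic dataset with roughly 10% label noise injected via flip_y.
X, y = make_classification(n_samples=500, flip_y=0.1, random_state=0)

predicted = cross_val_predict(RandomForestClassifier(random_state=0), X, y, cv=10)
keep = predicted == y                      # instances the filter considers clean
X_clean, y_clean = X[keep], y[keep]
print(f"Removed {np.sum(~keep)} suspected noisy instances out of {len(y)}")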

 

Prof. Manolis Koubarakis visits SDA

Prof. Manolis Koubarakis from the Department of Informatics and Telecommunications at the National and Kapodistrian University of Athens visited the SDA group on the 21st of September 2017.
Manolis Koubarakis and his research group on Management of Data, Information & Knowledge (MaDgIK) have been working for the last 7 years on managing geospatial data and have contributed to various research projects and applications in this domain. Examples of successful projects include LEO: Linked Earth Observation Data and MELODIES: Maximizing the Exploitation of Linked Open Data in Enterprise and Science; some of their applications, widely used by the research community, are Strabon (a spatiotemporal RDF store) and SEXTANT (a web-based platform for visualizing, exploring, and interacting with time-evolving linked geospatial data).
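
For readers unfamiliar with these tools, the snippet below gives a rough idea of what querying linked geospatial data looks like: a GeoSPARQL query, the kind of query a spatiotemporal RDF store such as Strabon can answer, sent to a SPARQL endpoint via SPARQLWrapper. The endpoint URL and the polygon are placeholders.

# Rough illustration: a GeoSPARQL query of the kind a spatiotemporal RDF store
# such as Strabon can answer. Endpoint URL and polygon are placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

QUERY = """
PREFIX geo:  <http://www.opengis.net/ont/geosparql#>
PREFIX geof: <http://www.opengis.net/def/function/geosparql/>
SELECT ?feature ?wkt WHERE {
  ?feature geo:hasGeometry ?geom .
  ?geom geo:asWKT ?wkt .
  FILTER(geof:sfWithin(?wkt,
    "POLYGON((6.9 50.6, 7.3 50.6, 7.3 50.8, 6.9 50.8, 6.9 50.6))"^^geo:wktLiteral))
} LIMIT 10
"""

sparql = SPARQLWrapper("http://example.org/strabon/endpoint")  # placeholder endpoint
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for binding in sparql.query().convert()["results"]["bindings"]:
    print(binding["feature"]["value"], binding["wkt"]["value"])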

The goal of his visit was to exchange experience and ideas on data management techniques specifically for geospatial data. Apart from presenting various use cases where geospatial tools have helped scientists gain useful insights from scientific data, Prof. Koubarakis shared with our group future research problems and challenges related to this research area. From our side, SDA researchers presented their work on managing Big Data (query processing, analytics, benchmarking, etc.), as well as related tools such as SANSA – Semantic Analytics Stack and Ontario – Semantic Data Lake.

 
SDA and MaDgIK have already been working together for several years in the context of the EU H2020 projects Big Data Europe and WDAqua and hope to strengthen this collaboration in new projects and joint research activities. An important outcome of this meeting was the plan to organize a joint workshop on managing scientific geospatial data in the near future.

SDA at TPDL 2017 & an Honorary Mention Award

TPDL 2017: The 21st edition of the International Conference on Theory and Practice of Digital Libraries took place in Thessaloniki, Greece, from September 18 to 21, 2017.

We, as the SDA group, had four scientific papers accepted and presented:

And we are happy to have won the Honorary Mention award for the long paper entitled “Exploiting Interlinked Research Metadata”, presented by Sahar Vahdati.

Paper abstract: OpenAIRE, the Open Access Infrastructure for Research in Europe, aggregates metadata about research (projects, publications, people, organizations, etc.) into a central Information Space. OpenAIRE aims at increasing the interoperability and reusability of this data collection by exposing it as Linked Open Data (LOD). By following the LOD principles, it is now possible to further increase interoperability and reusability by connecting the OpenAIRE LOD to other datasets about projects, publications, people, and organizations. Doing so required us to identify link discovery tools that perform well, as well as candidate datasets that provide comprehensive scholarly communication metadata, and then to specify linking rules. We demonstrate the added value that interlinking provides for end users by implementing visual frontends for looking up publications to cite and for publication statistics, and by evaluating their usability on top of interlinked vs. non-interlinked data.
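
To make the notion of an interlink concrete, the toy snippet below creates a single owl:sameAs link between two invented, placeholder IRIs that stand for the same project in the OpenAIRE LOD and in another dataset; in practice such links are generated at scale by link discovery tools, as described in the paper.

# Toy illustration of an interlink; both IRIs are invented placeholders.
from rdflib import Graph, URIRef
from rdflib.namespace import OWL

g = Graph()
openaire_project = URIRef("http://lod.openaire.eu/data/project/example-123")  # placeholder
external_project = URIRef("http://example.org/funding/project/example-123")   # placeholder
g.add((openaire_project, OWL.sameAs, external_project))

print(g.serialize(format="nt"))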

This year at TPDL 2017, three very interesting keynote speeches were given: by Paul Groth on “Machines are people too”, by Elton Barker on “Back to the future: annotating, collaborating and linking in a digital ecosystem”, and by Dimitrios Tzovaras on “Visualization in the big data era: data mining from networked information”.

Thanks to all the organizers of TPDL 2017, in particular the general chairs:

  • Yannis Manolopoulos, Aristotle University of Thessaloniki, Greece
  • Lazaros Iliadis, Democritus University of Thrace, Greece