Paper Accepted at LREC 2020

We are very pleased to announce that our group got a paper accepted for presentation at LREC 2020 (International Conference on Language Resources and Evaluation). LREC has become the major event on Language Resources (LRs) and Evaluation for Language Technologies (LT). The aim of LREC is to provide an overview of the state of the art, explore new R&D directions and emerging trends, and exchange information regarding LRs and their applications, evaluation methodologies and tools, ongoing and planned activities, industrial uses and needs, and requirements coming from the e-society, with respect to both policy issues and technological and organisational ones.

Here is the pre-print of the accepted paper with its abstract:

Treating Dialogue Quality Evaluation as an Anomaly Detection Problem
By Rostislav Nedelchev, Jens Lehmann and Ricardo Usbeck.
Abstract: Dialogue systems for interaction with humans have been enjoying increased popularity in the research and industry fields. To this day, the best way to estimate their success is through means of human evaluation and not automated approaches, despite the abundance of work done in the field. In this paper, we investigate the effectiveness of perceiving dialogue evaluation as an anomaly detection task. The paper looks into four dialogue modeling approaches and how their objective functions correlate with human annotation scores. A high-level perspective exhibits negative results. However, a more in-depth look shows limited potential for using anomaly detection for evaluating dialogues.
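
The core question here is whether a dialogue model's own objective function can act as an anomaly score that tracks human quality judgements. As a rough illustration of that check (our own toy sketch, not the authors' code), the snippet below computes the rank correlation between hypothetical per-response model losses and human annotation scores; all values and the scipy-based correlation are assumptions for illustration only.

    # Toy sketch: does a model's loss behave like an anomaly score for dialogue quality?
    # Hypothetical numbers; in the paper these would come from dialogue models and human annotators.
    import numpy as np
    from scipy.stats import spearmanr

    # Per-response losses from some dialogue model (higher = more "anomalous").
    model_loss = np.array([2.1, 0.8, 3.5, 1.2, 4.0, 0.9])
    # Human quality ratings for the same responses (higher = better).
    human_score = np.array([2, 5, 1, 4, 1, 5])

    # If the loss works as an anomaly score, it should correlate negatively with human quality.
    rho, p_value = spearmanr(model_loss, human_score)
    print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")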

Papers Accepted at ESWC

We are very pleased to announce that our group got five papers (three papers in the main tracks and two in the Cleopatra Workshop) accepted for presentation at ESWC 2020 (European Semantic Web Conference 2020). The ESWC is a major venue for discussing the latest scientific results and technology innovations around semantic technologies. Building on its past success, ESWC is seeking to broaden its focus to span other related research areas in which Web semantics plays an important role.

Here are the pre-prints of the papers with their abstracts that have been accepted in the main tracks:

  • A Knowledge Graph for Industry 4.0
    By Sebastian R. Bader, Irlan Grangel-Gonzalez, Priyanka Nanjappa, Maria-Esther Vidal, and Maria Maleshkova.
    Abstract: One of the most crucial tasks for today’s knowledge workers is to get and retain a thorough overview on the latest state of the art. Especially in dynamic and evolving domains, the amount of relevant sources is constantly increasing, updating and overruling previous methods and approaches. For instance, the digital transformation of manufacturing systems, called Industry 4.0, currently faces an overwhelming amount of standardization efforts and reference initiatives, resulting in a sophisticated information environment. We propose a structured dataset in the form of a semantically annotated knowledge graph for Industry 4.0 related standards, norms and reference frameworks. The graph provides a Linked Data-conform collection of annotated, classified reference guidelines supporting newcomers and experts alike in understanding how to implement Industry 4.0 systems. We illustrate the suitability of the graph for various use cases, its already existing applications, present the maintenance process and evaluate its quality.

  • VQuAnDa: Verbalization QUestion Answering DAtaset
    By Endri Kacupaj, Hamid Zafar, Jens Lehmann and Maria Maleshkova.
    Abstract: Question Answering (QA) systems over Knowledge Graphs (KGs) aim to provide a concise answer to a given natural language question. Despite the significant evolution of QA methods over the past years, there are still some core lines of work, which are lagging behind. This is especially true for methods and datasets that support the verbalization of answers in natural language. Specifically, to the best of our knowledge, none of the existing Question Answering datasets provide any verbalization data for the question-query pairs. Hence, we aim to fill this gap by providing the first QA dataset VQuAnDa that includes the verbalization of each answer. We base VQuAnDa on a commonly used large-scale QA dataset — LC-QuAD, in order to support compatibility and continuity of previous work. We complement the dataset with baseline scores for measuring future training and evaluation work, by using a set of standard sequence to sequence models and sharing the results of the experiments. This resource empowers researchers to train and evaluate a variety of models to generate answer verbalizations.

  • Embedding-based Recommendations on Scholarly Knowledge Graphs
    By Mojtaba Nayyeri, Sahar Vahdati, Xiaotian Zhou, Hamed Shariat Yazdi, and Jens Lehmann.
    Abstract: The increasing availability of scholarly metadata in the form of Knowledge Graphs (KG) offers opportunities for studying the structure of scholarly communication and evolution of science. Such KGs build the foundation for knowledge-driven tasks, e.g., link discovery, prediction and entity classification, which allow to provide recommendation services. Knowledge graph embedding (KGE) models have been investigated for such knowledge-driven tasks in different application domains. One of the applications of KGE models is to provide link predictions, which can also be viewed as a foundation for recommendation services, e.g., high-confidence “co-author” links in a scholarly knowledge graph can be seen as suggested collaborations. In this paper, KGEs are reconciled with a specific loss function (Soft Margin) and examined with respect to their performance for the co-authorship link prediction task on scholarly KGs. The results show a significant improvement in the accuracy of the experimented KGE models on the considered scholarly KGs using this specific loss. TransE with Soft Margin (TransE-SM) obtains a score of 79.5% Hits@10 for the co-authorship link prediction task while the original TransE obtains 77.2% on the same task. In terms of accuracy and Hits@10, TransE-SM also outperforms other state-of-the-art embedding models such as ComplEx, ConvE and RotatE in this setting. The predicted co-authorship links have been validated by evaluating the profile of scholars.
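
For readers unfamiliar with translational embeddings, here is a minimal sketch of a TransE-style score together with a softened margin ranking loss on toy vectors. It is our own simplified illustration under random toy embeddings, not the exact Soft Margin (TransE-SM) formulation evaluated in the paper.

    # Toy sketch of a TransE-style score and a softened margin ranking loss.
    # Illustrative only; the paper's TransE-SM loss may differ in its exact form.
    import numpy as np

    def transe_distance(h, r, t):
        # TransE models a triple (h, r, t) as h + r ≈ t; a smaller distance means more plausible.
        return np.linalg.norm(h + r - t)

    rng = np.random.default_rng(0)
    dim = 8
    h, r = rng.normal(size=dim), rng.normal(size=dim)
    t_pos = h + r + 0.1 * rng.normal(size=dim)   # a plausible "co-author" triple
    t_neg = rng.normal(size=dim)                 # a corrupted (negative) triple

    margin = 1.0
    d_pos, d_neg = transe_distance(h, r, t_pos), transe_distance(h, r, t_neg)
    # The softplus turns the hard margin hinge into a smooth, "soft" penalty.
    loss = np.log1p(np.exp(d_pos - d_neg + margin))
    print(f"positive distance {d_pos:.2f}, negative distance {d_neg:.2f}, loss {loss:.2f}")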

Here are the pre-prints of the papers with their abstracts that have been accepted in the Cleopatra Workshop:

  • Training Multimodal Systems for Classification with Multiple Objectives
    By Jason Armitage, Shramana Thakur, Rishi Tripathi, Jens Lehmann and Maria Maleshkova.
    Abstract: We learn about the world from a diverse range of sensory information. Automated systems lack this ability and are confined to processing information presented in only a single format. Adapting architectures to learn from multiple modalities creates the potential to learn rich representations – but current systems only deliver marginal improvements on unimodal approaches. Neural networks learn sampling noise during training with the result that performance on unseen data is degraded. This research introduces a second objective over the multimodal fusion process learned with variational inference. Regularisation methods are implemented in the inner training loop to control variance and the modular structure stabilises performance as additional neurons are added to layers. This framework is evaluated on a multilabel classification task with textual and visual inputs to demonstrate the potential for multiple objectives and probabilistic methods to lower variance and improve generalisation.
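
As a rough, self-contained illustration of placing a second, variational objective on top of a multimodal fusion step (our own toy construction with random weights, not the paper's architecture or training setup), the snippet below fuses a text and an image feature vector into a Gaussian latent code and adds a KL-divergence term to a classification loss.

    # Toy sketch: fuse two modality vectors into a Gaussian latent and regularise with a KL term.
    # Our own construction for illustration; the paper's variational fusion differs in detail.
    import numpy as np

    rng = np.random.default_rng(1)
    text_feat = rng.normal(size=16)    # stand-in for text features
    image_feat = rng.normal(size=16)   # stand-in for image features
    fused = np.concatenate([text_feat, image_feat])

    # Linear maps to the parameters of a diagonal Gaussian latent (random stand-in weights).
    W_mu = 0.1 * rng.normal(size=(8, 32))
    W_logvar = 0.1 * rng.normal(size=(8, 32))
    mu, logvar = W_mu @ fused, W_logvar @ fused
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=8)   # reparameterised sample

    # Classification head over three hypothetical labels with a cross-entropy loss.
    W_cls = 0.1 * rng.normal(size=(3, 8))
    logits = W_cls @ z
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    ce_loss = -np.log(probs[0])          # pretend label 0 is the correct one

    # The KL divergence to a standard normal prior acts as the second objective.
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    total = ce_loss + 0.1 * kl           # 0.1 is an arbitrary weighting
    print(f"cross-entropy {ce_loss:.2f}, KL {kl:.2f}, total {total:.2f}")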

Paper Accepted at IFIP SEC

We are very pleased to announce that our group got a paper accepted for presentation at the International Information Security and Privacy Conference (IFIP SEC). The IFIP SEC conferences aim to bring together primarily researchers, but also practitioners from academia, industry and governmental institutions, to elaborate on and discuss the IT security and privacy challenges that we face today and will face in the future.

Here is the pre-print of the accepted paper with its abstract:

“Establishing a Strong Baseline for Privacy Policy Classification” by Najmeh Mousavi Nejad, Pablo Jabat, Rostislav Nedelchev, Simon Scerri, Damien Graux.
Abstract: Digital service users are routinely exposed to Privacy Policy consent forms, through which they enter contractual agreements consenting to the specifics of how their personal data is managed and used. Nevertheless, despite renewed importance following legislation such as the European GDPR, a majority of people still ignore policies due to their length and complexity. To counteract this potentially dangerous reality, in this paper we present three different models that are able to assign pre-defined categories to privacy policy paragraphs, using supervised machine learning. In order to train our neural networks, we exploit a dataset containing 115 privacy policies defined by US companies. An evaluation shows that our approach outperforms the state of the art by 5% over comparable and previously-reported F1 values. In addition, our method is completely reproducible since we provide open access to all resources. Given these two contributions, our approach can be considered as a strong baseline for privacy policy classification.
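
The models in the paper are neural, but the underlying task, assigning predefined categories to privacy policy paragraphs with supervised learning, can be illustrated with a much simpler baseline. The sketch below uses invented paragraphs and labels (not the 115-policy dataset or the authors' models) and trains a TF-IDF plus logistic regression classifier with scikit-learn.

    # Toy sketch of supervised privacy policy paragraph classification with a simple baseline.
    # Invented paragraphs and labels; the paper trains neural models on 115 real policies.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    paragraphs = [
        "We collect your email address and location when you sign up.",
        "You may request deletion of your personal data at any time.",
        "We share aggregated statistics with advertising partners.",
        "Users can opt out of marketing emails in their account settings.",
    ]
    labels = ["data collection", "user rights", "third-party sharing", "user rights"]

    # TF-IDF features over unigrams and bigrams feed a linear classifier.
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
    clf.fit(paragraphs, labels)

    print(clf.predict(["We may disclose your data to our business partners."]))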

Paper Accepted at ACM SAC

We are very pleased to announce that our group got a paper accepted for presentation at ACM SAC. The ACM Symposium on Applied Computing (SAC) has been a primary international forum for applied computer scientists, computer engineers and application developers to gather, interact and present their work.

Here is the pre-print of the accepted paper with its abstract:

“Towards the Semantic Formalization of Science” by Said Fathalla, Sören Auer, Christoph Lange.
Abstract: The past decades have witnessed a huge growth in scholarly information published on the Web, mostly in unstructured or semi-structured formats, which hampers scientific literature exploration and scientometric studies. Past studies on ontologies for structuring scholarly information focused on describing scholarly articles’ components, such as document structure, metadata and bibliographies, rather than the scientific work itself. Over the past four years, we have been developing the Science Knowledge Graph Ontologies (SKGO), a set of ontologies for modeling the research findings in various fields of modern science resulting in a knowledge graph. Here, we introduce this ontology suite and discuss the design considerations taken into account during its development. We deem that within the next few years, a science knowledge graph is likely to become a crucial component for organizing and exploring scientific work.

Papers Accepted at ECAI

We are very pleased to announce that our group got two papers accepted for presentation at ECAI 2020 (European Conference on Artificial Intelligence), Europe’s premier AI research venue. Under the motto “Paving the way towards Human-Centric AI”, ECAI provides an opportunity for researchers to present and discuss the best AI research, developments, applications and results.

Here are the pre-prints of the accepted papers with their abstracts:

  • “Distantly Supervised Question Parsing” by Hamid Zafar, Maryam Tavakol, Jens Lehmann.
    Abstract: The emergence of structured databases for Question Answering (QA) systems has led to developing methods, in which the problem of learning the correct answer efficiently is based on a linking task between the constituents of the question and the corresponding entries in the database. As a result, parsing the questions in order to determine their main elements, which are required for answer retrieval, becomes crucial. However, most datasets for question answering systems lack gold annotations for parsing, i.e., labels are only available in the form of (question, formal-query, answer). In this paper, we propose a distantly supervised learning framework based on reinforcement learning to learn the mentions of entities and relations in questions. We leverage the provided formal queries to characterize delayed rewards for optimizing a policy gradient objective for the parsing model. An empirical evaluation of our approach shows a significant improvement in the performance of entity and relation linking compared to the state of the art. We also demonstrate that a more accurate parsing component enhances the overall performance of QA systems.
  • “MDE: Multiple Distance Embeddings for Link Prediction in Knowledge Graphs” by Afshin Sadeghi, Damien Graux, Hamed Shariat Yazdi, and Jens Lehmann.
    Abstract: Over the past decade, knowledge graphs became popular for capturing structured domain knowledge. Relational learning models enable the prediction of missing links inside knowledge graphs. More specifically, latent distance approaches model the relationships among entities via a distance between latent representations. Translating embedding models (e.g., TransE) are among the most popular latent distance approaches which use one distance function to learn multiple relation patterns. However, they are not capable of capturing symmetric relations. They also force relations with reflexive patterns to become symmetric and transitive. In order to improve distance based embedding, we propose multi-distance embeddings (MDE). Our solution is based on the idea that by learning independent embedding vectors for each entity and relation one can aggregate contrasting distance functions. Benefiting from MDE, we also develop supplementary distances resolving the above-mentioned limitations of TransE. We further propose an extended loss function for distance based embeddings and show that MDE and TransE are fully expressive using this loss function. Furthermore, we obtain a bound on the size of their embeddings for full expressivity. Our empirical results show that MDE significantly improves the translating embeddings and outperforms several state-of-the-art embedding models on benchmark datasets.
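
To make the “multiple distances” idea a bit more concrete, here is a minimal sketch that aggregates a few contrasting translational distance terms into a single score for a triple. It is our own simplified illustration of the general principle on random vectors, not the exact MDE score or loss defined in the paper.

    # Toy sketch: aggregate contrasting translational distance terms into one score for a triple.
    # Our own simplified illustration of the multi-distance idea; not the paper's exact MDE model.
    import numpy as np

    def multi_distance_score(h, r, t, weights=(1.0, 1.0, 1.0)):
        # The full model learns independent embedding vectors per term; we reuse one vector
        # per element here to keep the example short.
        d1 = np.linalg.norm(h + r - t)   # TransE-style term
        d2 = np.linalg.norm(t + r - h)   # reversed term, friendlier to symmetric relations
        d3 = np.linalg.norm(h + t - r)   # a further contrasting term
        return sum(w * d for w, d in zip(weights, (d1, d2, d3)))

    rng = np.random.default_rng(2)
    h, r, t = (rng.normal(size=8) for _ in range(3))
    print(f"aggregated distance: {multi_distance_score(h, r, t):.2f}")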

Quantum Natural Language Processing (QNLP) in Oxford

From 5 to 6 December, a conference on QNLP took place at St. Aldate’s Church in Oxford. The event was organized by the Quantum Group at the Department of Computer Science of the University of Oxford, with support from the companies Cambridge Quantum Computing (CQC) and IBM. Two members of the SDA team in Dresden, Cedric Möller and Daniel Steinigen, participated in the conference. It was also the first conference ever dedicated to this combination of NLP and quantum computing.

Quantum Artificial Intelligence (QAI) has attracted increasing research interest in recent years. Noisy intermediate-scale quantum (NISQ) computers already make it possible to run quantum algorithms and to explore possible advantages for NLP. Since the mathematical foundations of quantum theory are very similar to those of compositional NLP based on applied category theory, quantum computers should provide a natural setting for compositional NLP tasks [1].

[1] Zeng, Coecke: Quantum Algorithms for Compositional Natural Language Processing. https://arxiv.org/abs/1608.01406