Papers Accepted at CIKM 2020

We are very pleased to announce that two papers from our group have been accepted for presentation at CIKM 2020 (Conference on Information and Knowledge Management). CIKM seeks to identify challenging problems facing the development of future knowledge and information systems, and to shape future directions of research by soliciting and reviewing high-quality applied and theoretical research findings. An important part of the conference is the Workshops and Tutorials program, which focuses on timely research challenges and initiatives and brings together research papers, industry speakers, and keynote speakers. The program also showcases posters, demonstrations, competitions, and other special events.

  • Evaluating the Impact of Knowledge Graph Context on Entity Disambiguation Models
    By Isaiah Onando Mulang, Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart, and Jens Lehmann.
    Abstract: Pretrained Transformer models have emerged as state-of-the-art approaches that learn contextual information from the text to improve the performance of several NLP tasks. These models, albeit powerful, still require specialized knowledge in specific scenarios. In this paper, we argue that context derived from a knowledge graph (in our case: Wikidata) provides enough signals to inform pretrained transformer models and improve their performance for named entity disambiguation (NED) on the Wikidata KG. We further hypothesize that our proposed KG context can be standardized for Wikipedia, and we evaluate the impact of KG context on the state-of-the-art NED model for the Wikipedia knowledge base. Our empirical results validate that the proposed KG context can be generalized (for Wikipedia), and providing KG context in transformer architectures considerably outperforms the existing baselines, including the vanilla transformer models. (A minimal, purely illustrative sketch of this idea appears after this list.)
  • MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities
    By Jason Armitage, Endri Kacupaj, Golsa Tahmasebzadeh, Swati, Maria Maleshkova, Ralph Ewerth, and Jens Lehmann.
    Abstract: In this paper, we introduce the MLM (Multiple Languages and Modalities) dataset, a new resource to train and evaluate multitask systems on samples in multiple modalities and three languages. The generation process and inclusion of semantic data provide a resource that further tests the ability of multitask systems to learn relationships between entities. The dataset is designed for researchers and developers who build applications that perform multiple tasks on data encountered on the web and in digital archives. A second version of MLM provides a geo-representative subset of the data with weighted samples for countries of the European Union. We demonstrate the value of the resource in developing novel applications in the digital humanities with a motivating use case and specify a benchmark set of tasks to retrieve modalities and locate entities in the dataset. Evaluation of baseline multitask and single-task systems on the full and geo-representative versions of MLM demonstrates the challenges of generalising on diverse data. In addition to the digital humanities, we expect the resource to contribute to research in multimodal representation learning, location estimation, and scene understanding. (A hypothetical sketch of such a multimodal sample follows below.)
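
As a rough illustration of the idea in the first paper, the sketch below shows one way verbalised Wikidata facts could be paired with a mention's sentence before being fed to a pretrained transformer. This is a minimal sketch, not the authors' implementation: the example sentence, the candidate entity, its facts, and the sentence-pair encoding scheme are all assumptions for illustration.

    # Minimal sketch (not the paper's code): supplying verbalised Wikidata
    # facts about a candidate entity as a second text segment, so a
    # pretrained transformer can attend jointly over the mention's sentence
    # and the KG context when scoring the candidate.
    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

    # Sentence containing the ambiguous mention "Paris".
    sentence = "Paris announced a new film project last week."

    # Hypothetical 1-hop Wikidata facts about one candidate entity,
    # verbalised as plain text (triples chosen for illustration only).
    kg_context = ("Paris Hilton: instance of human; occupation actress; "
                  "country of citizenship United States of America.")

    # Encode as a sentence pair: [CLS] mention sentence [SEP] KG context [SEP].
    encoded = tokenizer(sentence, kg_context, truncation=True, max_length=128)
    print(tokenizer.decode(encoded["input_ids"]))

A disambiguation model would score each candidate's pair encoding and select the highest-scoring entity; the paper evaluates how much such KG context helps over text-only baselines.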
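
For the second paper, the sketch below illustrates the kind of multimodal, multilingual sample an MLM-style benchmark pairs for its tasks. All field names and values here are hypothetical assumptions for illustration, not the actual MLM schema.

    # Hypothetical sketch of an MLM-style sample; the schema and values are
    # assumed for illustration, not taken from the released dataset.
    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class Sample:
        entity_id: str                    # e.g. a Wikidata QID
        summaries: Dict[str, str]         # language code -> textual summary
        image_path: str                   # path to an associated image
        coordinates: Tuple[float, float]  # (latitude, longitude)

    sample = Sample(
        entity_id="Q64",
        summaries={
            "en": "Berlin is the capital of Germany.",
            "de": "Berlin ist die Hauptstadt Deutschlands.",
            "fr": "Berlin est la capitale de l'Allemagne.",
        },
        image_path="images/Q64.jpg",
        coordinates=(52.52, 13.405),
    )

    # A cross-modal retrieval task matches summaries against image embeddings;
    # a location-estimation task predicts the coordinates from text or image.
    print(sample.entity_id, sorted(sample.summaries))

The multiple language summaries mirror the dataset's three-language coverage, and the coordinates field reflects the location-estimation benchmark task described in the abstract.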