We are happy to announce that we got two papers accepted for presentation at WI-IAT '20 (International Joint Conference on Web Intelligence and Intelligent Agent Technology). WI-IAT '20 provides a premier international forum to bring together researchers and practitioners from diverse fields for the presentation of original research results, as well as the exchange and dissemination of innovative and practical development experiences on Web intelligence and intelligent agent technology research and applications.
Here are the pre-prints of the accepted papers with their abstracts:
Multilingual Ontology Merging Using Cross-lingual Matching
By Shimaa Ibrahim, Said Fathalla, Jens Lehmann, and Hajira Jabeen.
Abstract: With the growing amount of multilingual data on the Semantic Web, several ontologies (in different natural languages) have been developed to model the same domain. Creating multilingual ontologies by merging such monolingual ones is important to promote semantic interoperability among ontologies in different natural languages, and is a step towards achieving the multilingual Semantic Web. In this paper, we propose MULON, an approach for merging monolingual ontologies in different natural languages to produce a multilingual ontology. The MULON approach comprises three modules: a Preparation Module, a Merging Module, and an Assessment Module. We consider both classes and properties in the merging process. We present three real-world use cases describing the usability of the MULON approach in different domains. We assess the quality of the merged ontologies using a set of predefined assessment metrics. MULON has been implemented using Scala and Apache Spark under an open-source license. We have compared our cross-lingual matching results with the results from the Ontology Alignment Evaluation Initiative (OAEI 2019). MULON has achieved relatively high precision, recall, and F-measure in comparison to three state-of-the-art approaches in the matching process, and significantly higher coverage without any redundancy in the merging process.
OWLStats: Distributed Computation of OWL Dataset Statistics
By Heba Mohamed, Said Fathalla, Jens Lehmann, and Hajira Jabeen.
Abstract: Nowadays, ontologies are used in various application areas, including Artificial Intelligence, Natural Language Processing, Data Integration, and Knowledge Management. Knowing the internal structure, distribution, and coherence of published datasets is essential to make them easier to reuse, interlink, integrate, infer over, or query. Therefore, as OWL datasets become more prevalent, there is a pressing need to obtain a clear view of their content. In this paper, we present OWLStats, a software component for computing statistical information about large-scale OWL datasets in a distributed manner. We present the first distributed in-memory approach for computing 32 different statistical criteria for OWL datasets utilizing Apache Spark, which can scale horizontally to a cluster of machines. OWLStats has been integrated into the SANSA framework. The preliminary results prove that OWLStats is linearly scalable in terms of data scalability.
We are very pleased to announce that we got three papers accepted for presentation at COLING 2020 (International Conference on Computational Linguistics). The first COLING was held in New York in 1965, with the last iteration in Santa Fe, USA, in 2018. Throughout its history, COLING has brought together researchers from across the field of Computational Linguistics. COLING 2020 continues this tradition and thus welcomes papers on all topics related to both natural language and computation, with the expectation that all papers will include linguistic insight.
Here are the pre-prints of the accepted papers with their abstracts:
- Language Model Transformers as Evaluators for Open-domain Dialogues
By Rostislav Nedelchev, Ricardo Usbeck, and Jens Lehmann.
Abstract: Computer-based systems for communication with humans have been a cornerstone of AI research since the 1950s. So far, the most effective way to assess the quality of the dialogues produced by these systems is resource-intensive manual labor rather than automated means. In this work, we investigate whether language models (LMs) based on transformer neural networks can indicate the quality of a conversation. In a general sense, language models are methods that learn to predict one or more words based on an already given context. Due to their unsupervised nature, they are candidates for efficient, automatic indication of dialogue quality. We demonstrate a positive correlation between the output of the language models and the scores given by human evaluators. We also provide some insights into their behavior and inner workings in a conversational context.
- Knowledge Graph Embeddings in Geometric Algebras
By Chengjin Xu, Mojtaba Nayyeri, Yung-Yu Chen, and Jens Lehmann.
Abstract: Knowledge graph (KG) embedding aims at embedding entities and relations in a KG into a low-dimensional latent representation space. Existing KG embedding approaches model entities and relations in a KG by utilizing real-valued, complex-valued, or hypercomplex-valued (Quaternion or Octonion) representations, all of which are subsumed into a geometric algebra. In this work, we introduce a novel geometric algebra-based KG embedding framework, GeomE, which utilizes multivector representations and the geometric product to model entities and relations. Our framework subsumes several state-of-the-art KG embedding approaches and is advantageous with its ability to model various key relation patterns, including (anti-)symmetry, inversion, and composition, its rich expressiveness with a higher degree of freedom, and its good generalization capacity. Experimental results on multiple benchmark knowledge graphs show that the proposed approach outperforms existing state-of-the-art models for link prediction.
- TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation
By Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, and Jens Lehmann.
Abstract: In the last few years, there has been a surge of interest in learning representations of entities and relations in knowledge graphs (KGs). However, the recent availability of temporal knowledge graphs (TKGs), which contain time information for each fact, created the need for reasoning over time in such TKGs. In this regard, we present a new approach to TKG embedding, TeRo, which defines the temporal evolution of an entity embedding as a rotation from the initial time to the current time in the complex vector space. Specifically, for facts involving time intervals, each relation is represented as a pair of dual complex embeddings to handle the beginning and the end of the relation, respectively. We show that our proposed model overcomes the limitations of existing KG and TKG embedding models and can learn and infer various relation patterns over time. Experimental results on three different TKGs show that TeRo significantly outperforms existing state-of-the-art models for link prediction. In addition, we analyze the effect of time granularity on link prediction over TKGs, which, as far as we know, has not been investigated in previous literature.
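The temporal rotation at the heart of TeRo can be illustrated in a few lines. The sketch below uses our own symbol names and a simplified translation-style distance score, not the paper's exact formulation: a complex-valued entity embedding is evolved over time by an element-wise rotation in the complex plane, and a fact is scored by the distance between the rotated head plus the relation and the conjugated rotated tail.

```python
import numpy as np

def rotate(entity: np.ndarray, omega: np.ndarray, tau: float) -> np.ndarray:
    """Evolve a complex-valued entity embedding from the initial time to time tau
    by an element-wise rotation with per-dimension frequencies omega."""
    return entity * np.exp(1j * omega * tau)

def score(head, tail, relation, omega, tau):
    """Distance-based plausibility score (lower = more plausible).
    Illustrative only: the paper's loss, negative sampling, and the dual
    embeddings for time intervals are omitted."""
    h_t = rotate(head, omega, tau)
    t_t = rotate(tail, omega, tau)
    return np.linalg.norm(h_t + relation - np.conj(t_t))

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)
head = rng.normal(size=d) + 1j * rng.normal(size=d)
tail = rng.normal(size=d) + 1j * rng.normal(size=d)
relation = rng.normal(size=d) + 1j * rng.normal(size=d)
omega = rng.normal(size=d)  # rotation frequencies, one per dimension

print(score(head, tail, relation, omega, tau=3.0))
```

Note that the rotation preserves the modulus of every embedding dimension, so only the phase of the representation evolves over time.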
We are very pleased to announce that our paper “Interactive Query Construction in Semantic Question Answering Systems” has been accepted in the Journal of Web Semantics. The Journal of Web Semantics is an interdisciplinary journal based on research and applications of various subject areas that contribute to the development of a knowledge-intensive and intelligent service Web.
Here is the pre-print of the published paper with its abstract:
Interactive Query Construction in Semantic Question Answering Systems
By Hamid Zafar, Mohnish Dubey, Jens Lehmann, and Elena Demidova
Abstract: Semantic Question Answering (SQA) systems automatically interpret user questions expressed in a natural language in terms of semantic queries. This process involves uncertainty, such that the resulting queries do not always accurately match the user intent, especially for more complex and less common questions. In this article, we aim to empower users in guiding SQA systems towards the intended semantic queries through interaction. We introduce IQA, an interaction scheme for SQA pipelines. This scheme facilitates seamless integration of user feedback in the question answering process and relies on Option Gain, a novel metric that enables efficient and intuitive user interaction. Our evaluation shows that using the proposed scheme, even a small number of user interactions can lead to significant improvements in the performance of SQA systems.
We are happy to announce that we got a paper accepted for presentation at iiWAS2020 (Information Integration and Web-based Applications & Services). iiWAS2020 is a leading international conference for researchers and industry practitioners to share their new ideas, original research results and practical development experiences from all information integration and web-based applications & services related areas.
- Towards an Ontology Representing Characteristics of Inflammatory Bowel Disease
By Abderrahmane Khiat, Mirette Elias, Ann Christina Foldenauer, Michaela Koehm, Irina Blumenstein, and Giulio Napolitano.
Abstract: Inflammatory bowel disease (IBD), including Crohn’s Disease (CD) and Ulcerative Colitis (UC), is a chronic disease characterized by numerous, hard-to-predict periods of relapse and remission. “Digital twin” approaches, leveraging personalized predictive models, would significantly enhance therapeutic decision-making and cost-effectiveness. However, the associated computational and statistical methods require high-quality data from a large population of patients. Such a comprehensive repository is very challenging to build, though, and none is available for IBD. To compensate for the scarcity of data, a promising approach is to employ a knowledge graph, which is built from the available data and would help predict IBD episodes and deliver more relevant personalized therapy at the lowest cost. In this research in progress, we present a knowledge graph developed on the basis of patient data collected at the University Hospital Frankfurt. First, we designed the Chronisch-entzündliche Darmerkrankungen (CED) ontology, which encompasses the vocabulary, specifications, and characteristics associated by physicians with IBD patients, such as disease classification schemas (e.g. the Montreal Classification of inflammatory bowel disease), the status of disease activity, and past and current medications. Next, we defined the mappings between ontology entities and database variables. Physicians participating in the Fraunhofer MED²ICIN project, together with the project members, validated the ontology and the knowledge graph. Furthermore, the knowledge graph has been validated against competency questions compiled by physicians.
We are very pleased to announce that we got a paper accepted for presentation at WISE 2020 (International Conference on Web Information Systems Engineering). WISE has established itself as a community aiming at high quality research and offering the ground for advancing efforts in topics related to Web information systems. WISE 2020 will be an international forum for researchers, professionals, and industrial practitioners to share their knowledge and insights in the rapidly growing areas of Web technologies for Big Data and Artificial Intelligence (AI), two highly important areas for the world economy.
- Encoding Knowledge Graph Entity Aliases in Attentive Neural Network for Wikidata Entity Linking
By Isaiah Onando Mulang, Kuldeep Singh, Akhilesh Vyas, Saeedeh Shekarpour, Maria Esther Vidal, Jens Lehmann, and Sören Auer.
Abstract: Collaborative knowledge graphs such as Wikidata rely heavily on the crowd to author the information. Since the crowd is not bound to a standard protocol for assigning entity titles, the knowledge graph is populated with non-standard, noisy, long, or sometimes awkward titles. The issue of long, implicit, and non-standard entity representations is a challenge for Entity Linking (EL) approaches aiming at high precision and recall. While the underlying KG is in general the source of target entities for EL approaches, it often contains other relevant information, such as aliases of entities (e.g., Obama and Barack Hussein Obama are aliases for the entity Barack Obama). EL models usually ignore such readily available entity attributes. In this paper, we examine the role of knowledge graph context in an attentive neural network approach for entity linking on Wikidata. Our approach contributes by exploiting sufficient context from a KG as a source of background knowledge, which is then fed into the neural network. This approach demonstrates merit in addressing challenges associated with entity titles (multi-word, long, implicit, case-sensitive). Our experimental study shows approximately 8% improvement over the baseline approach, and significantly outperforms an end-to-end approach for Wikidata entity linking.
We are very pleased to announce that we got two papers accepted for presentation at IDEAL 2020 (International Conference on Intelligent Data Engineering and Automated Learning). IDEAL is an annual international conference dedicated to emerging and challenging topics in intelligent data analysis, data mining, and their associated learning systems and paradigms. The conference provides a unique opportunity and stimulating forum for presenting and discussing the latest theoretical advances and real-world applications in Computational Intelligence and Intelligent Data Analysis.
Here are the pre-prints of the accepted papers with their abstracts:
- Meta-Hyperband: Hyperparameter optimization with meta-learning and coarse-to-fine
By Samin Payrosangari, Afshin Sadeghi, Damien Graux, and Jens Lehmann.
Abstract: Hyperparameter optimization is one of the main pillars of machine learning approaches. In this paper, we introduce Meta-Hyperband: a Hyperband-based algorithm that improves the search by adding levels of exploitation. Unlike Hyperband, which is a pure-exploration bandit-based approach to hyperparameter optimization, our meta approach strikes a trade-off between exploration and exploitation by combining Hyperband with meta-learning and Coarse-to-Fine modules. We analyze the performance of Meta-Hyperband on various datasets to tune the hyperparameters of CNNs and SVMs. The experiments indicate that in many cases Meta-Hyperband can discover hyperparameter configurations of higher quality than Hyperband, using similar amounts of resources. In particular, we discovered a CNN configuration for classifying the CIFAR10 dataset which performs 3% better than the configuration found by Hyperband, and which is also 0.3% more accurate than the best-reported configuration of the Bayesian optimization approach. Additionally, we release a publicly available pool of historically well-performing configurations on several datasets for CNNs and SVMs to ease the adoption of Meta-Hyperband.
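To make the exploration/exploitation trade-off concrete, the sketch below implements successive halving, the building block that Hyperband repeats under several bracket configurations. The function and objective names are ours, not the paper's API; comments note where Meta-Hyperband's exploitation modules would plug in.

```python
import math
import random

def successive_halving(configs, evaluate, budget=81, eta=3):
    """Successive halving: train every config on a small budget, keep the
    best 1/eta fraction, multiply the budget by eta, and repeat.
    `evaluate(config, budget)` returns a validation score (higher is better)."""
    rounds = int(round(math.log(len(configs), eta)))
    b = max(1, budget // (eta ** rounds))
    for _ in range(rounds):
        scores = {c: evaluate(c, b) for c in configs}
        configs = sorted(configs, key=scores.get, reverse=True)[: max(1, len(configs) // eta)]
        b *= eta
    return configs[0]

# Toy objective: accuracy peaks at learning rate 0.1 and improves slightly with budget.
def evaluate(lr, budget):
    return -(lr - 0.1) ** 2 + 0.001 * math.log(budget + 1)

random.seed(0)
candidates = [10 ** random.uniform(-4, 0) for _ in range(27)]
# Plain Hyperband samples `candidates` uniformly at random, as done here;
# Meta-Hyperband would instead seed the pool with well-performing past
# configurations (meta-learning) and refine around survivors (coarse-to-fine).
print(successive_halving(candidates, evaluate))
```

Because the candidate pool is the only input to this routine, exploitation can be added without changing the halving loop itself, which is the design point the abstract describes.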
- International Data Spaces Information Model – An Ontology for Sovereign Exchange of Digital Content
By Sebastian Bader, Jaroslav Pullmann, Christian Mader, Sebastian Tramp, Christoph Quix, Andreas Mueller, Haydar Akyürek, Matthias Böckmann, Benedikt Imbusch, Johannes Lipp, Sandra Geisler, and Christoph Lange.
Abstract: The International Data Spaces initiative (IDS) is building an ecosystem to facilitate data exchange in a secure, trusted, and semantically interoperable way. It aims at providing a basis for smart services and cross-company business processes, while at the same time guaranteeing data owners’ sovereignty over their content. The IDS Information Model is an RDFS/OWL ontology defining the fundamental concepts for describing actors in a data space, their interactions, the resources exchanged by them, and data usage restrictions. After introducing the conceptual model and design of the ontology, we explain its implementation on top of standard ontologies as well as the process for its continuous evolution and quality assurance involving a community driven by industry and research organisations. We demonstrate tools that support generation, validation, and usage of instances of the ontology with the focus on data control and protection in a federated ecosystem.
Last week, on Tuesday, the 29th of September 2020, I successfully defended my PhD thesis entitled “Efficient Distributed In-Memory Processing of RDF Datasets”.
Congratulations to @Gezim_Sejdiu for successfully completing his PhD on distributed in-memory processing of RDF datasets at @SDA_Research! Gezim made very significant contributions to the @SANSA_Stack and worked on processing large-scale #KnowledgeGraphs. pic.twitter.com/DVSUkHZIRU
— Jens Lehmann (@JLehmann82) September 29, 2020
See below the thesis abstract, with references to the main papers on which part of the work is based (see here: https://gezimsejdiu.github.io//publications/ for the complete list of publications).
Over the past decade, vast amounts of machine-readable structured information have become available through the automation of research processes as well as the increasing popularity of knowledge graphs and semantic technologies. Today, we count more than 10,000 datasets made available online following Semantic Web standards. A major and yet unsolved challenge that research faces today is to perform scalable analysis of large-scale knowledge graphs in order to facilitate applications in various domains including life sciences, publishing, and the internet of things.
The main objective of this thesis is to lay foundations for efficient algorithms performing analytics, i.e. exploration, quality assessment, and querying over semantic knowledge graphs at a scale that has not been possible before.
First, we propose a novel approach for statistical calculations over large RDF datasets [1], which scales out to clusters of machines.
In particular, we describe the first distributed in-memory approach for computing 32 different statistical criteria for RDF datasets using Apache Spark.
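Each of these statistical criteria follows the same filter-map-reduce shape that DistLODStats expresses over Spark RDDs. The sketch below mimics two such criteria with plain Python collections so the pattern is visible; it is our own illustration, not the SANSA API, and in the real system each stage runs distributed across a cluster.

```python
from collections import Counter
from functools import reduce

# A tiny RDF graph as (subject, predicate, object) triples.
triples = [
    ("ex:Alice", "rdf:type", "ex:Person"),
    ("ex:Bob",   "rdf:type", "ex:Person"),
    ("ex:Bob",   "ex:knows", "ex:Alice"),
    ("ex:Paper", "rdf:type", "ex:Document"),
]

def class_usage_count(ts):
    """Criterion 'class usage': how often each class occurs as an rdf:type object."""
    classes = (o for s, p, o in ts if p == "rdf:type")                    # filter + map
    return reduce(lambda acc, c: acc + Counter([c]), classes, Counter())  # reduce

def distinct_subjects(ts):
    """Criterion 'distinct subjects': how many distinct subject IRIs appear."""
    return len({s for s, _, _ in ts})

print(class_usage_count(triples))  # Counter({'ex:Person': 2, 'ex:Document': 1})
print(distinct_subjects(triples))  # 3
```

In Spark, the same filter/map/reduce stages map directly onto RDD transformations, which is what allows the criteria to scale horizontally.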
Many applications, such as data integration, search, and interlinking, can take full advantage of the data when they have a priori statistical information about its internal structure and coverage. However, such applications may suffer from low performance and be unable to take full advantage of the data when its size exceeds the capacity of the available resources.
Thus, we introduce a distributed approach for quality assessment of large RDF datasets [2]. It is the first distributed, in-memory approach for computing different quality metrics for large RDF datasets using Apache Spark. We also provide a quality assessment pattern that can be used to generate new scalable metrics applicable to big data.
Based on the knowledge of the internal statistics of a dataset and its quality, users typically want to query and retrieve large amounts of information.
As a result, it has become difficult to efficiently process these large RDF datasets.
Indeed, these processes require both efficient storage strategies and query-processing engines to be able to scale in terms of data size.
Therefore, we propose a scalable approach [3, 4] to evaluate SPARQL queries over distributed RDF datasets by translating SPARQL queries into Spark executable code.
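At its core, evaluating a SPARQL basic graph pattern amounts to a series of joins over the triple relation, which is the shape the translated Spark code takes. The sketch below shows that evaluation strategy on an in-memory triple list; the helper names are ours, and the real systems compile SPARQL into executable Spark code rather than interpreting patterns like this.

```python
def match(triple, pattern, binding):
    """Try to extend `binding` so that `pattern` (constants or ?variables)
    matches `triple`; return the extended binding, or None on mismatch."""
    b = dict(binding)
    for term, value in zip(pattern, triple):
        if term.startswith("?"):
            if b.get(term, value) != value:  # variable already bound differently
                return None
            b[term] = value
        elif term != value:                  # constant term must match exactly
            return None
    return b

def bgp(triples, patterns):
    """Evaluate a basic graph pattern by joining in one triple pattern at a time."""
    bindings = [{}]
    for pat in patterns:
        bindings = [b2 for b in bindings for t in triples
                    if (b2 := match(t, pat, b)) is not None]
    return bindings

triples = [("ex:Alice", "ex:knows", "ex:Bob"),
           ("ex:Bob", "ex:knows", "ex:Carol"),
           ("ex:Carol", "ex:age", "29")]

# SELECT * WHERE { ?x ex:knows ?y . ?y ex:knows ?z }
print(bgp(triples, [("?x", "ex:knows", "?y"), ("?y", "ex:knows", "?z")]))
```

Each loop iteration is a join on the shared variables, so in the distributed setting every iteration becomes a Spark join whose partitioning determines the query's scalability.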
We conducted several empirical evaluations to assess the scalability, effectiveness, and efficiency of our proposed approaches.
Moreover, various use cases, i.e. Ethereum analysis, mining big data logs, and scalable integration of POIs, have been developed and leverage our approach.
The empirical evaluations and concrete applications provide evidence that our methodology and techniques proposed during this thesis help to effectively analyze and process large-scale RDF datasets.
All the approaches proposed in this thesis are integrated into the larger SANSA framework [5].
1. Gezim Sejdiu, Ivan Ermilov, Jens Lehmann, and Mohamed Nadjib Mami, “DistLODStats: Distributed Computation of RDF Dataset Statistics,” in Proceedings of the 17th International Semantic Web Conference (ISWC), 2018.
2. Gezim Sejdiu, Anisa Rula, Jens Lehmann, and Hajira Jabeen, “A Scalable Framework for Quality Assessment of RDF Datasets,” in Proceedings of the 18th International Semantic Web Conference (ISWC), 2019.
3. Claus Stadler, Gezim Sejdiu, Damien Graux, and Jens Lehmann, “Sparklify: A Scalable Software Component for Efficient Evaluation of SPARQL Queries over Distributed RDF Datasets,” in Proceedings of the 18th International Semantic Web Conference (ISWC), 2019.
4. Gezim Sejdiu, Damien Graux, Imran Khan, Ioanna Lytra, Hajira Jabeen, and Jens Lehmann, “Towards a Scalable Semantic-based Distributed Approach for SPARQL Query Evaluation,” in Proceedings of the 15th International Conference on Semantic Systems (SEMANTiCS), Research & Innovation, 2019.
5. Jens Lehmann, Gezim Sejdiu, Lorenz Bühmann, Patrick Westphal, Claus Stadler, Ivan Ermilov, Simon Bin, Nilesh Chakraborty, Muhammad Saleem, Axel-Cyrille Ngonga Ngomo, and Hajira Jabeen, “Distributed Semantic Analytics using the SANSA Stack,” in Proceedings of the 16th International Semantic Web Conference – Resources Track (ISWC 2017), 2017.
We are very pleased to announce that our paper “Message Passing for Hyper-Relational Knowledge Graphs” was accepted for presentation at EMNLP 2020 (Conference on Empirical Methods in Natural Language Processing).
EMNLP is a leading conference in the area of Natural Language Processing. EMNLP invites the submission of long and short papers on substantial, original, and unpublished research in empirical methods for Natural Language Processing.
Here is the pre-print of the accepted paper with its abstract:
- Message Passing for Hyper-Relational Knowledge Graphs
By Mikhail Galkin, Priyansh Trivedi, Gaurav Maheshwari, Ricardo Usbeck, and Jens Lehmann.
Abstract: Hyper-relational knowledge graphs (KGs) (e.g., Wikidata) enable associating additional key-value pairs with the main triple to disambiguate, or restrict the validity of, a fact. In this work, we propose a message-passing-based graph encoder, StarE, capable of modeling such hyper-relational KGs. Unlike existing approaches, StarE can encode an arbitrary number of additional key-value pairs (qualifiers) along with the main triple while keeping the semantic roles of qualifiers and triples intact. We also demonstrate that existing benchmarks for evaluating link prediction (LP) performance on hyper-relational KGs suffer from fundamental flaws, and thus develop a new Wikidata-based dataset, WD50K. Our experiments demonstrate that a StarE-based LP model outperforms existing approaches across multiple benchmarks. We also confirm that leveraging qualifiers is vital for link prediction, with gains of up to 25 MRR points compared to triple-based representations.
We are very pleased to announce that our group got four papers accepted for presentation at ISWC 2020 (International Semantic Web Conference). ISWC is the premier international forum for the Semantic Web / Linked Data community, and will bring together researchers, practitioners, and industry specialists to discuss, advance, and shape the future of semantic technologies.
Here are the pre-prints of the accepted papers with their abstracts:
- Fantastic Knowledge Graph Embeddings and How to Find the Right Space for Them
By Mojtaba Nayyeri, Chengjin Xu, Sahar Vahdati, Nadezhda Vassilyeva, Emanuel Sallinger, Hamed Shariat Yazdi, and Jens Lehmann.
Abstract: During the last few years, several knowledge graph embedding models have been devised in order to handle machine learning problems for knowledge graphs. Some of the models that are proven capable of inferring relational patterns, such as symmetry or transitivity, show lower performance in practice than their theoretical power would suggest. It is often unknown what factors contribute to such performance differences among KGE models in the inference of particular patterns. We develop the concept of a solution space as a factor that directly influences the practical performance of knowledge graph embedding models as well as their capability to infer relational patterns. We showcase the effect of the solution space on a newly proposed model dubbed SpacE^ss. We prove the theoretical characteristics of this method and evaluate it in practice against state-of-the-art models on a set of standard benchmarks such as WordNet and Freebase.
- Temporal Knowledge Graph Embedding Model based on Additive Time Series Decomposition
By Chengjin Xu, Mojtaba Nayyeri, Fouad Alkhoury, Hamed Shariat Yazdi, and Jens Lehmann.
Abstract: Knowledge graph (KG) embedding has attracted increasing attention in recent years. Most KG embedding models learn from time-unaware triples. However, including temporal information besides triples can further improve the performance of a KGE model. In this regard, we propose ATiSE, a temporal KG embedding model which incorporates time information into entity/relation representations using additive time series decomposition. Moreover, to account for the temporal uncertainty during the evolution of entity/relation representations over time, we map the representations of temporal KGs into the space of multi-dimensional Gaussian distributions. The mean of each entity/relation embedding at a time step shows the current expected position, whereas its covariance (which is temporally stationary) represents its temporal uncertainty. Experimental results show that ATiSE remarkably outperforms state-of-the-art KGE models and existing temporal KGE models on link prediction over four temporal KGs.
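The additive decomposition can be sketched in a few lines. In the sketch below (our own symbols; the paper's exact parameterisation and its Gaussian-based scoring are omitted), the mean of an embedding at time t is the sum of a static base vector, a linear trend, and a seasonal component, while a stationary per-dimension variance stands in for the temporal uncertainty.

```python
import numpy as np

def embedding_at(t, base, trend, amp, freq):
    """Mean of an entity/relation embedding at time t as an additive time
    series: base + linear trend + seasonal term. The noise component is
    modelled by a (time-independent) covariance rather than sampled here."""
    return base + trend * t + amp * np.sin(2 * np.pi * freq * t)

rng = np.random.default_rng(1)
d = 4  # embedding dimension (illustrative)
base = rng.normal(size=d)
trend = 0.01 * rng.normal(size=d)          # slow drift of the representation
amp, freq = 0.1 * rng.normal(size=d), rng.uniform(size=d)  # seasonal component
variance = 0.05 * np.ones(d)               # stationary temporal uncertainty

mu_2019 = embedding_at(2019.0, base, trend, amp, freq)
mu_2020 = embedding_at(2020.0, base, trend, amp, freq)
print(mu_2020 - mu_2019)  # the expected position drifts between time steps
```

A fact at time t would then be scored against the Gaussian with mean `embedding_at(t, ...)` and the stationary covariance, which is what lets the model express uncertainty about where an entity "is" at a given time.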
- PNEL: Pointer Network based End-To-End Entity Linking over Knowledge Graphs
By Debayan Banerjee, Debanjan Chaudhuri, Mohnish Dubey, and Jens Lehmann.
Abstract: Question Answering systems are generally modelled as a pipeline consisting of a sequence of steps. In such a pipeline, Entity Linking (EL) is often the first step. Several EL models first perform span detection and then entity disambiguation. In such models, errors from the span detection phase cascade to later steps and reduce overall accuracy. Moreover, the lack of gold entity spans in training data is a limiting factor for span detector training. Hence, the movement towards end-to-end EL models began, where no separate span detection step is involved. In this work, we present a novel approach to end-to-end EL by applying the popular Pointer Network model. It achieves competitive performance while maintaining low response times. We demonstrate this in our evaluation over three datasets on the Wikidata knowledge graph.
- CASQAD: A New Dataset For Context-aware Spatial Question Answering
By Jewgeni Rose, and Jens Lehmann.
Abstract: The task of factoid question answering (QA) faces new challenges when applied in scenarios with rapidly changing context information, for example on smartphones. Instead of asking who the architect of the “Holocaust Memorial” in Berlin was, the same question could be phrased as “Who was the architect of the many stelae in front of me?”, presuming the user is standing in front of it. While traditional QA systems rely on static information from knowledge bases and the analysis of named entities and predicates in the input, question answering for temporal and spatial questions imposes new challenges on the underlying methods. To tackle these challenges, we present the Context-aware Spatial QA Dataset (CASQAD), with over 5,000 annotated questions containing visual and spatial references that require information about the user’s location and moving direction to compose a suitable query. These questions were collected in a large-scale user study and annotated semi-automatically, with appropriate measures to ensure their quality.
We are very pleased to announce that our group got two papers accepted for presentation at CIKM 2020 (Conference on Information and Knowledge Management). CIKM seeks to identify challenging problems facing the development of future knowledge and information systems, and to shape future directions of research by soliciting and reviewing high-quality, applied and theoretical research findings. An important part of the conference is the workshop and tutorial programs, which focus on timely research challenges and initiatives, bringing together research papers, industry speakers, and keynote speakers. The program also showcases posters, demonstrations, competitions, and other special events.
Evaluating the Impact of Knowledge Graph Context on Entity Disambiguation Models
By Isaiah Onando Mulang, Kuldeep Singh, Chaitali Prabhu, Abhishek Nadgeri, Johannes Hoffart, and Jens Lehmann.
Abstract: Pretrained transformer models have emerged as state-of-the-art approaches that learn contextual information from text to improve the performance of several NLP tasks. These models, albeit powerful, still require specialized knowledge in specific scenarios. In this paper, we argue that context derived from a knowledge graph (in our case: Wikidata) provides enough signals to inform pretrained transformer models and improve their performance for named entity disambiguation (NED) on the Wikidata KG. We further hypothesize that our proposed KG context can be standardized for Wikipedia, and we evaluate the impact of KG context on the state-of-the-art NED model for the Wikipedia knowledge base. Our empirical results validate that the proposed KG context can be generalized (for Wikipedia), and that providing KG context in transformer architectures considerably outperforms the existing baselines, including the vanilla transformer models.
MLM: A Benchmark Dataset for Multitask Learning with Multiple Languages and Modalities
By Jason Armitage, Endri Kacupaj, Golsa Tahmasebzadeh, Swati, Maria Maleshkova, Ralph Ewerth, and Jens Lehmann.
Abstract: In this paper, we introduce the MLM (Multiple Languages and Modalities) dataset, a new resource to train and evaluate multitask systems on samples in multiple modalities and three languages. The generation process and inclusion of semantic data provide a resource that further tests the ability of multitask systems to learn relationships between entities. The dataset is designed for researchers and developers who build applications that perform multiple tasks on data encountered on the web and in digital archives. A second version of MLM provides a geo-representative subset of the data with weighted samples for countries of the European Union. We demonstrate the value of the resource in developing novel applications in the digital humanities with a motivating use case, and specify a benchmark set of tasks to retrieve modalities and locate entities in the dataset. Evaluation of baseline multitask and single-task systems on the full and geo-representative versions of MLM demonstrates the challenges of generalising on diverse data. In addition to the digital humanities, we expect the resource to contribute to research in multimodal representation learning, location estimation, and scene understanding.