Full Professor
Computer Science Institute
University of Bonn

Lead Scientist
Knowledge Discovery Department
Fraunhofer IAIS

Leader of the Machine Learning and Ontology Engineering Group,
AKSW Center, Institute for Applied Informatics, University of Leipzig

Profiles: LinkedIn, Google Scholar, DBLP, RDFWeb-ID


Offices

Room A109
Römerstr. 164, 53117 Bonn, Germany
University of Bonn, Computer Science
Tel:+49 228 73-4315

Room C1-227
Schloss Birlinghoven, 53757 Sankt Augustin, Germany
Fraunhofer IAIS

Room P631
Augustusplatz 10, 04109 Leipzig, Germany
Institute for Applied Informatics (InfAI) at the University of Leipzig

Short CV


Prof. Dr. Jens Lehmann (http://www.jens-lehmann.org) is a Full Professor at the University of Bonn and a researcher at the University of Leipzig, where he co-leads the AKSW group of 40 researchers. He obtained a PhD with grade summa cum laude from the University of Leipzig in 2010 and a master's degree in Computer Science from the Technical University of Dresden in 2006. His research interests include the Semantic Web, machine learning and knowledge representation. He is a founder of, leader of or contributor to several open-source projects, including DL-Learner, DBpedia, LinkedGeoData and ORE. He works or has worked on several funded projects, e.g. GeoKnow (EU STREP, coordinator), LOD2 (EU IP, work package lead), LATC (EU STREP, lead for the University of Leipzig) and SoftWiki (BMBF). He has authored more than 70 articles in international journals and conferences, which have been cited more than 9000 times according to Google Scholar.

 

Research Interests


  • Machine Learning
  • Semantic Web
  • Big Data
  • Logics

Publications


2017

  • W. Maroy, A. Dimou, D. Kontokostas, B. D. Meester, J. Lehmann, E. Mannens, and S. Hellmann, “Sustainable Linked Data Generation: The Case of DBpedia,” in Proceedings of 16th International Semantic Web Conference, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{maroy-2017-dbpedia-rml-iswc,
    Title = {Sustainable Linked Data Generation: The Case of DBpedia},
    Author = {Maroy, Wouter and Dimou, Anastasia and Kontokostas, Dimitris and Meester, Ben De and Lehmann, Jens and Mannens, Erik and Hellmann, Sebastian},
    Booktitle = {Proceedings of 16th International Semantic Web Conference},
    Year = {2017},
    Added-at = {2017-08-31T16:25:26.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/21d86a6ac50399a52d90e3a8d1ec5cb6a/aksw},
    Interhash = {2e65dd03c3587df7d2edc26c4fb2b054},
    Intrahash = {1d86a6ac50399a52d90e3a8d1ec5cb6a},
    Keywords = {2017 group_aksw hellmann kontokostas lehmann mole},
    Timestamp = {2017-08-31T16:25:26.000+0200},
    Url = {http://jens-lehmann.org/files/2017/iswc_dbpedia_rml.pdf}
    }

  • E. Marx, S. Shekarpour, T. Soru, A. M. P. Brașoveanu, M. Saleem, C. Baron, A. Weichselbraun, J. Lehmann, A. N. Ngomo, and S. Auer, “Torpedo: Improving the State-of-the-Art RDF Dataset Slicing,” in 11th IEEE International Conference on Semantic Computing, Jan 30-Feb 1, 2017, San Diego, California, USA, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{ICSC/2017/SLICE/Marx,
    Title = {{Torpedo}: {I}mproving the {S}tate-of-the-{A}rt {RDF}~{D}ataset~{S}licing},
    Author = {Edgard Marx and Saeedeh Shekarpour and Tommaso Soru and Adrian M.P. Bra{\c{s}}oveanu and Muhammad Saleem and Ciro Baron and Albert Weichselbraun and Jens Lehmann and Axel-Cyrille Ngonga Ngomo and S\"oren Auer},
    Booktitle = {11th IEEE International Conference on Semantic Computing, Jan 30-Feb 1, 2017, San Diego, California, USA},
    Year = {2017},
    Keywords = {marx soru ngonga auer lehmann aksw mole saleem simba rdfslice baron group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2017 event_ICSC},
    Owner = {marx},
    Url = {https://svn.aksw.org/papers/2017/Torpedo_ICSC/public.pdf}
    }

  • G. Maheshwari, M. Dubey, P. Trivedi, and J. Lehmann, “How to Revert Question Answering on Knowledge Graphs,” in Proceedings of 16th International Semantic Web Conference – Poster & Demos, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{maheshwari-2017-sansa-iswc-demo,
    Title = {How to Revert Question Answering on Knowledge Graphs},
    Author = {Maheshwari, Gaurav and Dubey, Mohnish and Trivedi, Priyansh and Lehmann, Jens},
    Booktitle = {Proceedings of 16th International Semantic Web Conference - Poster \& Demos},
    Year = {2017},
    Added-at = {2017-08-31T16:25:26.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/2c2bcdab20424d0c8e166943e1cd2a8af/aksw},
    Interhash = {026ef0b6b83de1a5ebc9bceef028e4b4},
    Intrahash = {c2bcdab20424d0c8e166943e1cd2a8af},
    Keywords = {2017 group_aksw lehmann mole},
    Timestamp = {2017-08-31T16:25:26.000+0200},
    Url = {http://jens-lehmann.org/files/2017/iswc_pd_lcquad.pdf}
    }

  • D. Lukovnikov, A. Fischer, S. Auer, and J. Lehmann, “Neural Network-based Question Answering over Knowledge Graphs on Word and Character Level,” in Proceedings of the 26th international conference on World Wide Web, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{lukovnikov2017www,
    Title = {Neural Network-based Question Answering over Knowledge Graphs on Word and Character Level},
    Author = {Lukovnikov, Denis and Fischer, Asja and Auer, Soeren and Lehmann, Jens},
    Booktitle = {Proceedings of the 26th international conference on World Wide Web},
    Year = {2017},
    Keywords = {2017 group_aksw sys:relevantFor:infai boa sys:relevantFor:bis lehmann MOLE},
    Url = {http://jens-lehmann.org/files/2017/www_nn_factoid_qa.pdf}
    }

  • J. Lehmann, G. Sejdiu, L. Bühmann, P. Westphal, C. Stadler, I. Ermilov, S. Bin, N. Chakraborty, M. Saleem, A. Ngonga Ngomo, and H. Jabeen, “Distributed Semantic Analytics using the SANSA Stack,” in Proceedings of 16th International Semantic Web Conference – Resources Track (ISWC’2017), 2017.
    [BibTeX] [Abstract] [Download PDF]
    Over the past decade, vast amounts of machine-readable structured information have become available through the automation of research processes as well as the increasing popularity of knowledge graphs and semantic technologies. A major research challenge today is to perform scalable analysis of large-scale knowledge graphs to facilitate applications like link prediction, knowledge base completion and question answering. Most analytics approaches, which scale horizontally (i.e., can be executed in a distributed environment) work on simple feature-vector-based input rather than more expressive knowledge structures. On the other hand, analytics methods which exploit expressive structures usually do not scale well to very large knowledge bases. This software framework paper describes the ongoing project Semantic Analytics Stack (SANSA) which supports expressive and scalable semantic analytics by providing functionality for distributed in-memory computing for RDF data. The library provides APIs for RDF storage, querying using SPARQL and forward chaining inference. It includes several machine learning algorithms for RDF knowledge graphs. The article describes the vision, architecture and use cases of SANSA.

    @InProceedings{lehmann-2017-sansa-iswc,
    Title = {Distributed {S}emantic {A}nalytics using the {SANSA} {S}tack},
    Author = {Lehmann, Jens and Sejdiu, Gezim and B\"uhmann, Lorenz and Westphal, Patrick and Stadler, Claus and Ermilov, Ivan and Bin, Simon and Chakraborty, Nilesh and Saleem, Muhammad and {Ngonga Ngomo}, Axel-Cyrille and Jabeen, Hajira},
    Booktitle = {Proceedings of 16th International Semantic Web Conference - Resources Track (ISWC'2017)},
    Year = {2017},
    Abstract = {Over the past decade, vast amounts of machine-readable structured information have become available through the automation of research processes as well as the increasing popularity of knowledge graphs and semantic technologies. A major research challenge today is to perform scalable analysis of large-scale knowledge graphs to facilitate applications like link prediction, knowledge base completion and question answering. Most analytics approaches, which scale horizontally (i.e., can be executed in a distributed environment) work on simple feature-vector-based input rather than more expressive knowledge structures. On the other hand, analytics methods which exploit expressive structures usually do not scale well to very large knowledge bases. This software framework paper describes the ongoing project Semantic Analytics Stack (SANSA) which supports expressive and scalable semantic analytics by providing functionality for distributed in-memory computing for RDF data. The library provides APIs for RDF storage, querying using SPARQL and forward chaining inference. It includes several machine learning algorithms for RDF knowledge graphs. The article describes the vision, architecture and use cases of SANSA.},
    Added-at = {2017-07-17T14:46:26.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/21ae18ac13750f9cf74227fe0a7c50104/aksw},
    Interhash = {eb99dff0ce6a9cdbce2c4cbea115fbee},
    Intrahash = {1ae18ac13750f9cf74227fe0a7c50104},
    Keywords = {2017 bde buehmann chakraborty group_aksw iermilov lehmann ngonga saleem sbin sejdiu stadler westphal},
    Owner = {iermilov},
    Timestamp = {2017-07-17T14:46:26.000+0200},
    Url = {http://svn.aksw.org/papers/2017/ISWC_SANSA_SoftwareFramework/public.pdf}
    }

  • B. D. Meester, A. Dimou, D. Kontokostas, R. Verborgh, J. Lehmann, E. Mannens, and S. Hellmann, “A Vocabulary Independent Generation Framework for DBpedia and beyond,” in Proceedings of 16th International Semantic Web Conference – Poster & Demos, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{meester-2017-dbpedia-rml-demo,
    Title = {A Vocabulary Independent Generation Framework for DBpedia and beyond},
    Author = {Meester, Ben De and Dimou, Anastasia and Kontokostas, Dimitris and Verborgh, Ruben and Lehmann, Jens and Mannens, Erik and Hellmann, Sebastian},
    Booktitle = {Proceedings of 16th International Semantic Web Conference - Poster \& Demos},
    Year = {2017},
    Added-at = {2017-08-31T16:25:32.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/265ab845959061c78f8e81c9fe1432150/aksw},
    Interhash = {8e30080f2f4edc6f492a906d5f2e9716},
    Intrahash = {65ab845959061c78f8e81c9fe1432150},
    Keywords = {2017 group_aksw hellmann iermilov lehmann mole},
    Timestamp = {2017-08-31T16:25:32.000+0200},
    Url = {http://jens-lehmann.org/files/2017/iswc_pd_dbpedia_rml.pdf}
    }

  • H. Petzka, C. Stadler, G. Katsimpras, B. Haarmann, and J. Lehmann, “Benchmarking Faceted Browsing Capabilities of Triplestores,” in 13th International Conference on Semantic Systems (SEMANTiCS 2017), September 11-14 2017, Amsterdam, Netherlands, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{petzka-semantics-facets,
    Title = {Benchmarking Faceted Browsing Capabilities of Triplestores},
    Author = {Petzka, Henning and Stadler, Claus and Katsimpras, Georgios and Haarmann, Bastian and Lehmann, Jens},
    Booktitle = {13th International Conference on Semantic Systems (SEMANTiCS 2017), September 11-14 2017, Amsterdam, Netherlands},
    Year = {2017},
    Added-at = {2017-08-31T16:24:48.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/24cba1ee5aff5a8f53760660aa977adfb/aksw},
    Interhash = {15573d7d66c8917d8f0f7adc05832b98},
    Intrahash = {4cba1ee5aff5a8f53760660aa977adfb},
    Keywords = {2017 group_aksw hobbit lehmann mole projecthobbit sda stadler},
    Timestamp = {2017-08-31T16:24:48.000+0200},
    Url = {http://jens-lehmann.org/files/2017/semantics_faceted_browsing.pdf}
    }

  • P. Trivedi, G. Maheshwari, M. Dubey, and J. Lehmann, “A Corpus for Complex Question Answering over Knowledge Graphs,” in Proceedings of 16th International Semantic Web Conference – Resources Track (ISWC’2017), 2017.
    [BibTeX] [Download PDF]
    @InProceedings{trivedi-2017-lcquad-iswc,
    Title = {A {C}orpus for {C}omplex {Q}uestion {A}nswering over {K}nowledge {G}raphs},
    Author = {Trivedi, Priyansh and Maheshwari, Gaurav and Dubey, Mohnish and Lehmann, Jens},
    Booktitle = {Proceedings of 16th {I}nternational {S}emantic {W}eb {C}onference - {R}esources {T}rack ({I}{S}{W}{C}'2017)},
    Year = {2017},
    Added-at = {2017-08-31T16:26:12.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/2c8941c5b239a812b8435a381ec8438e6/aksw},
    Interhash = {e73fac9e1b601440e0b12f05c6b1a2b1},
    Intrahash = {c8941c5b239a812b8435a381ec8438e6},
    Keywords = {2017 group_aksw lehmann mole},
    Timestamp = {2017-08-31T16:26:12.000+0200},
    Url = {http://jens-lehmann.org/files/2017/iswc_lcquad.pdf}
    }

  • H. Thakkar, Y. Keswani, M. Dubey, J. Lehmann, and S. Auer, “Trying Not to Die Benchmarking — Orchestrating RDF and Graph Data Management Solution Benchmarks Using LITMUS,” in 13th International Conference on Semantic Systems (SEMANTiCS 2017), September 11-14 2017, Amsterdam, Netherlands, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{thakkar-litmus-semantics,
    Title = {Trying {N}ot to {D}ie {B}enchmarking -- {O}rchestrating {R}{D}{F} and {G}raph {D}ata {M}anagement {S}olution Benchmarks Using LITMUS},
    Author = {Thakkar, Harsh and Keswani, Yashwant and Dubey, Mohnish and Lehmann, Jens and Auer, S\"oren},
    Booktitle = {13th {I}nternational {C}onference on {S}emantic {S}ystems (SEMANTiCS 2017), {S}eptember 11-14 2017, {A}msterdam, {N}etherlands},
    Year = {2017},
    Added-at = {2017-08-31T16:24:47.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/2aa09fbf8be20376c7000f69a36db1484/aksw},
    Interhash = {d54fb52790cfe3553dc3b14e92db1d7c},
    Intrahash = {aa09fbf8be20376c7000f69a36db1484},
    Keywords = {2017 group_aksw hobbit lehmann mole projecthobbit sda},
    Timestamp = {2017-08-31T16:24:47.000+0200},
    Url = {http://jens-lehmann.org/files/2017/semantics_litmus.pdf}
    }

  • C. Stadler and J. Lehmann, “JPA Criteria Queries over RDF Data,” in Workshop on Querying the Web of Data co-located with the Extended Semantic Web Conference, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{quweda_jpa,
    Title = {JPA Criteria Queries over RDF Data},
    Author = {Claus Stadler and Jens Lehmann},
    Booktitle = {Workshop on Querying the Web of Data co-located with the Extended Semantic Web Conference},
    Year = {2017},
    Keywords = {2017 lehmann group_aksw MOLE stadler},
    Url = {http://jens-lehmann.org/files/2017/quweda_jpa.pdf}
    }

  • M. A. Sherif, A. Ngonga Ngomo, and J. Lehmann, “WOMBAT – A Generalization Approach for Automatic Link Discovery,” in 14th Extended Semantic Web Conference, Portorož, Slovenia, 28th May – 1st June 2017, 2017.
    [BibTeX] [Abstract] [Download PDF]
    A significant portion of the evolution of Linked Data datasets lies in updating the links to other datasets. An important challenge when aiming to update these links automatically under the open-world assumption is the fact that usually only positive examples for the links exist. We address this challenge by presenting and evaluating WOMBAT, a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. WOMBAT is based on generalisation via an upward refinement operator to traverse the space of link specification. We study the theoretical characteristics of WOMBAT and evaluate it on 8 different benchmark datasets. Our evaluation suggests that WOMBAT outperforms state-of-the-art supervised approaches while relying on less information. Moreover, our evaluation suggests that WOMBAT's pruning algorithm allows it to scale well even on large datasets.

    @InProceedings{WOMBAT_2017,
    Title = {{WOMBAT} - {A Generalization Approach for Automatic Link Discovery}},
    Author = {Sherif, {Mohamed Ahmed} and {Ngonga Ngomo}, Axel-Cyrille and Lehmann, Jens},
    Booktitle = {14th Extended Semantic Web Conference, Portoro{\v{z}}, Slovenia, 28th May - 1st June 2017},
    Year = {2017},
    Publisher = {Springer},
    Abstract = {A significant portion of the evolution of Linked Data datasets lies in updating the links to other datasets. An important challenge when aiming to update these links automatically under the open-world assumption is the fact that usually only positive examples for the links exist. We address this challenge by presenting and evaluating WOMBAT, a novel approach for the discovery of links between knowledge bases that relies exclusively on positive examples. WOMBAT is based on generalisation via an upward refinement operator to traverse the space of link specification. We study the theoretical characteristics of WOMBAT and evaluate it on 8 different benchmark datasets. Our evaluation suggests that WOMBAT outperforms state-of-the-art supervised approaches while relying on less information. Moreover, our evaluation suggests that WOMBAT's pruning algorithm allows it to scale well even on large datasets.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2017/ESWC_WOMBAT/public.pdf},
    Keywords = {2017 group_aksw sys:relevantFor:geoknow sys:relevantFor:infai sys:relevantFor:bis ngonga simba sherif group_aksw geoknow wombat lehmann MOLE},
    Url = {http://svn.aksw.org/papers/2017/ESWC_WOMBAT/public.pdf}
    }

  • J. Lehmann, S. Auer, S. Capadisli, K. Janowicz, C. Bizer, T. Heath, A. Hogan, and T. Berners-Lee, “LDOW2017: 10th Workshop on Linked Data on the Web,” in Proceedings of the 26th International Conference Companion on World Wide Web, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{lehmann2017ldow,
    Title = {LDOW2017: 10th Workshop on Linked Data on the Web},
    Author = {Lehmann, Jens and Auer, S{\"o}ren and Capadisli, Sarven and Janowicz, Krzysztof and Bizer, Christian and Heath, Tom and Hogan, Aidan and Berners-Lee, Tim},
    Booktitle = {Proceedings of the 26th International Conference Companion on World Wide Web},
    Year = {2017},
    Organization = {International World Wide Web Conferences Steering Committee},
    Keywords = {2017 event_www group_aksw sys:relevantFor:infai sys:relevantFor:bis lehmann MOLE},
    Url = {http://jens-lehmann.org/files/2017/ldow_10th_workshop.pdf}
    }

  • S. Kosovan, J. Lehmann, and A. Fischer, “Dialogue Response Generation using Neural Networks with Attention and Background Knowledge,” in Proceedings of the Computer Science Conference for University of Bonn Students (CSCUBS) 2017, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{kosovan-2017-cscubs-dialogues,
    Title = {Dialogue Response Generation using Neural Networks with Attention and Background Knowledge},
    Author = {Kosovan, Sofiia and Lehmann, Jens and Fischer, Asja},
    Booktitle = {Proceedings of the Computer Science Conference for University of Bonn Students (CSCUBS) 2017},
    Year = {2017},
    Added-at = {2017-08-31T16:24:45.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/2382e99967e196356de3f52f06df22115/aksw},
    Interhash = {919e1cac53865a68cbefe447f521bd50},
    Intrahash = {382e99967e196356de3f52f06df22115},
    Keywords = {2017 group_aksw lehmann mole},
    Notes = {Best Paper Award},
    Timestamp = {2017-08-31T16:24:45.000+0200},
    Url = {http://jens-lehmann.org/files/2017/cscubs_dialogues.pdf}
    }

  • F. Conrads, J. Lehmann, M. Saleem, and A. N. Ngomo, “Benchmarking RDF Storage Solutions with IGUANA,” in Proceedings of 16th International Semantic Web Conference – Poster & Demos, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{conrads-2017-iguana-demo,
    Title = {Benchmarking RDF Storage Solutions with IGUANA},
    Author = {Conrads, Felix and Lehmann, Jens and Saleem, Muhammad and {Ngonga Ngomo}, Axel-Cyrille},
    Booktitle = {Proceedings of 16th International Semantic Web Conference - Poster \& Demos},
    Year = {2017},
    Added-at = {2017-08-31T16:24:38.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/28b70350e9bacc3ea4e792ee5f7192057/aksw},
    Interhash = {c2033f81289fdcb2a676c3891677d2ea},
    Intrahash = {8b70350e9bacc3ea4e792ee5f7192057},
    Keywords = {2017 group_aksw iermilov lehmann mole ngonga},
    Timestamp = {2017-08-31T16:24:38.000+0200},
    Url = {http://jens-lehmann.org/files/2017/iswc_pd_iguana.pdf}
    }

  • F. Conrads, J. Lehmann, M. Saleem, M. Morsey, and A. Ngonga Ngomo, “IGUANA: A Generic Framework for Benchmarking the Read-Write Performance of Triple Stores,” in International Semantic Web Conference (ISWC), 2017.
    [BibTeX] [Download PDF]
    @InProceedings{iguana2017,
    Title = {{IGUANA}: A Generic Framework for Benchmarking the Read-Write Performance of Triple Stores},
    Author = {Conrads, Felix and Lehmann, Jens and Saleem, Muhammad and Morsey, Mohamed and Axel-Cyrille {Ngonga Ngomo}},
    Booktitle = {International Semantic Web Conference (ISWC)},
    Year = {2017},
    Added-at = {2017-08-31T16:24:38.000+0200},
    Bdsk-url-1 = {https://svn.aksw.org/papers/2017/ISWC_Iguana/public.pdf},
    Biburl = {https://www.bibsonomy.org/bibtex/2ce7ad6992a774250addf110553fde64a/aksw},
    Interhash = {c639fbec7d6069b3d7ab82ddb75dcc28},
    Intrahash = {ce7ad6992a774250addf110553fde64a},
    Keywords = {group_aksw jens-lehmann lehmann mole ngonga saleem simba},
    Timestamp = {2017-08-31T16:24:38.000+0200},
    Url = {https://svn.aksw.org/papers/2017/ISWC_Iguana/public.pdf}
    }

  • S. Auer, S. Scerri, A. Versteden, E. Pauwels, A. Charalambidis, S. Konstantopoulos, J. Lehmann, H. Jabeen, I. Ermilov, G. Sejdiu, A. Ikonomopoulos, S. Andronopoulos, M. Vlachogiannis, C. Pappas, A. Davettas, I. A. Klampanos, E. Grigoropoulos, V. Karkaletsis, V. de Boer, R. Siebes, M. N. Mami, S. Albani, M. Lazzarini, P. Nunes, E. Angiuli, N. Pittaras, G. Giannakopoulos, G. Argyriou, G. Stamoulis, G. Papadakis, M. Koubarakis, P. Karampiperis, A. N. Ngomo, and M. Vidal, “The BigDataEurope Platform – Supporting the Variety Dimension of Big Data,” in 17th International Conference on Web Engineering (ICWE2017), 2017.
    [BibTeX] [Abstract] [Download PDF]
    The management and analysis of large-scale datasets — described with the term Big Data — involves the three classic dimensions volume, velocity and variety. While the former two are well supported by a plethora of software components, the variety dimension is still rather neglected. We present the BDE platform — an easy-to-deploy, easy-to-use and adaptable (cluster-based and standalone) platform for the execution of big data components and tools like Hadoop, Spark, Flink. The BDE platform was designed based upon the requirements gathered from the seven societal challenges put forward by the European Commission in the Horizon 2020 programme and targeted by the BigDataEurope pilots. As a result, the BDE platform allows to perform a variety of Big Data flow tasks like message passing (Kafka, Flume), storage (Hive, Cassandra) or publishing (GeoTriples). In order to facilitate the processing of heterogeneous data, a particular innovation of the platform is the semantic layer, which allows to directly process RDF data and to map and transform arbitrary data into RDF.

    @InProceedings{Auer+ICWE-2017,
    Title = {{T}he {B}ig{D}ata{E}urope {P}latform - {S}upporting the {V}ariety {D}imension of {B}ig {D}ata},
    Author = {S\"oren Auer and Simon Scerri and Aad Versteden and Erika Pauwels and Angelos Charalambidis and Stasinos Konstantopoulos and Jens Lehmann and Hajira Jabeen and Ivan Ermilov and Gezim Sejdiu and Andreas Ikonomopoulos and Spyros Andronopoulos and Mandy Vlachogiannis and Charalambos Pappas and Athanasios Davettas and Iraklis A. Klampanos and Efstathios Grigoropoulos and Vangelis Karkaletsis and Victor de Boer and Ronald Siebes and Mohamed Nadjib Mami and Sergio Albani and Michele Lazzarini and Paulo Nunes and Emanuele Angiuli and Nikiforos Pittaras and George Giannakopoulos and Giorgos Argyriou and George Stamoulis and George Papadakis and Manolis Koubarakis and Pythagoras Karampiperis and Axel-Cyrille Ngonga Ngomo and Maria-Esther Vidal},
    Booktitle = {17th International Conference on Web Engineering (ICWE2017)},
    Year = {2017},
    Abstract = {The management and analysis of large-scale datasets -- described with the term Big Data -- involves the three classic dimensions volume, velocity and variety. While the former two are well supported by a plethora of software components, the variety dimension is still rather neglected. We present the BDE platform -- an easy-to-deploy, easy-to-use and adaptable (cluster-based and standalone) platform for the execution of big data components and tools like Hadoop, Spark, Flink. The BDE platform was designed based upon the requirements gathered from the seven societal challenges put forward by the European Commission in the Horizon 2020 programme and targeted by the BigDataEurope pilots. As a result, the BDE platform allows to perform a variety of Big Data flow tasks like message passing (Kafka, Flume), storage (Hive, Cassandra) or publishing (GeoTriples). In order to facilitate the processing of heterogeneous data, a particular innovation of the platform is the semantic layer, which allows to directly process RDF data and to map and transform arbitrary data into RDF.},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis 2017 auer iermilov ngonga lehmann bde MOLE},
    Url = {http://jens-lehmann.org/files/2017/icwe_bde.pdf}
    }

  • H. Jabeen, P. Archer, S. Scerri, A. Versteden, I. Ermilov, G. Mouchakis, J. Lehmann, and S. Auer, “Big Data Europe,” in Proceedings of the Workshops of the EDBT/ICDT 2017 Joint Conference, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{JabeenEtAl:EDBT/ICDT2017,
    Title = {Big Data Europe},
    Author = {Hajira Jabeen and Phil Archer and Simon Scerri and Aad Versteden and Ivan Ermilov and Giannis Mouchakis and Jens Lehmann and Soeren Auer},
    Booktitle = {Proceedings of the Workshops of the EDBT/ICDT 2017 Joint Conference},
    Year = {2017},
    Crossref = {EDBT/ICDT2017WS},
    Keywords = {2017 lehmann group_aksw MOLE hellmann sys:relevantFor:infai sys:relevantFor:bis bde},
    Url = {http://ceur-ws.org/Vol-1810/EuroPro_paper_05.pdf}
    }

  • I. Ermilov, J. Lehmann, G. Sejdiu, L. Bühmann, P. Westphal, C. Stadler, S. Bin, N. Chakraborty, H. Petzka, M. Saleem, A. Ngonga Ngomo, and H. Jabeen, “The Tale of Sansa Spark,” in Proceedings of 16th International Semantic Web Conference, Poster & Demos, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{iermilov-2017-sansa-iswc-demo,
    Title = {The {T}ale of {S}ansa {S}park},
    Author = {Ermilov, Ivan and Lehmann, Jens and Sejdiu, Gezim and B\"uhmann, Lorenz and Westphal, Patrick and Stadler, Claus and Bin, Simon and Chakraborty, Nilesh and Petzka, Henning and Saleem, Muhammad and {Ngonga Ngomo}, Axel-Cyrille and Jabeen, Hajira},
    Booktitle = {Proceedings of 16th International Semantic Web Conference, Poster \& Demos},
    Year = {2017},
    Added-at = {2017-08-31T16:24:45.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/2f9b5a69afa4755944984ae63f59ad146/aksw},
    Interhash = {ebabfe08f697304b399c9b6b89f2829e},
    Intrahash = {f9b5a69afa4755944984ae63f59ad146},
    Keywords = {2017 bde buehmann chakraborty group_aksw iermilov lehmann mole ngonga saleem sbin sejdiu stadler westphal},
    Owner = {iermilov},
    Timestamp = {2017-08-31T16:24:45.000+0200},
    Url = {http://jens-lehmann.org/files/2017/iswc_pd_sansa.pdf}
    }

  • I. Ermilov, A. N. Ngomo, A. Versteden, H. Jabeen, G. Sejdiu, G. Argyriou, L. Selmi, J. Jakobitsch, and J. Lehmann, “Managing Lifecycle of Big Data Applications,” in KESW, 2017.
    [BibTeX] [Download PDF]
    @InProceedings{KESW_2017_BDE,
    Title = {Managing Lifecycle of Big Data Applications},
    Author = {Ermilov, Ivan and {Ngonga Ngomo}, Axel-Cyrille and Versteden, Aad and Jabeen, Hajira and Sejdiu, Gezim and Argyriou, Giorgos and Selmi, Luigi and Jakobitsch, J{\"u}rgen and Lehmann, Jens},
    Booktitle = {KESW},
    Year = {2017},
    Added-at = {2017-08-31T16:24:46.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/2f5ee59fb595ade7ece4c840ad4a95741/aksw},
    Interhash = {8ac92f717e75f88d59f2811ecf7b816e},
    Intrahash = {f5ee59fb595ade7ece4c840ad4a95741},
    Keywords = {bde group_aksw iermilov lehmann ngonga simba},
    Timestamp = {2017-08-31T16:24:46.000+0200},
    Url = {https://svn.aksw.org/papers/2017/KESW_BDE_Workflow/public.pdf}
    }

  • A. Ismayilov, D. Kontokostas, S. Auer, J. Lehmann, and S. Hellmann, “Wikidata through the Eyes of DBpedia,” Semantic Web Journal, pp. 1-11, 2017. doi:10.3233/SW-170277
    [BibTeX]
    @Article{ismayilov2017wikidata,
    Title = {Wikidata through the Eyes of DBpedia},
    Author = {Ismayilov, Ali and Kontokostas, Dimitris and Auer, S{\"o}ren and Lehmann, Jens and Hellmann, Sebastian},
    Journal = {Semantic Web Journal},
    Year = {2017},
    Pages = {1-11},
    Doi = {10.3233/SW-170277},
    Keywords = {2017 group_aksw lehmann sys:relevantFor:infai sys:relevantFor:bis MOLE }
    }

  • K. Höffner, S. Walter, E. Marx, R. Usbeck, J. Lehmann, and A. Ngonga Ngomo, “Survey on Challenges of Question Answering in the Semantic Web,” Semantic Web Journal, vol. 8, iss. 6, 2017.
    [BibTeX] [Download PDF]
    @Article{qa_survey_swj_2015,
    Title = {Survey on Challenges of {Q}uestion {A}nswering in the {S}emantic {W}eb},
    Author = {H\"offner, Konrad and Walter, Sebastian and Marx, Edgard and Usbeck, Ricardo and Lehmann, Jens and {{Ngonga Ngomo}}, Axel-Cyrille},
    Journal = {Semantic Web Journal},
    Year = {2017},
    Number = {6},
    Volume = {8},
    Added-at = {2017-08-31T16:25:00.000+0200},
    Biburl = {https://www.bibsonomy.org/bibtex/2f0401ca8adebe1b7edfa36255ea1140d/aksw},
    Interhash = {6bac844b30f9d1caf6bc72f57e43f7e5},
    Intrahash = {f0401ca8adebe1b7edfa36255ea1140d},
    Keywords = {diesel group_aksw hawk hoeffner lehmann marx mole ngonga openqa projecthobbit qamel simba sina tbsl usbeck},
    Timestamp = {2017-08-31T16:25:00.000+0200},
    Url = {http://www.semantic-web-journal.net/system/files/swj1375.pdf}
    }

  • D. Esteves, R. Peres, J. Lehmann, and G. Napolitano, “Named Entity Recognition in Twitter using Images and Text,” in 3rd International Workshop on Natural Language Processing for Informal Text (NLPIT 2017), 2017.
    [BibTeX] [Download PDF]
    @InProceedings{estevesNERshort2017,
    Title = {Named {E}ntity {R}ecognition in {T}witter using {I}mages and {T}ext},
    Author = {Diego Esteves and Rafael Peres and Jens Lehmann and Giulio Napolitano},
    Booktitle = {3rd International Workshop on Natural Language Processing for Informal Text (NLPIT 2017)},
    Year = {2017},
    Bdsk-url-1 = {https://www.researchgate.net/publication/317721565_Named_Entity_Recognition_in_Twitter_using_Images_and_Text},
    Keywords = {horus ner 2017 esteves napolitano lehmann sda},
    Url = {https://www.researchgate.net/publication/317721565_Named_Entity_Recognition_in_Twitter_using_Images_and_Text}
    }

  • D. Esteves, D. Moussallem, T. Soru, C. B. Neto, J. Lehmann, A. N. Ngomo, and J. C. Duarte, “LOG4MEX: A Library to Export Machine Learning Experiments,” in Proceedings of the International Conference on Web Intelligence, New York, NY, USA, 2017, pp. 139-145. doi:10.1145/3106426.3106530
    [BibTeX] [Download PDF]
    @InProceedings{Esteves:2017:LLE:3106426.3106530,
    Title = {{LOG4MEX}: {A} {L}ibrary to {E}xport {M}achine {L}earning {E}xperiments},
    Author = {Diego Esteves and Diego Moussallem and Tommaso Soru and Ciro Baron Neto and Jens Lehmann and Axel-Cyrille Ngonga Ngomo and Julio Cesar Duarte},
    Booktitle = {Proceedings of the International Conference on Web Intelligence},
    Year = {2017},
    Address = {New York, NY, USA},
    Pages = {139--145},
    Publisher = {ACM},
    Series = {WI '17},
    Acmid = {3106530},
    Doi = {10.1145/3106426.3106530},
    ISBN = {978-1-4503-4951-2},
    Keywords = {LOG4MEX, sda, esteves, lehmann, 2017, logging, machine learning experiments, metadata, ontology, provenance, software architecture},
    Location = {Leipzig, Germany},
    Numpages = {7},
    Url = {http://doi.acm.org/10.1145/3106426.3106530}
    }

  • S. Bin, P. Westphal, J. Lehmann, and A. Ngonga Ngomo, “Implementing Scalable Structured Machine Learning for Big Data in the SAKE Project,” in IEEE Big Data Conference 2017, 2017.
    [BibTeX] [Download PDF]
    @inproceedings{bin-2017-sake,
    added-at = {2017-11-17T14:26:26.000+0100},
    author = {Bin, Simon and Westphal, Patrick and Lehmann, Jens and {Ngonga Ngomo}, Axel-Cyrille},
    biburl = {https://www.bibsonomy.org/bibtex/224f107297aa2a27c82b875e63c9b9055/aksw},
    booktitle = {IEEE Big Data Conference 2017},
    interhash = {8ff7e69474050557c9f872c41433cc04},
    intrahash = {24f107297aa2a27c82b875e63c9b9055},
    keywords = {2017 bin group_aksw lehmann mole ngonga sake westphal},
    timestamp = {2017-11-17T14:26:26.000+0100},
    title = {Implementing Scalable Structured Machine Learning for Big Data in the SAKE Project},
    url = {http://jens-lehmann.org/files/2017/ieee_bigdata_sake.pdf},
    year = 2017
    }

2016

  • E. Marx, A. Zaveri, M. Mohammed, S. Rautenberg, J. Lehmann, A. N. Ngomo, and G. Cheng, “DBtrends : Publishing and Benchmarking RDF Ranking Functions,” in 2nd International Workshop on Summarizing and Presenting Entities and Ontologies, co-located with the 13th Extended Semantic Web Conference, 2016.
    [BibTeX] [Abstract] [Download PDF]
    Providing accurate approaches for keyword search or question answering to access the data available on the Linked Data Web is of central importance to ensure that it can be used by non-experts. In many cases, these approaches return a large number of results that need to be provided in the right order so as to be of relevance to the user. Achieving the goal of improving the access to the Linked Data Web thus demands the provision of ranking approaches that allow sorting potentially large number of results appropriately. While such functions have been designed in previous works, they have not been evaluated exhaustively. This work addresses this research gap by proposing a formal framework designed towards comparing and evaluating different ranking functions for RDF data. The framework allows combining these rankings by means of an extension of the Spearman’s footrule estimation of the upper bound of this function. We supply a benchmark with a total of 60 manually annotated entity ranks by users from USA and India recruited over Amazon Mechanical Turk. Moreover, we evaluated nine entity ranking functions over the proposed benchmark.

    @InProceedings{dbtrends2016,
    Title = {{DBtrends} : {P}ublishing and {B}enchmarking {RDF} {R}anking {F}unctions},
    Author = {Marx, Edgard and Zaveri, Amrapali and Mohammed, Mofeed and Rautenberg, Sandro and Lehmann, Jens and Ngomo, Axel-Cyrille Ngonga and Cheng, Gong},
    Booktitle = {2nd International Workshop on Summarizing and Presenting Entities and Ontologies, co-located with the 13th Extended Semantic Web Conference},
    Year = {2016},
    Abstract = {Providing accurate approaches for keyword search or question answering to access the data available on the Linked Data Web is of central importance to ensure that it can be used by non-experts. In many cases, these approaches return a large number of results that need to be provided in the right order so as to be of relevance to the user. Achieving the goal of improving the access to the Linked Data Web thus demands the provision of ranking approaches that allow sorting potentially large number of results appropriately. While such functions have been designed in previous works, they have not been evaluated exhaustively. This work addresses this research gap by proposing a formal framework designed towards comparing and evaluating different ranking functions for RDF data. The framework allows combining these rankings by means of an extension of the Spearman's footrule estimation of the upper bound of this function. We supply a benchmark with a total of 60 manually annotated entity ranks by users from USA and India recruited over Amazon Mechanical Turk. Moreover, we evaluated nine entity ranking functions over the proposed benchmark.},
    Keywords = {SIMBA group_aksw marx ngonga lehmann openqa zaveri mole smart hassan rautenberg 2016 projecthobbit},
    Url = {http://svn.aksw.org/papers/2016/SUMPRE_DBtrends_Benchmark/public.pdf}
    }
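The Spearman's footrule combination mentioned in this abstract can be illustrated with a minimal sketch; the entity names and rankings below are invented for illustration, not taken from the paper's benchmark:

```python
# Sketch of the Spearman's footrule distance used when comparing entity
# rankings: the sum of absolute rank displacements over shared entities.
# The rankings below are hypothetical, not from the DBtrends benchmark.

def footrule_distance(rank_a, rank_b):
    """Sum |position in rank_a - position in rank_b| over common entities."""
    common = set(rank_a) & set(rank_b)
    return sum(abs(rank_a.index(e) - rank_b.index(e)) for e in common)

r1 = ["Berlin", "Leipzig", "Bonn", "Dresden"]
r2 = ["Leipzig", "Berlin", "Bonn", "Dresden"]
print(footrule_distance(r1, r2))  # swapping the top two entities gives distance 2
```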

  • E. Marx, K. Höffner, S. Shekarpour, A. N. Ngomo, J. Lehmann, and S. Auer, “Exploring Term Networks for Semantic Search over RDF Knowledge Graphs,” in Metadata and Semantics Research: 10th International Conference, MTSR 2016, Göttingen, Germany, November 22-25, 2016, Proceedings, Cham, 2016, pp. 249-261. doi:10.1007/978-3-319-49157-8_22
    [BibTeX] [Download PDF]
    @InProceedings{marx/starpath/smart/mtsr/2016,
    Title = {Exploring Term Networks for Semantic Search over RDF Knowledge Graphs},
    Author = {Marx, Edgard and H{\"o}ffner, Konrad and Shekarpour, Saeedeh and Ngomo, Axel-Cyrille Ngonga and Lehmann, Jens and Auer, S{\"o}ren},
    Booktitle = {Metadata and Semantics Research: 10th International Conference, MTSR 2016, G{\"o}ttingen, Germany, November 22-25, 2016, Proceedings},
    Year = {2016},
    Address = {Cham},
    Pages = {249--261},
    Publisher = {Springer International Publishing},
    Doi = {10.1007/978-3-319-49157-8_22},
    ISBN = {978-3-319-49157-8},
    Keywords = {marx simba ngonga smart group_aksw hoeffner lehmann openqa mole MOLE},
    Url = {https://www.researchgate.net/publication/309700280_Exploring_Term_Networks_for_Semantic_Search_over_RDF_Knowledge_Graphs}
    }

  • D. Kontokostas, C. Mader, C. Dirschl, K. Eck, M. Leuthold, J. Lehmann, and S. Hellmann, “Semantically Enhanced Quality Assurance in the JURION Business Use Case,” in 13th International Conference, ESWC 2016, Heraklion, Crete, Greece, May 2016, 2016, pp. 661-676. doi:10.1007/978-3-319-34129-3_40
    [BibTeX] [Download PDF]
    @InProceedings{jurion_rdfunit,
    Title = {Semantically Enhanced Quality Assurance in the JURION Business Use Case},
    Author = {Dimitris Kontokostas and Christian Mader and Christian Dirschl and Katja Eck and Michael Leuthold and Jens Lehmann and Sebastian Hellmann},
    Booktitle = {13th International Conference, ESWC 2016, Heraklion, Crete, Greece, May 2016},
    Year = {2016},
    Pages = {661--676},
    Doi = {10.1007/978-3-319-34129-3_40},
    ISBN = {978-3-319-34129-3},
    Keywords = {group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:aligned 2016 hellmann kilt_publications kontokostas rdfunit Lidmole mole aligned-project},
    Url = {http://svn.aksw.org/papers/2016/ESWC_Jurion/public.pdf}
    }

  • G. Rizzo, N. Fanizzi, J. Lehmann, and L. Bühmann, “Integrating New Refinement Operators in Terminological Decision Trees Learning,” in Knowledge Engineering and Knowledge Management: 20th International Conference, EKAW 2016, Bologna, Italy, November 19-23, 2016, Proceedings, 2016, pp. 511-526.
    [BibTeX] [Download PDF]
    @InProceedings{rizzo2016integrating,
    Title = {Integrating New Refinement Operators in Terminological Decision Trees Learning},
    Author = {Rizzo, Giuseppe and Fanizzi, Nicola and Lehmann, Jens and B{\"u}hmann, Lorenz},
    Booktitle = {Knowledge Engineering and Knowledge Management: 20th International Conference, EKAW 2016, Bologna, Italy, November 19-23, 2016, Proceedings},
    Year = {2016},
    Organization = {Springer},
    Pages = {511--526},
    Keywords = {lehmann MOLE group_aksw sys:relevantFor:infai sys:relevantFor:bis 2016 buehmann},
    Url = {http://jens-lehmann.org/files/2016/ekaw_terminological_decision_trees.pdf}
    }

  • M. A. Sherif, M. Hassan, T. Soru, A. Ngonga Ngomo, and J. Lehmann, “Lion’s Den: Feeding the LinkLion,” in Proceedings of Ontology Matching Workshop, 2016.
    [BibTeX] [Download PDF]
    @InProceedings{lionsden16,
    Title = {Lion's Den: Feeding the LinkLion},
    Author = {Mohamed Ahmed Sherif and Mofeed Hassan and Tommaso Soru and Axel-Cyrille {Ngonga Ngomo} and Jens Lehmann},
    Booktitle = {Proceedings of Ontology Matching Workshop},
    Year = {2016},
    Keywords = {sherif hassan soru lehmann ngonga geoknow group_aksw SIMBA sys:relevantFor:infai sys:relevantFor:bis limes},
    Owner = {sherif},
    Timestamp = {2016.09.26},
    Url = {http://disi.unitn.it/~pavel/om2016/papers/om2016_poster5.pdf}
    }

  • A. Zaveri, A. Rula, A. Maurino, R. Pietrobon, J. Lehmann, and S. Auer, “Quality Assessment for Linked Data,” Semantic Web Journal, vol. 7, iss. 1, pp. 63-93, 2016.
    [BibTeX] [Download PDF]
    @Article{Zaveri2016,
    Title = {Quality Assessment for Linked Data},
    Author = {Zaveri, Amrapali and Rula, Anisa and Maurino, Andrea and Pietrobon, Ricardo and Lehmann, Jens and Auer, S{\"o}ren},
    Journal = {Semantic Web Journal},
    Year = {2016},
    Number = {1},
    Pages = {63--93},
    Volume = {7},
    Timestamp = {2017.10.12},
    Url = {http://www.semantic-web-journal.net/content/quality-assessment-linked-data-survey}
    }

  • H. Thakkar, M. Dubey, G. Sejdiu, A. Ngonga Ngomo, J. Debattista, C. Lange, J. Lehmann, S. Auer, and M. Vidal, “LITMUS: An Open Extensible Framework for Benchmarking RDF Data Management Solutions,” arXiv preprint arXiv:1608.02800, 2016.
    [BibTeX]
    @Other{ThakkarEtAl:LITMUS16,
    Title = {{LITMUS}: {A}n {O}pen {E}xtensible {F}ramework for {B}enchmarking {RDF} {D}ata {M}anagement {S}olutions},
    Author = {Harsh Thakkar and Mohnish Dubey and Gezim Sejdiu and Ngonga Ngomo, Axel-Cyrille and Jeremy Debattista and Christoph Lange and Jens Lehmann and S{\"o}ren Auer and Maria-Esther Vidal},
    Date = {2016-08-09},
    Eprint = {1608.02800},
    Eprintclass = {cs.PF},
    Eprinttype = {arxiv},
    File = {http://arxiv.org/pdf/1608.02800},
    Pubs = {clange,vidal},
    Year = {2016}
    }

  • H. Jabeen and J. Lehmann, Distributed Big Data platform for Life Sciences, 2016.
    [BibTeX]
    @Misc{Jab_p,
    Title = {Distributed Big Data platform for Life Sciences},
    Author = {Hajira Jabeen and Jens Lehmann},
    Note = {KAUST Research Conference on Computational and experimental interfaces of Big Data and Biotechnology, 25 - 27 January, King Abdullah University of Science and Technology, KSA},
    Year = {2016}
    }

  • U. U. Hassan, E. Curry, A. Zaveri, E. Marx, and J. Lehmann, “ACRyLIQ: Leveraging DBpedia for Adaptive Crowdsourcing in Linked Data Quality Assessment,” in 20th International Conference on Knowledge Engineering and Knowledge Management (EKAW), November 19-23, 2016, Bologna, Italy, 2016.
    [BibTeX]
    @InProceedings{umair/2016,
    Title = {{ACRyLIQ}: {L}everaging {DB}pedia for {A}daptive {C}rowdsourcing in {L}inked {D}ata {Q}uality {A}ssessment},
    Author = {Hassan, Umair Ul and Curry, Edward and Zaveri, Amrapali and Marx, Edgard and Lehmann, Jens},
    Booktitle = {20th International Conference on Knowledge Engineering and Knowledge Management (EKAW), November 19-23, 2016, Bologna, Italy},
    Year = {2016},
    Series = {EKAW 2016},
    Biburl = {http://www.bibsonomy.org/bibtex/2cbea4b8d22c87292c6e4405e7f1e60f2/aksw},
    Interhash = {e3b34694cb7f4c06ce1d782ff3d20d44},
    Intrahash = {cbea4b8d22c87292c6e4405e7f1e60f2},
    Keywords = {MOLE group_aksw lehmann marx mole simba zaveri}
    }

  • S. Bin, L. Bühmann, J. Lehmann, and A. Ngonga Ngomo, “Towards SPARQL-Based Induction for Large-Scale RDF Data sets,” in ECAI 2016 – Proceedings of the 22nd European Conference on Artificial Intelligence, 2016, pp. 1551-1552. doi:10.3233/978-1-61499-672-9-1551
    [BibTeX] [Download PDF]
    @InProceedings{sparqllearner,
    Title = {Towards {SPARQL}-Based Induction for Large-Scale {RDF} Data sets},
    Author = {Bin, Simon and B{\"u}hmann, Lorenz and Lehmann, Jens and {Ngonga Ngomo}, Axel-Cyrille},
    Booktitle = {ECAI 2016 - Proceedings of the 22nd European Conference on Artificial Intelligence},
    Year = {2016},
    Editor = {Kaminka, Gal A. and Fox, Maria and Bouquet, Paolo and H{\"u}llermeier, Eyke and Dignum, Virginia and Dignum, Frank and van Harmelen, Frank},
    Pages = {1551--1552},
    Publisher = {IOS Press},
    Series = {Frontiers in Artificial Intelligence and Applications},
    Volume = {285},
    Doi = {10.3233/978-1-61499-672-9-1551},
    ISBN = {978-1-61499-672-9},
    Keywords = {2016 sbin buehmann lehmann ngonga sake group_aksw dllearner},
    Language = {English},
    Url = {http://svn.aksw.org/papers/2016/ECAI_SPARQL_Learner/public.pdf}
    }

  • L. Bühmann, J. Lehmann, and P. Westphal, “DL-Learner – A framework for inductive learning on the Semantic Web,” Web Semantics: Science, Services and Agents on the World Wide Web, vol. 39, pp. 15-24, 2016. doi:http://dx.doi.org/10.1016/j.websem.2016.06.001
    [BibTeX] [Abstract] [Download PDF]
    In this system paper, we describe the DL-Learner framework, which supports supervised machine learning using OWL and RDF for background knowledge representation. It can be beneficial in various data and schema analysis tasks with applications in different standard machine learning scenarios, e.g. in the life sciences, as well as Semantic Web specific applications such as ontology learning and enrichment. Since its creation in 2007, it has become the main OWL and RDF-based software framework for supervised structured machine learning and includes several algorithm implementations, usage examples and has applications building on top of the framework. The article gives an overview of the framework with a focus on algorithms and use cases.

    @Article{Buehmann2016,
    Title = {DL-Learner - A framework for inductive learning on the Semantic Web},
    Author = {Lorenz B{\"u}hmann and Jens Lehmann and Patrick Westphal},
    Journal = {Web Semantics: Science, Services and Agents on the World Wide Web},
    Year = {2016},
    Pages = {15--24},
    Volume = {39},
    Abstract = {In this system paper, we describe the DL-Learner framework, which supports supervised machine learning using OWL and RDF for background knowledge representation. It can be beneficial in various data and schema analysis tasks with applications in different standard machine learning scenarios, e.g. in the life sciences, as well as Semantic Web specific applications such as ontology learning and enrichment. Since its creation in 2007, it has become the main OWL and RDF-based software framework for supervised structured machine learning and includes several algorithm implementations, usage examples and has applications building on top of the framework. The article gives an overview of the framework with a focus on algorithms and use cases.},
    Doi = {http://dx.doi.org/10.1016/j.websem.2016.06.001},
    ISSN = {1570-8268},
    Keywords = {dllearner group_aksw group_mole mole buehmann lehmann westphal dllearner sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lmol MOLE},
    Owner = {me},
    Timestamp = {2016.10.13},
    Url = {http://www.sciencedirect.com/science/article/pii/S157082681630018X}
    }
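The supervised setting DL-Learner addresses can be sketched in a toy form: candidate class expressions are scored by how well their retrieved instances cover the positive examples while excluding the negatives. Instance retrieval is mocked here with plain Python sets, and the expression names and individuals are invented; the real framework performs OWL reasoning over RDF background knowledge.

```python
# Toy sketch of coverage-based scoring of candidate class expressions,
# as in supervised structured machine learning a la DL-Learner.
# Instance retrieval is mocked with sets; names are hypothetical.

def accuracy(instances, positives, negatives):
    """Fraction of examples classified correctly by a candidate expression."""
    covered_pos = len(instances & positives)
    excluded_neg = len(negatives - instances)
    return (covered_pos + excluded_neg) / (len(positives) + len(negatives))

positives = {"socrates", "plato"}
negatives = {"fido", "rex"}

# Hypothetical candidate expressions and the individuals they retrieve
candidates = {
    "Person": {"socrates", "plato", "aristotle"},
    "Animal": {"fido", "rex", "aristotle"},
}

best = max(candidates, key=lambda c: accuracy(candidates[c], positives, negatives))
print(best)  # "Person" covers both positives and excludes both negatives
```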

  • M. Dubey, S. Dasgupta, A. Sharma, K. Höffner, and J. Lehmann, “AskNow: A Framework for Natural Language Query Formalization in SPARQL,” in Proc. of the Extended Semantic Web Conference 2016, 2016.
    [BibTeX] [Download PDF]
    @InProceedings{eswc_asknow,
    Title = {AskNow: A Framework for Natural Language Query Formalization in SPARQL},
    Author = {Mohnish Dubey and Sourish Dasgupta and Ankit Sharma and Konrad H{\"o}ffner and Jens Lehmann},
    Booktitle = {Proc. of the Extended Semantic Web Conference 2016},
    Year = {2016},
    Keywords = {group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:aligned 2016 lehmann hoeffner},
    Url = {http://jens-lehmann.org/files/2016/eswc_asknow.pdf}
    }

  • I. Ermilov, J. Lehmann, M. Martin, and S. Auer, “LODStats: The Data Web Census Dataset,” in Proceedings of 15th International Semantic Web Conference – Resources Track (ISWC’2016), 2016.
    [BibTeX] [Abstract] [Download PDF]
    Over the past years, the size of the Data Web has increased significantly, which makes obtaining general insights into its growth and structure both more challenging and more desirable. The lack of such insights hinders important data management tasks such as quality, privacy and coverage analysis. In this paper, we present LODStats, which provides a comprehensive picture of the current state of a significant part of the Data Web. LODStats integrates RDF datasets from data.gov, publicdata.eu and datahub.io data catalogs and at the time of writing lists over 9 000 RDF datasets. For each RDF dataset, LODStats collects comprehensive statistics and makes these available in adhering to the LDSO vocabulary. This analysis has been regularly published and enhanced over the past four years at the public platform lodstats.aksw.org. We give a comprehensive overview over the resulting dataset.

    @InProceedings{iermilov-2016-lodstats-iswc,
    Title = {LODStats: The Data Web Census Dataset},
    Author = {Ivan Ermilov and Jens Lehmann and Michael Martin and S\"oren Auer},
    Booktitle = {Proceedings of 15th International Semantic Web Conference - Resources Track (ISWC'2016)},
    Year = {2016},
    Abstract = {Over the past years, the size of the Data Web has increased significantly, which makes obtaining general insights into its growth and structure both more challenging and more desirable. The lack of such insights hinders important data management tasks such as quality, privacy and coverage analysis. In this paper, we present LODStats, which provides a comprehensive picture of the current state of a significant part of the Data Web. LODStats integrates RDF datasets from data.gov, publicdata.eu and datahub.io data catalogs and at the time of writing lists over 9 000 RDF datasets. For each RDF dataset, LODStats collects comprehensive statistics and makes these available in adhering to the LDSO vocabulary. This analysis has been regularly published and enhanced over the past four years at the public platform lodstats.aksw.org. We give a comprehensive overview over the resulting dataset.},
    Keywords = {2016 simba group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:leds iermilov auer martin lehmann bde leds lodstats},
    Owner = {iermilov},
    Timestamp = {2016.07.14},
    Url = {http://svn.aksw.org/papers/2016/ISWC_LODStats_Resource_Description/public.pdf}
    }
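The kind of per-dataset statistics LODStats gathers can be sketched in a few lines; the real system implements many more statistical criteria over full RDF dumps, and the triples below are invented for illustration:

```python
# Minimal sketch of per-dataset RDF statistics in the spirit of LODStats
# (hypothetical toy data; the actual system processes whole catalog dumps).
from collections import Counter

triples = [
    ("ex:Leipzig", "rdf:type", "ex:City"),
    ("ex:Leipzig", "ex:population", "560000"),
    ("ex:Bonn", "rdf:type", "ex:City"),
]

stats = {
    "triples": len(triples),
    "distinct_subjects": len({s for s, _, _ in triples}),
    "distinct_predicates": len({p for _, p, _ in triples}),
    "predicate_usage": Counter(p for _, p, _ in triples),
}
print(stats)
```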

  • K. Höffner, J. Lehmann, and R. Usbeck, “CubeQA—Question Answering on RDF Data Cubes,” in Proceedings of the 15th International Semantic Web Conference (ISWC2016), 2016.
    [BibTeX] [Download PDF]
    @InProceedings{cubeqa,
    Title = {{CubeQA}---{Q}uestion {A}nswering on {RDF} {D}ata {C}ubes},
    Author = {Konrad H{\"o}ffner and Jens Lehmann and Ricardo Usbeck},
    Booktitle = {Proceedings of the 15th International Semantic Web Conference (ISWC2016)},
    Year = {2016},
    Keywords = {hoeffner lehmann usbeck group_aksw sys:relevantFor:infai sys:relevantFor:bis simba},
    Owner = {konrad},
    Url = {http://svn.aksw.org/papers/2016/ISWC_cubeqa/}
    }

  • D. Esteves, P. N. Mendes, D. Moussallem, J. C. Duarte, A. Zaveri, J. Lehmann, C. B. Neto, I. Costa, and M. C. Cavalcanti, “MEX Interfaces: Automating Machine Learning Metadata Generation,” in 12th International Conference on Semantic Systems (SEMANTiCS 2016), 12-15 September 2016, Leipzig, Germany, 2016.
    [BibTeX] [Abstract] [Download PDF]
    Despite recent efforts to achieve a high level of interoperability of Machine Learning (ML) experiments, positively collaborating with the Reproducible Research context, we still run into the problems created due to the existence of different ML platforms: each of those have a specific conceptualization or schema for representing data and metadata. This scenario leads to an extra coding-effort to achieve both the desired interoperability and a better provenance level as well as a more automatized environment for obtaining the generated results. Hence, when using ML libraries, it is a common task to re-design specific data models (schemata) and develop wrappers to manage the produced outputs. In this article, we discuss this gap focusing on the solution for the question: “What is the cleanest and lowest-impact solution to achieve both higher interoperability and provenance metadata levels in the Integrated Development Environments (IDE) context and how to facilitate the inherent data querying task?”. We introduce a novel and low impact methodology specifically designed for code built in that context, combining semantic web concepts and reflection in order to minimize the gap for exporting ML metadata in a structured manner, allowing embedded code annotations that are, in run-time, converted in one of the state-of-the-art ML schemas for the Semantic Web: the MEX Vocabulary.

    @InProceedings{estevesMEX2016,
    Title = {{MEX} {I}nterfaces: {A}utomating {M}achine {L}earning {M}etadata {G}eneration},
    Author = {Diego Esteves and Pablo N. Mendes and Diego Moussallem and Julio Cesar Duarte and Amrapali Zaveri and Jens Lehmann and Ciro Baron Neto and Igor Costa and Maria Claudia Cavalcanti},
    Booktitle = {12th {I}nternational {C}onference on {S}emantic {S}ystems (SEMANTiCS 2016), 12-15 September 2016, Leipzig, Germany},
    Year = {2016},
    Abstract = {Despite recent efforts to achieve a high level of interoperability of Machine Learning (ML) experiments, positively collaborating with the Reproducible Research context, we still run into the problems created due to the existence of different ML platforms: each of those have a specific conceptualization or schema for representing data and metadata. This scenario leads to an extra coding-effort to achieve both the desired interoperability and a better provenance level as well as a more automatized environment for obtaining the generated results. Hence, when using ML libraries, it is a common task to re-design specific data models (schemata) and develop wrappers to manage the produced outputs. In this article, we discuss this gap focusing on the solution for the question: ``What is the cleanest and lowest-impact solution to achieve both higher interoperability and provenance metadata levels in the Integrated Development Environments (IDE) context and how to facilitate the inherent data querying task?''. We introduce a novel and low impact methodology specifically designed for code built in that context, combining semantic web concepts and reflection in order to minimize the gap for exporting ML metadata in a structured manner, allowing embedded code annotations that are, in run-time, converted in one of the state-of-the-art ML schemas for the Semantic Web: the MEX Vocabulary.},
    Keywords = {mex 2016 sys:relevantFor:infai sys:relevantFor:bis hobbit projecthobbit esteves baron group_aksw lehmann sda mole moussallem MOLE},
    Url = {https://www.researchgate.net/publication/305143958_MEX_InterfacesAutomating_Machine_Learning_Metadata_Generation}
    }

  • M. Acosta, A. Zaveri, E. Simperl, D. Kontokostas, F. Flöck, and J. Lehmann, “Detecting Linked Data Quality Issues via Crowdsourcing: A DBpedia Study,” Semantic Web Journal, 2016.
    [BibTeX] [Download PDF]
    @Article{acosta2016detecting,
    Title = {Detecting Linked Data Quality Issues via Crowdsourcing: A DBpedia Study},
    Author = {Acosta, Maribel and Zaveri, Amrapali and Simperl, Elena and Kontokostas, Dimitris and Fl{\"o}ck, Fabian and Lehmann, Jens},
    Journal = {Semantic Web Journal},
    Year = {2016},
    Keywords = {2016 event_eswc group_aksw sys:relevantFor:infai sys:relevantFor:bis lehmann MOLE},
    Url = {http://www.semantic-web-journal.net/system/files/swj1293.pdf}
    }

2015

  • J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. van Kleef, S. Auer, and C. Bizer, “DBpedia – A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia,” Semantic Web Journal, vol. 6, iss. 2, pp. 167-195, 2015.
    [BibTeX] [Download PDF]
    @Article{dbpedia-swj,
    Title = {{DBpedia} - A Large-scale, Multilingual Knowledge Base Extracted from Wikipedia},
    Author = {Jens Lehmann and Robert Isele and Max Jakob and Anja Jentzsch and Dimitris Kontokostas and Pablo N. Mendes and Sebastian Hellmann and Mohamed Morsey and Patrick van Kleef and S{\"o}ren Auer and Christian Bizer},
    Journal = {Semantic Web Journal},
    Year = {2015},
    Number = {2},
    Pages = {167--195},
    Volume = {6},
    Keywords = {2014 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_Classification lod2page lehmann kontokostas geoknow},
    Url = {http://jens-lehmann.org/files/2014/swj_dbpedia.pdf}
    }

  • J. Lehmann and O. Corcho, “2nd Special Issue on Linked Dataset Descriptions,” Semantic Web Journal, 2015.
    [BibTeX]
    @Article{ld_datasets_editorial,
    Title = {2nd Special Issue on Linked Dataset Descriptions},
    Author = {Jens Lehmann and Oscar Corcho},
    Journal = {Semantic Web Journal},
    Year = {2015},
    Keywords = {2015 group_aksw lehmann sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow MOLE geoknow}
    }

  • J. Lehmann, S. Athanasiou, A. Both, A. Garcia-Rojas, G. Giannopoulos, D. Hladky, K. Hoeffner, J. J. L. Grange, A. N. Ngomo, M. A. Sherif, C. Stadler, M. Wauer, P. Westphal, and V. Zaslawski, “Managing Geospatial Linked Data in the GeoKnow Project,” in Studies on the Semantic Web, 2015, pp. 51-78.
    [BibTeX] [Download PDF]
    @InBook{ios_geoknow_chapter,
    Title = {Managing Geospatial Linked Data in the GeoKnow Project},
    Author = {Jens Lehmann and Spiros Athanasiou and Andreas Both and Alejandra Garcia-Rojas and Giorgos Giannopoulos and Daniel Hladky and Konrad Hoeffner and Jon Jay Le Grange and Axel-Cyrille Ngonga Ngomo and Mohamed Ahmed Sherif and Claus Stadler and Matthias Wauer and Patrick Westphal and Vadim Zaslawski},
    Pages = {51--78},
    Year = {2015},
    Series = {Studies on the Semantic Web},
    Keywords = {2015 group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow lehmann ngonga MOLE sherif hoeffner geoknow wauer westphal},
    Url = {http://jens-lehmann.org/files/2015/ios_geoknow_chapter.pdf}
    }

  • J. Lehmann, S. Athanasiou, A. Both, L. Buehmann, A. Garcia-Rojas, G. Giannopoulos, D. Hladky, K. Hoeffner, J. J. L. Grange, A. N. Ngomo, R. Pietzsch, R. Isele, M. A. Sherif, C. Stadler, M. Wauer, and P. Westphal, “The GeoKnow Handbook,” 2015.
    [BibTeX] [Download PDF]
    @TechReport{geoknow_handbook,
    Title = {The {G}eo{K}now Handbook},
    Author = {Jens Lehmann and Spiros Athanasiou and Andreas Both and Lorenz Buehmann and Alejandra Garcia-Rojas and Giorgos Giannopoulos and Daniel Hladky and Konrad Hoeffner and Jon Jay Le Grange and Axel-Cyrille Ngonga Ngomo and Rene Pietzsch and Robert Isele and Mohamed Ahmed Sherif and Claus Stadler and Matthias Wauer and Patrick Westphal},
    Year = {2015},
    Keywords = {2015 group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow lehmann ngonga MOLE sherif hoeffner geoknow westphal buehmann},
    Url = {http://jens-lehmann.org/files/2015/geoknow_handbook.pdf}
    }

  • E. Marx, T. Soru, D. Esteves, A. Ngonga Ngomo, and J. Lehmann, “An Open Question Answering Framework,” in The 14th International Semantic Web Conference, Posters & Demonstrations Track, 2015.
    [BibTeX]
    @InProceedings{openqa2015,
    Title = {An {O}pen {Q}uestion {A}nswering {F}ramework},
    Author = {Edgard Marx and Tommaso Soru and Diego Esteves and Axel-Cyrille {Ngonga Ngomo} and Jens Lehmann},
    Booktitle = {The 14th International Semantic Web Conference, Posters \& Demonstrations Track},
    Year = {2015},
    Keywords = {SIMBA group_aksw marx ngonga smart lehmann openqa esteves mole soru 2015},
    Owner = {marx}
    }

  • M. A. Sherif, A. Ngonga Ngomo, and J. Lehmann, “Automating RDF Dataset Transformation and Enrichment,” in 12th Extended Semantic Web Conference, Portorož, Slovenia, 31st May – 4th June 2015, 2015.
    [BibTeX] [Abstract] [Download PDF]
    With the adoption of RDF across several domains, come growing requirements pertaining to the completeness and quality of RDF datasets. Currently, this problem is most commonly addressed by manually devising means of enriching an input dataset. The few tools that aim at supporting this endeavour usually focus on supporting the manual definition of enrichment pipelines. In this paper, we present a supervised learning approach based on a refinement operator for enriching RDF datasets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against eight manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples.

    @InProceedings{DEER_2015,
    Title = {Automating {RDF} Dataset Transformation and Enrichment},
    Author = {Sherif, {Mohamed Ahmed} and {Ngonga Ngomo}, Axel-Cyrille and Lehmann, Jens},
    Booktitle = {12th Extended Semantic Web Conference, Portoro{\v{z}}, Slovenia, 31st May - 4th June 2015},
    Year = {2015},
    Publisher = {Springer},
    Abstract = {With the adoption of RDF across several domains, come growing requirements pertaining to the completeness and quality of RDF datasets. Currently, this problem is most commonly addressed by manually devising means of enriching an input dataset. The few tools that aim at supporting this endeavour usually focus on supporting the manual definition of enrichment pipelines. In this paper, we present a supervised learning approach based on a refinement operator for enriching RDF datasets. We show how we can use exemplary descriptions of enriched resources to generate accurate enrichment pipelines. We evaluate our approach against eight manually defined enrichment pipelines and show that our approach can learn accurate pipelines even when provided with a small number of training examples.},
    Keywords = {2015 group_aksw sys:relevantFor:geoknow sys:relevantFor:infai sys:relevantFor:bis ngonga simba sherif group_aksw geoknow deer lehmann MOLE},
    Url = {http://svn.aksw.org/papers/2015/ESWC_DEER/public.pdf}
    }

  • A. Zaveri, A. Rula, A. Maurino, R. Pietrobon, J. Lehmann, and S. Auer, “Quality Assessment for Linked Data: A Survey,” Semantic Web Journal, 2015.
    [BibTeX] [Download PDF]
    @Article{Zaveri2012:LODQ,
    Title = {Quality Assessment for {L}inked {D}ata: {A} Survey},
    Author = {Zaveri, Amrapali and Rula, Anisa and Maurino, Andrea and Pietrobon, Ricardo and Lehmann, Jens and Auer, S{\"o}ren},
    Journal = {Semantic Web Journal},
    Year = {2015},
    Date-modified = {2015-02-06 10:39:04 +0000},
    Keywords = {2014 group_aksw zaveri pietrobon auer lehmann sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page MOLE dataquality},
    Url = {http://www.semantic-web-journal.net/content/quality-assessment-linked-data-survey}
    }

  • C. Stadler, J. Unbehauen, P. Westphal, M. A. Sherif, and J. Lehmann, “Simplified RDB2RDF Mapping,” in Proceedings of the 8th Workshop on Linked Data on the Web (LDOW2015), Florence, Italy, 2015.
    [BibTeX] [Abstract] [Download PDF]
    The combination of the advantages of widely used relational databases and semantic technologies has attracted significant research over the past decade. In particular, mapping languages for the conversion of databases to RDF knowledge bases have been developed and standardized in the form of R2RML. In this article, we first review those mapping languages and then devise work towards a unified formal model for them. Based on this, we present the Sparqlification Mapping Language (SML), which provides an intuitive way to declare mappings based on SQL VIEWS and SPARQL construct queries. We show that SML has the same expressivity as R2RML by enumerating the language features and show the correspondences, and we outline how one syntax can be converted into the other. A conducted user study for this paper juxtaposing SML and R2RML provides evidence that SML is a more compact syntax which is easier to understand and read and thus lowers the barrier to offer SPARQL access to relational databases.

    @InProceedings{sml,
    Title = {Simplified {RDB2RDF} Mapping},
    Author = {Claus Stadler and Joerg Unbehauen and Patrick Westphal and Mohamed Ahmed Sherif and Jens Lehmann},
    Booktitle = {Proceedings of the 8th Workshop on Linked Data on the Web (LDOW2015), Florence, Italy},
    Year = {2015},
    Abstract = {The combination of the advantages of widely used relational databases and semantic technologies has attracted significant research over the past decade. In particular, mapping languages for the conversion of databases to RDF knowledge bases have been developed and standardized in the form of R2RML. In this article, we first review those mapping languages and then devise work towards a unified formal model for them. Based on this, we present the Sparqlification Mapping Language (SML), which provides an intuitive way to declare mappings based on SQL VIEWS and SPARQL construct queries. We show that SML has the same expressivity as R2RML by enumerating the language features and showing the correspondences, and we outline how one syntax can be converted into the other. A user study conducted for this paper, juxtaposing SML and R2RML, provides evidence that SML is a more compact syntax which is easier to understand and read and thus lowers the barrier to offering SPARQL access to relational databases.},
    Bdsk-url-1 = {svn.aksw.org/papers/2015/LDOW_SML/paper-camery-ready_public.pdf},
    Keywords = {2015 group_aksw group_mole mole stadler lehmann sherif sys:relevantFor:geoknow geoknow peer-reviewed MOLE westphal},
    Url = {svn.aksw.org/papers/2015/LDOW_SML/paper-camery-ready_public.pdf}
    }

  • R. Speck, D. Esteves, J. Lehmann, and A. Ngonga Ngomo, “DeFacto – A Multilingual Fact Validation Interface,” in 14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Semantic Web Challenge Proceedings), 2015.
    [BibTeX] [Abstract] [Download PDF]
    The curation of a knowledge base is a key task for ensuring the correctness and traceability of the knowledge provided in the said knowledge base. This task is often carried out manually by human curators, who attempt to provide reliable facts and their respective sources in a three-step process: issuing appropriate keyword queries for the fact to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. However, this process is very time-consuming, mainly due to the human curators having to scrutinize the web pages retrieved by search engines. This demo paper demonstrates the RESTful implementation of DeFacto (Deep Fact Validation) – an approach able to validate facts in RDF by finding trustworthy sources for them on the Web. DeFacto aims to support the validation of facts by supplying the user with (1) relevant excerpts of web pages as well as (2) useful additional information, including (3) a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. We also provide an extension for finding similar resources obtained from Linked Data, using the sameas.org service as backend. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate the time frame within which a fact was valid.

    @InProceedings{defactorest,
    Title = {De{F}acto - {A} {M}ultilingual {F}act {V}alidation {I}nterface},
    Author = {Ren{\'e} Speck and Diego Esteves and Jens Lehmann and Axel-Cyrille {Ngonga Ngomo}},
    Booktitle = {14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Semantic Web Challenge Proceedings)},
    Year = {2015},
    Editor = {Sean Bechhofer and Kostis Kyzirakos},
    Note = {Semantic Web Challenge, International Semantic Web Conference 2015},
    Abstract = {The curation of a knowledge base is a key task for ensuring the correctness and traceability of the knowledge provided in the said knowledge base. This task is often carried out manually by human curators, who attempt to provide reliable facts and their respective sources in a three-step process: issuing appropriate keyword queries for the fact to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. However, this process is very time-consuming, mainly due to the human curators having to scrutinize the web pages retrieved by search engines. This demo paper demonstrates the RESTful implementation of DeFacto (Deep Fact Validation) - an approach able to validate facts in RDF by finding trustworthy sources for them on the Web. DeFacto aims to support the validation of facts by supplying the user with (1) relevant excerpts of web pages as well as (2) useful additional information, including (3) a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. We also provide an extension for finding similar resources obtained from Linked Data, using the sameas.org service as backend. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate the time frame within which a fact was valid.},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2015/swc_defacto.pdf},
    Keywords = {defacto ngonga esteves aksw 2015 lehmann speck rene},
    Url = {http://jens-lehmann.org/files/2015/swc_defacto.pdf}
    }

  • A. Dimou, D. Kontokostas, M. Freudenberg, R. Verborgh, J. Lehmann, E. Mannens, S. Hellmann, and R. V. de Walle, “Test-driven Assessment of [R2]RML Mappings to Improve Dataset Quality.,” in International Semantic Web Conference (Posters & Demos), 2015.
    [BibTeX] [Download PDF]
    @InProceedings{r2rml_pd,
    Title = {Test-driven Assessment of [R2]RML Mappings to Improve Dataset Quality.},
    Author = {Dimou, Anastasia and Kontokostas, Dimitris and Freudenberg, Markus and Verborgh, Ruben and Lehmann, Jens and Mannens, Erik and Hellmann, Sebastian and de Walle, Rik Van},
    Booktitle = {International Semantic Web Conference (Posters \& Demos)},
    Year = {2015},
    Editor = {Villata, Serena and Pan, Jeff Z. and Dragoni, Mauro},
    Publisher = {CEUR-WS.org},
    Series = {CEUR Workshop Proceedings},
    Volume = {1486},
    Added-at = {2015-12-22T00:00:00.000+0100},
    Biburl = {https://www.bibsonomy.org/bibtex/2c6871e01a755c64b1df4804a815d7844/dblp},
    Crossref = {conf/semweb/2015p},
    Ee = {http://ceur-ws.org/Vol-1486/paper_108.pdf},
    Interhash = {2bbe4572996c626514c9ad7a1c418140},
    Intrahash = {c6871e01a755c64b1df4804a815d7844},
    Keywords = {2015 lehmann group_aksw MOLE hellmann},
    Timestamp = {2016-05-31T12:33:07.000+0200},
    Url = {http://dblp.uni-trier.de/db/conf/semweb/iswc2015p.html#DimouKFVLMHW15a}
    }

  • M. Hassan, J. Lehmann, and A. N. Ngomo, “Interlinking: Performance Assessment of User Evaluation vs. Supervised Learning Approaches,” in Proceedings of the 8th Workshop on Linked Data on the Web (LDOW2015), Florence, Italy, 2015.
    [BibTeX] [Abstract] [Download PDF]
    Interlinking knowledge bases is widely recognized as an important but challenging problem. A significant amount of research has been undertaken to provide solutions to this problem with varying degrees of automation and user involvement. In this paper, we present a two-staged experiment for the creation of gold standards that act as benchmarks for several interlinking algorithms. In the first stage the gold standards are generated through a manual validation process, highlighting the role of users. Using the gold standards obtained from this stage, we assess the performance of human evaluators in addition to supervised interlinking algorithms. We evaluate our approach on several data interlinking tasks with respect to precision, recall and F-measure. Additionally, we perform a qualitative analysis of the types of errors made by humans and machines.

    @InProceedings{mofeedHuman15,
    Title = {Interlinking: Performance Assessment of User Evaluation vs. Supervised Learning Approaches},
    Author = {Mofeed Hassan and Jens Lehmann and Axel-C. Ngonga Ngomo},
    Booktitle = {Proceedings of the 8th Workshop on Linked Data on the Web (LDOW2015), Florence, Italy},
    Year = {2015},
    Abstract = {Interlinking knowledge bases is widely recognized as an important but challenging problem. A significant amount of research has been undertaken to provide solutions to this problem with varying degrees of automation and user involvement. In this paper, we present a two-staged experiment for the creation of gold standards that act as benchmarks for several interlinking algorithms. In the first stage the gold standards are generated through a manual validation process, highlighting the role of users. Using the gold standards obtained from this stage, we assess the performance of human evaluators in addition to supervised interlinking algorithms. We evaluate our approach on several data interlinking tasks with respect to precision, recall and F-measure. Additionally, we perform a qualitative analysis of the types of errors made by humans and machines.},
    Keywords = {2015 group_aksw group_mole MOLE mole hassan lehmann ngonga MOLE sys:relevantFor:geoknow geoknow},
    Owner = {mofeed},
    Timestamp = {2015.07.31},
    Url = {http://svn.aksw.org/papers/2015/LDOW_Human/public.pdf}
    }

  • K. Höffner, M. Martin, and J. Lehmann, “LinkedSpending: OpenSpending becomes Linked Open Data,” Semantic Web Journal, 2015. doi:10.3233/SW-150172
    [BibTeX] [Abstract] [Download PDF]
    There is a high public demand to increase transparency in government spending. Open spending data has the power to reduce corruption by increasing accountability, and strengthens democracy because voters can make better informed decisions. An informed and trusting public also strengthens the government itself because it is more likely to commit to large projects. OpenSpending.org is an open platform that provides public finance data from governments around the world. In this article, we present its RDF conversion LinkedSpending, which provides more than five million planned and carried out financial transactions in 627 datasets from all over the world from 2005 to 2035 as Linked Open Data. This data is represented in the RDF Data Cube vocabulary and is freely available and openly licensed.

    @Article{linkedspending,
    Title = {{LinkedSpending}: {OpenSpending} becomes {Linked Open Data}},
    Author = {Konrad H{\"o}ffner and Michael Martin and Jens Lehmann},
    Journal = {Semantic Web Journal},
    Year = {2015},
    Abstract = {There is a high public demand to increase transparency in government spending. Open spending data has the power to reduce corruption by increasing accountability, and strengthens democracy because voters can make better informed decisions. An informed and trusting public also strengthens the government itself because it is more likely to commit to large projects. OpenSpending.org is an open platform that provides public finance data from governments around the world. In this article, we present its RDF conversion LinkedSpending, which provides more than five million planned and carried out financial transactions in 627 datasets from all over the world from 2005 to 2035 as Linked Open Data. This data is represented in the RDF Data Cube vocabulary and is freely available and openly licensed.},
    Bdsk-url-1 = {http://www.semantic-web-journal.net/system/files/swj923.pdf},
    Doi = {10.3233/SW-150172},
    Keywords = {semantic web transparency finance budget OpenSpending RDF public expenditure Open Data hoeffner martin lehmann group_aksw SIMBA MOLE sys:relevantFor:infai sys:relevantFor:bis MOLE linkedspending},
    Timestamp = {2015.01.08},
    Url = {http://www.semantic-web-journal.net/system/files/swj923.pdf}
    }

  • A. Both, M. Wauer, A. Garcia-Rojas, D. Hladky, and J. Lehmann, “GeoKnow Generator Workbench — An integrated tool supporting the linked data lifecycle for enterprise usage,” in Proceedings of the 11th International Conference on Semantic Systems Posters and Demos, 2015.
    [BibTeX]
    @InProceedings{geoknow_generator_pd,
    Title = {GeoKnow Generator Workbench -- An integrated tool supporting the linked data lifecycle for enterprise usage},
    Author = {Andreas Both and Matthias Wauer and Alejandra Garcia-Rojas and Daniel Hladky and Jens Lehmann},
    Booktitle = {Proceedings of the 11th International Conference on Semantic Systems Posters and Demos},
    Year = {2015},
    Month = sep,
    Publisher = {ACM},
    Series = {SEM '15},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow 2015 lehmann mole wauer},
    Location = {Vienna, Austria}
    }

  • D. Esteves, D. Moussallem, C. B. Neto, J. Lehmann, M. C. Cavalcanti, and J. C. Duarte, “Interoperable Machine Learning Metadata using MEX.,” in 14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Posters & Demos), 2015.
    [BibTeX] [Download PDF]
    @InProceedings{estevesMNLCD15,
    Title = {Interoperable {M}achine {L}earning {M}etadata using {MEX}.},
    Author = {Diego Esteves and Diego Moussallem and Ciro Baron Neto and Jens Lehmann and Maria Claudia Cavalcanti and Julio Cesar Duarte},
    Booktitle = {14th International Semantic Web Conference (ISWC 2015), 11-15 October 2015, Bethlehem, Pennsylvania, USA (Posters \& Demos)},
    Year = {2015},
    Editor = {Serena Villata and Jeff Z. Pan and Mauro Dragoni},
    Publisher = {CEUR-WS.org},
    Series = {CEUR Workshop Proceedings},
    Volume = {1486},
    Bdsk-url-1 = {http://ceur-ws.org/Vol-1486/paper_102.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/291927b04e3cd969e894a6c93fd05af57/dblp},
    Crossref = {conf/semweb/2015p},
    Keywords = {mex esteves aksw dblp 2015 baron neto lehmann moussallem},
    Timestamp = {2015-12-24T12:18:02.000+0100},
    Url = {http://dblp.uni-trier.de/db/conf/semweb/iswc2015p.html#EstevesMNLCD15}
    }

  • D. Esteves, D. Moussallem, C. B. Neto, T. Soru, R. Usbeck, M. Ackermann, and J. Lehmann, “MEX Vocabulary: A Lightweight Interchange Format for Machine Learning Experiments,” in 11th International Conference on Semantic Systems (SEMANTiCS 2015), 15-17 September 2015, Vienna, Austria, 2015.
    [BibTeX] [Abstract] [Download PDF]
    Over the last decades, many machine learning experiments have been published, benefiting scientific progress. In order to compare machine-learning experiment results with each other and collaborate positively, they need to be performed thoroughly on the same computing environment, using the same sample datasets and algorithm configurations. Besides this, practical experience shows that scientists and engineers tend to have large output data in their experiments, which is both difficult to analyze and archive properly without provenance metadata. However, the Linked Data community still misses a light-weight specification for interchanging machine-learning metadata over different architectures to achieve a higher level of interoperability. In this paper, we address this gap by presenting a novel vocabulary dubbed MEX. We show that MEX provides a prompt method to describe experiments with a special focus on data provenance and fulfills the requirements for long-term maintenance.

    @InProceedings{estevesMEX2015,
    Title = {{MEX} {V}ocabulary: {A} {L}ightweight {I}nterchange {F}ormat for {M}achine {L}earning {E}xperiments},
    Author = {Diego Esteves and Diego Moussallem and Ciro Baron Neto and Tommaso Soru and Ricardo Usbeck and Markus Ackermann and Jens Lehmann},
    Booktitle = {11th International Conference on Semantic Systems (SEMANTiCS 2015), 15-17 September 2015, Vienna, Austria},
    Year = {2015},
    Abstract = {Over the last decades, many machine learning experiments have been published, benefiting scientific progress. In order to compare machine-learning experiment results with each other and collaborate positively, they need to be performed thoroughly on the same computing environment, using the same sample datasets and algorithm configurations. Besides this, practical experience shows that scientists and engineers tend to have large output data in their experiments, which is both difficult to analyze and archive properly without provenance metadata. However, the Linked Data community still misses a light-weight specification for interchanging machine-learning metadata over different architectures to achieve a higher level of interoperability. In this paper, we address this gap by presenting a novel vocabulary dubbed MEX. We show that MEX provides a prompt method to describe experiments with a special focus on data provenance and fulfills the requirements for long-term maintenance.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2015/SEMANTICS_MEX/public.pdf},
    Keywords = {mex simba 2015 sys:relevantFor:infai sys:relevantFor:bis aligned esteves baron usbeck group_aksw lehmann mole soru neto ackermann mack moussallem MOLE aligned-project},
    Url = {http://svn.aksw.org/papers/2015/SEMANTICS_MEX/public.pdf}
    }

  • I. Ermilov, K. Höffner, J. Lehmann, and D. Mouromtsev, “kOre: Using Linked Data for OpenScience Information Integration,” in SEMANTiCS 2015, 2015.
    [BibTeX] [Abstract] [Download PDF]
    While the amount of data on the Web grows at 57% per year, the Web of Science maintains a considerable amount of inertia, as yearly growth varies between 1.6% and 14%. On the other hand, the Web of Science consists of high-quality information created and reviewed by the international community of researchers. While it is a complicated process to switch from traditional publishing methods to methods that enable data publishing in machine-readable formats, the situation can be improved by at least exposing metadata about scientific publications in machine-readable format. In this paper we aim at metadata hidden inside universities’ internal databases, reports and other hard-to-discover sources. We extend the VIVO ontology and create the VIVO+ ontology. We define and describe a framework for automatic conversion of university data to RDF. We showcase the VIVO+ ontology and the framework using the example of ITMO University.

    @InProceedings{ermilov-i-2015-b,
    Title = {k{O}re: {U}sing {L}inked {D}ata for {O}pen{S}cience {I}nformation {I}ntegration},
    Author = {Ivan Ermilov and Konrad H\"offner and Jens Lehmann and Dmitry Mouromtsev},
    Booktitle = {SEMANTiCS 2015},
    Year = {2015},
    Abstract = {While the amount of data on the Web grows at 57% per year, the Web of Science maintains a considerable amount of inertia, as yearly growth varies between 1.6% and 14%. On the other hand, the Web of Science consists of high-quality information created and reviewed by the international community of researchers. While it is a complicated process to switch from traditional publishing methods to methods that enable data publishing in machine-readable formats, the situation can be improved by at least exposing metadata about scientific publications in machine-readable format. In this paper we aim at metadata hidden inside universities' internal databases, reports and other hard-to-discover sources. We extend the VIVO ontology and create the VIVO+ ontology. We define and describe a framework for automatic conversion of university data to RDF. We showcase the VIVO+ ontology and the framework using the example of ITMO University.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2015/SEMANTICS_Licensing/public.pdf},
    Keywords = {2015 geoknow sake iermilov lehmann hoeffner simba group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow},
    Owner = {ivan},
    Timestamp = {2015.08.12},
    Url = {http://svn.aksw.org/papers/2015/SEMANTICS_ITMOLOD_DEMO/public.pdf}
    }

  • D. Gerber, D. Esteves, J. Lehmann, L. Bühmann, R. Usbeck, A. Ngonga Ngomo, and R. Speck, “DeFacto – Temporal and Multilingual Deep Fact Validation,” Web Semantics: Science, Services and Agents on the World Wide Web, 2015.
    [BibTeX] [Abstract] [Download PDF]
    One of the main tasks when creating and maintaining knowledge bases is to validate facts and provide sources for them in order to ensure correctness and traceability of the provided knowledge. So far, this task is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming as the experts have to carry out several search processes and must often read several documents. In this article, we present DeFacto (Deep Fact Validation) – an algorithm able to validate facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of web pages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate in which time frame a fact was valid. Given that the automatic evaluation of facts has not been paid much attention to so far, generic benchmarks for evaluating these frameworks were not previously available. We thus also present a generic evaluation framework for fact checking and make it publicly available.

    @Article{gerber2015,
    Title = {De{F}acto - {T}emporal and {M}ultilingual {D}eep {F}act {V}alidation},
    Author = {Daniel Gerber and Diego Esteves and Jens Lehmann and Lorenz B{\"u}hmann and Ricardo Usbeck and Axel-Cyrille {Ngonga Ngomo} and Ren{\'e} Speck},
    Journal = {Web Semantics: Science, Services and Agents on the World Wide Web},
    Year = {2015},
    Abstract = {One of the main tasks when creating and maintaining knowledge bases is to validate facts and provide sources for them in order to ensure correctness and traceability of the provided knowledge. So far, this task is often addressed by human curators in a three-step process: issuing appropriate keyword queries for the statement to check using standard search engines, retrieving potentially relevant documents and screening those documents for relevant content. The drawbacks of this process are manifold. Most importantly, it is very time-consuming as the experts have to carry out several search processes and must often read several documents. In this article, we present DeFacto (Deep Fact Validation) - an algorithm able to validate facts by finding trustworthy sources for them on the Web. DeFacto aims to provide an effective way of validating facts by supplying the user with relevant excerpts of web pages as well as useful additional information, including a score for the confidence DeFacto has in the correctness of the input fact. To achieve this goal, DeFacto collects and combines evidence from web pages written in several languages. In addition, DeFacto provides support for facts with a temporal scope, i.e., it can estimate in which time frame a fact was valid. Given that the automatic evaluation of facts has not been paid much attention to so far, generic benchmarks for evaluating these frameworks were not previously available. We thus also present a generic evaluation framework for fact checking and make it publicly available.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2015/JWS_DeFacto/public.pdf},
    Keywords = {2015 group_aksw simba diesel defacto lehmann esteves gerber usbeck speck ngonga geoknow buehmann},
    Url = {http://svn.aksw.org/papers/2015/JWS_DeFacto/public.pdf}
    }

  • A. Garcia-Rojas, D. Hladky, M. Wauer, R. Isele, C. Stadler, and J. Lehmann, “The GeoKnow Generator Workbench: An Integration Platform for Geospatial Data,” in Proceedings of the 3rd International Workshop on Semantic Web Enterprise Adoption and Best Practice, 2015.
    [BibTeX]
    @InProceedings{wasabi_generator,
    Title = {The GeoKnow Generator Workbench: An Integration Platform for Geospatial Data},
    Author = {Alejandra Garcia-Rojas and Daniel Hladky and Matthias Wauer and Robert Isele and Claus Stadler and Jens Lehmann},
    Booktitle = {Proceedings of the 3rd International Workshop on Semantic Web Enterprise Adoption and Best Practice},
    Year = {2015},
    Keywords = {2015 ontology group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_EvolutionRepair lehmann geoknow wauer stadler}
    }

  • C. Stadler, N. Arndt, M. Martin, and J. Lehmann, “RDF Editing on the Web with REX,” in SEMANTICS 2015, 2015.
    [BibTeX] [Abstract] [Download PDF]
    While several tools for simplifying the task of visualizing (SPARQL accessible) RDF data on the Web are available today, there is a lack of corresponding tools for exploiting standard HTML forms directly for RDF editing. The few related existing systems roughly fall in the categories of (a) applications that are not aimed at being reused as components, (b) form generators, which automatically create forms from a given schema – possibly derived from instance data – or (c) form template processors which create forms from a manually created specification. Furthermore, these systems usually come with their own widget library, which can only be extended by wrapping existing widgets. In this paper, we present the AngularJS-based Rdf Edit eXtension (REX) system, which facilitates the enhancement of standard HTML forms as well as many existing AngularJS widgets with RDF editing support by means of a set of HTML attributes. We demonstrate our system through the realization of several usage scenarios.

    @InProceedings{rex_pd,
    Title = {RDF Editing on the Web with REX},
    Author = {Claus Stadler and Natanael Arndt and Michael Martin and Jens Lehmann},
    Booktitle = {SEMANTICS 2015},
    Year = {2015},
    Month = sep,
    Publisher = {ACM},
    Series = {SEM '15},
    Abstract = {While several tools for simplifying the task of visualizing (SPARQL accessible) RDF data on the Web are available today, there is a lack of corresponding tools for exploiting standard HTML forms directly for RDF editing. The few related existing systems roughly fall in the categories of (a) applications that are not aimed at being reused as components, (b) form generators, which automatically create forms from a given schema -- possibly derived from instance data -- or (c) form template processors which create forms from a manually created specification. Furthermore, these systems usually come with their own widget library, which can only be extended by wrapping existing widgets. In this paper, we present the AngularJS-based \emph{Rdf Edit eXtension} (REX) system, which facilitates the enhancement of standard HTML forms as well as many existing AngularJS widgets with RDF editing support by means of a set of HTML attributes. We demonstrate our system through the realization of several usage scenarios.},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow 2015 lehmann mole stadler arndt martin es},
    Location = {Vienna, Austria},
    Url = {http://jens-lehmann.org/files/2015/semantics_pd_rex.pdf}
    }

  • M. Fossati, D. Kontokostas, and J. Lehmann, “Unsupervised Learning of an Extensive and Usable Taxonomy for DBpedia,” in Proceedings of the 11th International Conference on Semantic Systems, 2015.
    [BibTeX]
    @InProceedings{dbtax,
    Title = {Unsupervised Learning of an Extensive and Usable Taxonomy for DBpedia},
    Author = {Fossati, Marco and Kontokostas, Dimitris and Lehmann, Jens},
    Booktitle = {Proceedings of the 11th International Conference on Semantic Systems},
    Year = {2015},
    Month = sep,
    Publisher = {ACM},
    Series = {SEM '15},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:aligned sys:relevantFor:aligned 2015 lehmann hellmann kilt kontokostas aligned-project mole},
    Location = {Vienna, Austria}
    }

  • A. Dimou, D. Kontokostas, M. Freudenberg, R. Verborgh, J. Lehmann, E. Mannens, S. Hellmann, and R. Van de Walle, “Assessing and Refining Mappings to RDF to Improve Dataset Quality,” in Proceedings of the 14th International Semantic Web Conference, 2015.
    [BibTeX]
    @InProceedings{iswc15_rml_rdfunit,
    Title = {Assessing and {R}efining {M}appings to {RDF} to {I}mprove {D}ataset {Q}uality},
    Author = {Dimou, Anastasia and Kontokostas, Dimitris and Freudenberg, Markus and Verborgh, Ruben and Lehmann, Jens and Mannens, Erik and Hellmann, Sebastian and Van de Walle, Rik},
    Booktitle = {Proceedings of the 14th {I}nternational {S}emantic {W}eb {C}onference},
    Year = {2015},
    Month = oct,
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:aligned 2015 lehmann hellmann kilt kontokostas rdfunit aligned-project mole}
    }

  • G. Vaidya, D. Kontokostas, M. Knuth, J. Lehmann, and S. Hellmann, “DBpedia Commons: Structured Multimedia Metadata from the Wikimedia Commons,” in Proceedings of the 14th International Semantic Web Conference, 2015.
    [BibTeX] [Download PDF]
    @InProceedings{dbpedia_commons,
    Title = {{DB}pedia {C}ommons: {S}tructured {M}ultimedia {M}etadata from the {W}ikimedia {C}ommons},
    Author = {Vaidya, Gaurav and Kontokostas, Dimitris and Knuth, Magnus and Lehmann, Jens and Hellmann, Sebastian},
    Booktitle = {Proceedings of the 14th {I}nternational {S}emantic {W}eb {C}onference},
    Year = {2015},
    Month = oct,
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:aligned 2015 lehmann hellmann kilt kontokostas aligned-project mole sys:relevantFor:geoknow geoknow},
    Url = {http://svn.aksw.org/papers/2015/ISWCData_DBpediaCommons/public.pdf}
    }

2014

  • J. Lehmann and J. Voelker, “An Introduction to Ontology Learning,” in Perspectives on Ontology Learning, J. Lehmann and J. Voelker, Eds., AKA / IOS Press, 2014, pp. ix-xvi.
    [BibTeX] [Download PDF]
    @InCollection{pol_introduction,
    Title = {An Introduction to Ontology Learning},
    Author = {Jens Lehmann and Johanna Voelker},
    Booktitle = {Perspectives on Ontology Learning},
    Publisher = {AKA / IOS Press},
    Year = {2014},
    Editor = {Jens Lehmann and Johanna Voelker},
    Pages = {ix-xvi},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/pol_introduction.pdf},
    Keywords = {2014 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:gold gold lehmann},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/pol_introduction.pdf}
    }

  • D. Lukovnikov, C. Stadler, D. Kontokostas, S. Hellmann, and J. Lehmann, “DBpedia Viewer – An Integrative Interface for DBpedia leveraging the DBpedia Service Eco System,” in Proc. of the Linked Data on the Web 2014 Workshop, 2014.
    [BibTeX] [Download PDF]
    @InProceedings{ldow_dbpedia_viewer,
    Title = {DBpedia Viewer - An Integrative Interface for DBpedia leveraging the DBpedia Service Eco System},
    Author = {Denis Lukovnikov and Claus Stadler and Dimitris Kontokostas and Sebastian Hellmann and Jens Lehmann},
    Booktitle = {Proc. of the Linked Data on the Web 2014 Workshop},
    Year = {2014},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/ldow_dbpedia_viewer.pdf},
    Keywords = {2014 group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_Exploration lehmann hellmann kilt kontokostas stadler},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/ldow_dbpedia_viewer.pdf}
    }

  • J. Lehmann and A. Ngonga Ngomo, “The GeoKnow Project,” in ESWC EU Project Networking Track, 2014.
    [BibTeX]
    @InProceedings{eswc_networking_geoknow,
    Title = {The {GeoKnow} Project},
    Year = {2014},
    Author = {Jens Lehmann and Axel-Cyrille {Ngonga Ngomo}},
    Booktitle = {ESWC EU Project Networking Track},
    Keywords = {2014 ngonga lehmann group_aksw group_mole sys:relevantFor:infai sys:relevantFor:geoknow MOLE}
    }

  • J. Lehmann, N. Fanizzi, L. Bühmann, and C. d’Amato, “Concept Learning,” in Perspectives on Ontology Learning, J. Lehmann and J. Voelker, Eds., AKA / IOS Press, 2014, pp. 71-91.
    [BibTeX] [Download PDF]
    @InCollection{pol_concept_learning,
    Title = {Concept Learning},
    Author = {Jens Lehmann and Nicola Fanizzi and Lorenz B{\"u}hmann and Claudia d'Amato},
    Booktitle = {Perspectives on Ontology Learning},
    Publisher = {AKA / IOS Press},
    Year = {2014},
    Editor = {Jens Lehmann and Johanna Voelker},
    Pages = {71-91},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/pol_concept_learning.pdf},
    Keywords = {2014 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:gold gold lehmann buehmann},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/pol_concept_learning.pdf}
    }

  • J. Lehmann and L. Bühmann, “Linked Data Reasoning,” in Linked Enterprise Data, X.media.press, 2014.
    [BibTeX]
    @InCollection{led_reasoning,
    Title = {Linked Data Reasoning},
    Author = {Jens Lehmann and Lorenz B{\"u}hmann},
    Booktitle = {Linked Enterprise Data},
    Publisher = {X.media.press},
    Year = {2014},
    Keywords = {2014 lehmann group_aksw group_mole sys:relevantFor:infai MOLE buehmann}
    }

  • D. Lukovnikov, C. Stadler, and J. Lehmann, “LD Viewer – Linked Data Presentation Framework,” in Proceedings of the 10th International Conference on Semantic Systems, 2014, pp. 124-131.
    [BibTeX]
    @InProceedings{ld_viewer,
    Title = {{LD} Viewer - Linked Data Presentation Framework},
    Author = {Lukovnikov, Denis and Stadler, Claus and Lehmann, Jens},
    Booktitle = {Proceedings of the 10th International Conference on Semantic Systems},
    Year = {2014},
    Organization = {ACM},
    Pages = {124--131},
    Keywords = {2014 group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_Exploration lehmann stadler}
    }

  • E. Marx, R. Usbeck, A. Ngonga Ngomo, K. Höffner, J. Lehmann, and S. Auer, “Towards an Open Question Answering Architecture,” in Proceedings of the 10th International Conference on Semantic Systems, New York, NY, USA, 2014, pp. 57-60. doi:10.1145/2660517.2660519
    [BibTeX] [Download PDF]
    @InProceedings{marx/openqa/semantics/2014,
    Title = {Towards an {O}pen {Q}uestion {A}nswering {A}rchitecture},
    Author = {Marx, Edgard and Usbeck, Ricardo and {Ngonga Ngomo}, Axel-Cyrille and H\"{o}ffner, Konrad and Lehmann, Jens and Auer, S\"{o}ren},
    Booktitle = {Proceedings of the 10th International Conference on Semantic Systems},
    Year = {2014},
    Address = {New York, NY, USA},
    Pages = {57--60},
    Publisher = {ACM},
    Series = {SEMANTICS'14},
    Acmid = {2660519},
    Doi = {10.1145/2660517.2660519},
    ISBN = {978-1-4503-2927-9},
    Keywords = {sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow marx smart simba ngonga usbeck group_aksw hoeffner lehmann openqa mole MOLE},
    Location = {Leipzig, Germany},
    Numpages = {4},
    Url = {http://doi.acm.org/10.1145/2660517.2660519}
    }

  • M. A. Sherif, S. Coelho, R. Usbeck, S. Hellmann, J. Lehmann, M. Brümmer, and A. Both, “NIF4OGGD – NLP Interchange Format for Open German Governmental Data,” in The 9th edition of the Language Resources and Evaluation Conference, 26-31 May, Reykjavik, Iceland, 2014.
    [BibTeX] [Abstract] [Download PDF]
    In the last couple of years the amount of structured open government data has increased significantly. Already now, citizens are able to leverage the advantages of open data through increased transparency and better opportunities to take part in governmental decision making processes. Our approach increases the interoperability of existing but distributed open governmental datasets by converting them to the RDF-based NLP Interchange Format (NIF). Furthermore, we integrate the converted data into a geodata store and present a user interface for querying this data via a keyword-based search. The language resource generated in this project is publicly available for download and via a dedicated SPARQL endpoint.

    @InProceedings{NIF4OGGD,
    Title = {NIF4OGGD - NLP Interchange Format for Open German Governmental Data},
    Author = {Sherif, Mohamed A. and Coelho, Sandro and Usbeck, Ricardo and Hellmann, Sebastian and Lehmann, Jens and Br{\"u}mmer, Martin and Both, Andreas},
    Booktitle = {The 9th edition of the Language Resources and Evaluation Conference, 26-31 May, Reykjavik, Iceland},
    Year = {2014},
    Abstract = {In the last couple of years the amount of structured open government data has increased significantly. Already now, citizens are able to leverage the advantages of open data through increased transparency and better opportunities to take part in governmental decision making processes. Our approach increases the interoperability of existing but distributed open governmental datasets by converting them to the RDF-based NLP Interchange Format (NIF). Furthermore, we integrate the converted data into a geodata store and present a user interface for querying this data via a keyword-based search. The language resource generated in this project is publicly available for download and via a dedicated SPARQL endpoint.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2014/LREC_NIF4OGGD/public.pdf},
    Keywords = {sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow sherif hellmann kilt lehmann usbeck bruemmer nif4oggd group_aksw Lidmole MOLE simba},
    Url = {http://svn.aksw.org/papers/2014/LREC_NIF4OGGD/public.pdf}
    }

  • P. Westphal, C. Stadler, and J. Lehmann, “Quality Assurance of RDB2RDF Mappings,” 2014.
    [BibTeX] [Download PDF]
    @TechReport{rdb2rdf_qa,
    Title = {Quality Assurance of RDB2RDF Mappings},
    Author = {Patrick Westphal and Claus Stadler and Jens Lehmann},
    Year = {2014},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2014/report_QA_RDB2RDF/public.pdf},
    Institution = {University of Leipzig},
    Keywords = {2014 group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis lehmann westphal stadler},
    Url = {http://svn.aksw.org/papers/2014/report_QA_RDB2RDF/public.pdf}
    }

  • A. Rula, M. Palmonari, A. Ngonga Ngomo, D. Gerber, J. Lehmann, and L. Bühmann, “Hybrid Acquisition of Temporal Scopes for RDF Data,” in Proc. of the Extended Semantic Web Conference 2014, 2014.
    [BibTeX] [Download PDF]
    @InProceedings{eswc_temporal_scopes,
    Title = {Hybrid Acquisition of Temporal Scopes for {RDF} Data},
    Author = {Anisa Rula and Matteo Palmonari and Axel-Cyrille {Ngonga Ngomo} and Daniel Gerber and Jens Lehmann and Lorenz B{\"u}hmann},
    Booktitle = {Proc. of the Extended Semantic Web Conference 2014},
    Year = {2014},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/eswc_temporal_scoping.pdf},
    Keywords = {group_aksw MOLE SIMBA sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow lod2page 2014 lehmann ngonga gerber buehmann},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/eswc_temporal_scoping.pdf}
    }

  • S. Pokharel, M. A. Sherif, and J. Lehmann, “Ontology Based Data Access and Integration for Improving the Effectiveness of Farming in Nepal,” in Proc. of the International Conference on Web Intelligence, 2014.
    [BibTeX] [Abstract] [Download PDF]
    It is widely accepted that food supply and quality are major problems in the 21st century. Due to the growth of the world’s population, there is a pressing need to improve the productivity of agricultural crops, which hinges on different factors such as geographical location, soil type, weather condition and particular attributes of the crops to plant. In many regions of the world, information about those factors is not readily accessible and dispersed across a multitude of different sources. One of those regions is Nepal, in which the lack of access to this knowledge poses a significant burden for agricultural planning and decision making. Making such knowledge more accessible can boot up a farmer’s living standard and increase their competitiveness on national and global markets. In this article, we show how we converted several available, although not easily accessible, datasets to RDF, thereby lowering the barrier for data re-usage and integration. We describe the conversion, linking, and publication process as well as use cases, which can be implemented using the farming datasets in Nepal.

    @InProceedings{wi_farming_nepal,
    Title = {Ontology Based Data Access and Integration for Improving the Effectiveness of Farming in Nepal},
    Author = {Suresh Pokharel and Mohamed Ahmed Sherif and Jens Lehmann},
    Booktitle = {Proc. of the International Conference on Web Intelligence},
    Year = {2014},
    Abstract = {It is widely accepted that food supply and quality are major problems in the 21st century. Due to the growth of the world's population, there is a pressing need to improve the productivity of agricultural crops, which hinges on different factors such as geographical location, soil type, weather condition and particular attributes of the crops to plant. In many regions of the world, information about those factors is not readily accessible and dispersed across a multitude of different sources. One of those regions is Nepal, in which the lack of access to this knowledge poses a significant burden for agricultural planning and decision making. Making such knowledge more accessible can boot up a farmer's living standard and increase their competitiveness on national and global markets. In this article, we show how we converted several available, although not easily accessible, datasets to RDF, thereby lowering the barrier for data re-usage and integration. We describe the conversion, linking, and publication process as well as use cases, which can be implemented using the farming datasets in Nepal.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2014/WI2014_agriNepalData/public.pdf},
    Keywords = {group_aksw MOLE 2014 sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_geospatial lehmann sherif},
    Url = {http://svn.aksw.org/papers/2014/WI2014_agriNepalData/public.pdf}
    }

  • A. Ngonga Ngomo, S. Auer, J. Lehmann, and A. Zaveri, “Introduction to Linked Data and Its Lifecycle on the Web,” in Reasoning Web, 2014.
    [BibTeX]
    @InProceedings{rw2014,
    Title = {Introduction to Linked Data and Its Lifecycle on the Web},
    Author = {Axel-Cyrille {Ngonga Ngomo} and S{\"o}ren Auer and Jens Lehmann and Amrapali Zaveri},
    Booktitle = {Reasoning Web},
    Year = {2014},
    Date-modified = {2015-02-06 06:56:48 +0000},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_GeoSemWeb SIMBA zaveri ngonga auer lehmann MOLE simba 2014 limes dataquality},
    Owner = {ngonga},
    Timestamp = {2015.01.28}
    }

  • D. Kontokostas, P. Westphal, S. Auer, S. Hellmann, J. Lehmann, R. Cornelissen, and A. Zaveri, “Test-driven Evaluation of Linked Data Quality,” in Proceedings of the 23rd International Conference on World Wide Web, 2014, pp. 747-758. doi:10.1145/2566486.2568002
    [BibTeX] [Abstract] [Download PDF]
    Linked Open Data (LOD) comprises of an unprecedented volume of structured data on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowd-sourced or extracted data of often relatively low quality. We present a methodology for test-driven quality assessment of Linked Data, which is inspired by test-driven software development. We argue, that vocabularies, ontologies and knowledge bases should be accompanied by a number of test cases, which help to ensure a basic level of quality. We present a methodology for assessing the quality of linked data resources, based on a formalization of bad smells and data quality problems. Our formalization employs SPARQL query templates, which are instantiated into concrete quality test case queries. Based on an extensive survey, we compile a comprehensive library of data quality test case patterns. We perform automatic test case instantiation based on schema constraints or semi-automatically enriched schemata and allow the user to generate specific test case instantiations that are applicable to a schema or dataset. We provide an extensive evaluation of five LOD datasets, manual test case instantiation for five schemas and automatic test case instantiations for all available schemata registered with LOV. One of the main advantages of our approach is that domain specific semantics can be encoded in the data quality test cases, thus being able to discover data quality problems beyond conventional quality heuristics.

    @InProceedings{kontokostasDatabugger,
    Title = {Test-driven Evaluation of Linked Data Quality},
    Author = {Kontokostas, Dimitris and Westphal, Patrick and Auer, S\"{o}ren and Hellmann, Sebastian and Lehmann, Jens and Cornelissen, Roland and Zaveri, Amrapali},
    Booktitle = {Proceedings of the 23rd International Conference on World Wide Web},
    Year = {2014},
    Pages = {747--758},
    Publisher = {International World Wide Web Conferences Steering Committee},
    Series = {WWW '14},
    Abstract = {Linked Open Data (LOD) comprises of an unprecedented volume of structured data on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowd-sourced or extracted data of often relatively low quality. We present a methodology for test-driven quality assessment of Linked Data, which is inspired by test-driven software development. We argue, that vocabularies, ontologies and knowledge bases should be accompanied by a number of test cases, which help to ensure a basic level of quality. We present a methodology for assessing the quality of linked data resources, based on a formalization of bad smells and data quality problems. Our formalization employs SPARQL query templates, which are instantiated into concrete quality test case queries. Based on an extensive survey, we compile a comprehensive library of data quality test case patterns. We perform automatic test case instantiation based on schema constraints or semi-automatically enriched schemata and allow the user to generate specific test case instantiations that are applicable to a schema or dataset. We provide an extensive evaluation of five LOD datasets, manual test case instantiation for five schemas and automatic test case instantiations for all available schemata registered with LOV. One of the main advantages of our approach is that domain specific semantics can be encoded in the data quality test cases, thus being able to discover data quality problems beyond conventional quality heuristics.},
    Acmid = {2568002},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2014/WWW_Databugger/public.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1145/2566486.2568002},
    Date-modified = {2015-02-06 06:56:57 +0000},
    Doi = {10.1145/2566486.2568002},
    ISBN = {978-1-4503-2744-2},
    Keywords = {2014 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_QualityAnalysis lod2page lehmann kontokostas rdfunit dataquality westphal},
    Location = {Seoul, Korea},
    Numpages = {12},
    Timestamp = {2014.01.23},
    Url = {http://svn.aksw.org/papers/2014/WWW_Databugger/public.pdf}
    }

  • D. Kontokostas, P. Westphal, S. Auer, S. Hellmann, J. Lehmann, and R. Cornelissen, “Databugger: A Test-driven Framework for Debugging the Web of Data,” in Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web Companion, 2014, pp. 115-118. doi:10.1145/2567948.2577017
    [BibTeX] [Abstract] [Download PDF]
    Linked Open Data (LOD) comprises of an unprecedented volume of structured data on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowd-sourced or extracted data of often relatively low quality. We present Databugger, a framework for test-driven quality assessment of Linked Data, which is inspired by test-driven software development. Databugger ensures a basic level of quality by accompanying vocabularies, ontologies and knowledge bases with a number of test cases. The formalization behind the tool employs SPARQL query templates, which are instantiated into concrete quality test queries. The test queries can be instantiated automatically based on a vocabulary or manually based on the data semantics. One of the main advantages of our approach is that domain specific semantics can be encoded in the data quality test cases, thus being able to discover data quality problems beyond conventional quality heuristics.

    @InProceedings{databugger_demo,
    Title = {Databugger: A Test-driven Framework for Debugging the Web of Data},
    Author = {Kontokostas, Dimitris and Westphal, Patrick and Auer, S\"{o}ren and Hellmann, Sebastian and Lehmann, Jens and Cornelissen, Roland},
    Booktitle = {Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web Companion},
    Year = {2014},
    Pages = {115--118},
    Publisher = {International World Wide Web Conferences Steering Committee},
    Series = {WWW Companion '14},
    Abstract = {Linked Open Data (LOD) comprises of an unprecedented volume of structured data on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowd-sourced or extracted data of often relatively low quality. We present Databugger, a framework for test-driven quality assessment of Linked Data, which is inspired by test-driven software development. Databugger ensures a basic level of quality by accompanying vocabularies, ontologies and knowledge bases with a number of test cases. The formalization behind the tool employs SPARQL query templates, which are instantiated into concrete quality test queries. The test queries can be instantiated automatically based on a vocabulary or manually based on the data semantics. One of the main advantages of our approach is that domain specific semantics can be encoded in the data quality test cases, thus being able to discover data quality problems beyond conventional quality heuristics.},
    Acmid = {2577017},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/www_demo_databugger.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1145/2567948.2577017},
    Doi = {10.1145/2567948.2577017},
    ISBN = {978-1-4503-2745-9},
    Keywords = {2014 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_EvolutionRepair lod2page lehmann kontokostas rdfunit westphal},
    Location = {Seoul, Korea},
    Numpages = {4},
    Url = {http://jens-lehmann.org/files/2014/www_demo_databugger.pdf}
    }

  • D. Cherix, R. Usbeck, A. Both, and J. Lehmann, “Lessons Learned—the Case of CROCUS: Cluster-based ontology data cleansing,” in ESWC Best of Workshops, 2014.
    [BibTeX] [Download PDF]
    @InProceedings{ll_wasabi_crocus,
    Title = {Lessons Learned---the Case of {CROCUS}: Cluster-based ontology data cleansing},
    Author = {Didier Cherix and Ricardo Usbeck and Andreas Both and Jens Lehmann},
    Booktitle = {ESWC Best of Workshops},
    Year = {2014},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2014/ESWC_Crocus_Best_Of_Workshops/public.pdf},
    Keywords = {sys:relevantFor:infai sys:relevantFor:bis lehmann usbeck group_aksw sys:relevantFor:geoknow MOLE simba},
    Url = {http://svn.aksw.org/papers/2014/ESWC_Crocus_Best_Of_Workshops/public.pdf}
    }

  • D. Cherix, R. Usbeck, A. Both, and J. Lehmann, “CROCUS: Cluster-based ontology data cleansing,” in Proceedings of the 2nd International Workshop on Semantic Web Enterprise Adoption and Best Practice, 2014.
    [BibTeX] [Download PDF]
    @InProceedings{wasabi_crocus,
    Title = {{CROCUS}: Cluster-based ontology data cleansing},
    Author = {Didier Cherix and Ricardo Usbeck and Andreas Both and Jens Lehmann},
    Booktitle = {Proceedings of the 2nd International Workshop on Semantic Web Enterprise Adoption and Best Practice},
    Year = {2014},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/wasabi_crocus.pdf},
    Keywords = {2014 ontology group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_EvolutionRepair lehmann usbeck simba},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/wasabi_crocus.pdf}
    }

  • F. Badie, T. Soru, and J. Lehmann, “A Fuzzy Knowledge Representation Model for Student Performance Assessment,” in 2014 IEEE 14th International Conference on Advanced Learning Technologies (ICALT), 2014.
    [BibTeX] [Download PDF]
    @InProceedings{badie14,
    Title = {A Fuzzy Knowledge Representation Model for Student Performance Assessment},
    Author = {Farshad Badie and Tommaso Soru and Jens Lehmann},
    Booktitle = {2014 IEEE 14th International Conference on Advanced Learning Technologies (ICALT)},
    Year = {2014},
    Publisher = {IEEE},
    Bdsk-url-1 = {https://www.researchgate.net/publication/262763990_A_Fuzzy_Knowledge_Representation_Model_for_Student_Performance_Assessment},
    Keywords = {badie soru lehmann group_aksw 2014 MOLE},
    Owner = {tsoru},
    Url = {https://www.researchgate.net/publication/262763990_A_Fuzzy_Knowledge_Representation_Model_for_Student_Performance_Assessment}
    }

  • L. Bühmann, D. Fleischhacker, J. Lehmann, A. Melo, and J. Völker, “Inductive Lexical Learning of Class Expressions,” in Knowledge Engineering and Knowledge Management, 2014, pp. 42-53. doi:10.1007/978-3-319-13704-9_4
    [BibTeX] [Download PDF]
    @InProceedings{Buehmann2014,
    Title = {Inductive Lexical Learning of Class Expressions},
    Author = {B{\"u}hmann, Lorenz and Fleischhacker, Daniel and Lehmann, Jens and Melo, Andre and V{\"o}lker, Johanna},
    Booktitle = {Knowledge Engineering and Knowledge Management},
    Year = {2014},
    Editor = {Janowicz, Krzysztof and Schlobach, Stefan and Lambrix, Patrick and Hyv{\"o}nen, Eero},
    Pages = {42-53},
    Publisher = {Springer International Publishing},
    Series = {Lecture Notes in Computer Science},
    Volume = {8876},
    Bdsk-url-1 = {http://dx.doi.org/10.1007/978-3-319-13704-9_4},
    Doi = {10.1007/978-3-319-13704-9_4},
    ISBN = {978-3-319-13703-2},
    Keywords = {2014 group_aksw event_ekaw group_mole mole buehmann lehmann dllearner ore sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lmol MOLE},
    Language = {English},
    Owner = {lorenz},
    Timestamp = {2014.11.23},
    Url = {http://dx.doi.org/10.1007/978-3-319-13704-9_4}
    }

  • S. Athanasiou, D. Hladky, G. Giannopoulos, A. García-Rojas, and J. Lehmann, “GeoKnow: Making the Web an Exploratory Place for Geospatial Knowledge,” ERCIM News, vol. 2014, iss. 96, 2014.
    [BibTeX] [Download PDF]
    @Article{geoknow_ercim,
    Title = {{GeoKnow}: Making the Web an Exploratory Place for Geospatial Knowledge},
    Author = {Spiros Athanasiou and Daniel Hladky and Giorgos Giannopoulos and Alejandra Garc\'{\i}a-Rojas and Jens Lehmann},
    Journal = {ERCIM News},
    Year = {2014},
    Number = {96},
    Volume = {2014},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/ercim_geoknow.pdf},
    Bibsource = {DBLP, http://dblp.uni-trier.de},
    Ee = {http://ercim-news.ercim.eu/en96/special/geoknow-making-the-web-an-exploratory-place-for-geospatial-knowledge},
    Keywords = {2014 group_aksw dllearner geoknow topic_geospatial MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow lehmann},
    Url = {http://jens-lehmann.org/files/2014/ercim_geoknow.pdf}
    }

  • C. Dirschl, K. Eck, and J. Lehmann, “Supporting the Data Lifecycle at a Global Publisher using the Linked Data Stack,” ERCIM News, vol. 2014, iss. 96, 2014.
    [BibTeX] [Download PDF]
    @Article{wkd_ercim,
    Title = {Supporting the Data Lifecycle at a Global Publisher using the Linked Data Stack},
    Author = {Christian Dirschl and Katja Eck and Jens Lehmann},
    Journal = {ERCIM News},
    Year = {2014},
    Number = {96},
    Volume = {2014},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/ercim_wkd.pdf},
    Bibsource = {DBLP, http://dblp.uni-trier.de},
    Ee = {http://ercim-news.ercim.eu/en96/special/supporting-the-data-lifecycle-at-a-global-publisher-using-the-linked-data-stack},
    Keywords = {2014 group_aksw dllearner MOLE interlinking, fusion, enrichment, quality analysis, querying, authoring, lod, rdf, semanticweb, sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_Lifecycle lod2page lehmann MOLE},
    Url = {http://jens-lehmann.org/files/2014/ercim_wkd.pdf}
    }

  • J. J. Le Grange, J. Lehmann, S. Athanasiou, A. G. Rojas, G. Giannopoulos, D. Hladky, R. Isele, A. Ngonga Ngomo, M. A. Sherif, C. Stadler, and M. Wauer, “The GeoKnow Generator: Managing Geospatial Data in the Linked Data Web,” in Proceedings of the Linking Geospatial Data Workshop, 2014.
    [BibTeX] [Download PDF]
    @InProceedings{lgd_geoknow_generator,
    Title = {The GeoKnow Generator: Managing Geospatial Data in the Linked Data Web},
    Author = {Jon Jay {Le Grange} and Jens Lehmann and Spiros Athanasiou and Alejandra Garcia Rojas and Giorgos Giannopoulos and Daniel Hladky and Robert Isele and Axel-Cyrille {Ngonga Ngomo} and Mohamed Ahmed Sherif and Claus Stadler and Matthias Wauer},
    Booktitle = {Proceedings of the Linking Geospatial Data Workshop},
    Year = {2014},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/lgd_geoknow_generator.pdf},
    Keywords = {2014 group_aksw group_mole mole ngonga lehmann sherif topic_Lifecycle sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow geoknow lod lod2page peer-reviewed MOLE simba wauer stadler},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/lgd_geoknow_generator.pdf}
    }

  • D. Kontokostas, M. Brümmer, S. Hellmann, J. Lehmann, and L. Ioannidis, “NLP data cleansing based on Linguistic Ontology constraints,” in Proc. of the Extended Semantic Web Conference 2014, 2014.
    [BibTeX] [Download PDF]
    @InProceedings{eswc_rdfunit_nlp,
    Title = {NLP data cleansing based on Linguistic Ontology constraints},
    Author = {Dimitris Kontokostas and Martin Br{\"u}mmer and Sebastian Hellmann and Jens Lehmann and Lazaros Ioannidis},
    Booktitle = {Proc. of the Extended Semantic Web Conference 2014},
    Year = {2014},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/eswc_rdfunit_nlp.pdf},
    Keywords = {group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_EvolutionRepair lod2page 2014 lehmann hellmann kilt kontokostas rdfunit Lidmole mole},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/eswc_rdfunit_nlp.pdf}
    }

  • P. Hitzler, J. Lehmann, and A. Polleres, “Logics for the Semantic Web,” in Logic and Computation, Elsevier, 2014, vol. 9.
    [BibTeX] [Download PDF]
    @InCollection{hhl_logics_semantic_web,
    Title = {Logics for the Semantic Web},
    Author = {Pascal Hitzler and Jens Lehmann and Axel Polleres},
    Booktitle = {Logic and Computation},
    Publisher = {Elsevier},
    Year = {2014},
    Series = {Handbook of the History of Logic},
    Volume = {9},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2014/hhl_logics_semantic_web.pdf},
    Keywords = {2014 MOLE group_aksw lehmann sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 topic_Classification sys:relevantFor:geoknow lod2page peer-reviewed},
    Owner = {jl},
    Timestamp = {2014.04.12},
    Url = {http://jens-lehmann.org/files/2014/hhl_logics_semantic_web.pdf}
    }

  • S. Hellmann, V. Bryl, L. Bühmann, M. Dojchinovski, D. Kontokostas, J. Lehmann, U. Milošević, P. Petrovski, V. Svátek, M. Stanojević, and O. Zamazal, “Knowledge Base Creation, Enrichment and Repair,” in Linked Open Data–Creating Knowledge Out of Interlinked Data, Springer, 2014, pp. 45-69.
    [BibTeX] [Download PDF]
    @InCollection{lod2_wp3,
    Title = {Knowledge Base Creation, Enrichment and Repair},
    Author = {Hellmann, Sebastian and Bryl, Volha and B{\"u}hmann, Lorenz and Dojchinovski, Milan and Kontokostas, Dimitris and Lehmann, Jens and Milo{\v{s}}evi{\'c}, Uro{\v{s}} and Petrovski, Petar and Sv{\'a}tek, Vojt{\v{e}}ch and Stanojevi{\'c}, Mladen and Zamazal, Ondrej},
    Booktitle = {Linked Open Data--Creating Knowledge Out of Interlinked Data},
    Publisher = {Springer},
    Year = {2014},
    Pages = {45--69},
    Keywords = {2014 group_aksw group_mole mole hellmann kilt sys:relevantFor:infai sys:relevantFor:bis peer-reviewed lehmann MOLE dojchinovski kontokostas buehmann},
    Url = {http://link.springer.com/chapter/10.1007%2F978-3-319-09846-3_3}
    }

  • K. Höffner and J. Lehmann, “Towards Question Answering on Statistical Linked Data,” in Proceedings of the 10th International Conference on Semantic Systems, New York, USA, 2014, pp. 61-64. doi:10.1145/2660517.2660521
    [BibTeX] [Download PDF]
    @InProceedings{cubeqashort,
    Title = {Towards {Q}uestion {A}nswering on Statistical {L}inked {D}ata},
    Author = {H\"{o}ffner, Konrad and Lehmann, Jens},
    Booktitle = {Proceedings of the 10th International Conference on Semantic Systems},
    Year = {2014},
    Address = {New York, USA},
    Pages = {61--64},
    Publisher = {Association for Computing Machinery},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2014/cubeqa/short/public.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1145/2660517.2660521},
    Doi = {10.1145/2660517.2660521},
    ISBN = {978-1-4503-2927-9},
    Keywords = {group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis topic_Search topic_Querying lehmann hoeffner 2014 cubeqa},
    Location = {Leipzig, Germany},
    Numpages = {4},
    Url = {http://svn.aksw.org/papers/2014/cubeqa/short/public.pdf}
    }

  • C. Stadler, P. Westphal, and J. Lehmann, “Jassa – A JavaScript suite for SPARQL-based faceted search,” in Proceedings of the ISWC Developers Workshop 2014, co-located with the 13th International Semantic Web Conference (ISWC 2014), Riva del Garda, Italy, October 19, 2014., 2014, pp. 31-36.
    [BibTeX] [Download PDF]
    @InProceedings{jassa,
    Title = {Jassa - {A} JavaScript suite for SPARQL-based faceted search},
    Author = {Claus Stadler and Patrick Westphal and Jens Lehmann},
    Booktitle = {Proceedings of the {ISWC} Developers Workshop 2014, co-located with the 13th International Semantic Web Conference {(ISWC} 2014), Riva del Garda, Italy, October 19, 2014.},
    Year = {2014},
    Pages = {31--36},
    Bdsk-url-1 = {http://ceur-ws.org/Vol-1268/paper6.pdf},
    Bibsource = {dblp computer science bibliography, http://dblp.org},
    Biburl = {http://dblp.uni-trier.de/rec/bib/conf/semweb/StadlerWL14},
    Crossref = {DBLP:conf/semweb/2014dev},
    Keywords = {2014 group_aksw geoknow topic_geospatial MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow lehmann westphal stadler},
    Timestamp = {Mon, 27 Oct 2014 20:39:35 +0100},
    Url = {http://ceur-ws.org/Vol-1268/paper6.pdf}
    }

2013

  • A. Ngonga Ngomo, L. Bühmann, C. Unger, J. Lehmann, and D. Gerber, “SPARQL2NL – Verbalizing SPARQL queries,” in Proc. of WWW 2013 Demos, 2013, pp. 329-332.
    [BibTeX] [Download PDF]
    @InProceedings{sparql2nl-demo,
    Title = {SPARQL2NL - Verbalizing SPARQL queries},
    Author = {Axel-Cyrille {Ngonga Ngomo} and Lorenz B{\"u}hmann and Christina Unger and Jens Lehmann and Daniel Gerber},
    Booktitle = {Proc. of WWW 2013 Demos},
    Year = {2013},
    Pages = {329-332},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/www_demo_sparql2nl.pdf},
    Keywords = {2013 MOLE group_aksw lehmann ngonga buehmann sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_Exploration lod2page geoknow peer-reviewed bioasq},
    Owner = {jl},
    Timestamp = {2013.04.27},
    Url = {http://jens-lehmann.org/files/2013/www_demo_sparql2nl.pdf}
    }

  • A. Ngonga Ngomo, J. Lehmann, and M. Hassan, “Transfer Learning of Link Specifications,” in Seventh IEEE International Conference on Semantic Computing (ICSC), 2013.
    [BibTeX] [Download PDF]
    @InProceedings{tl-icsc,
    Title = {Transfer Learning of Link Specifications},
    Author = {Axel-Cyrille {Ngonga Ngomo} and Jens Lehmann and Mofeed Hassan},
    Booktitle = {Seventh IEEE International Conference on Semantic Computing (ICSC)},
    Year = {2013},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/ICSC_TransferLearning/public.pdf},
    Keywords = {2013 group_aksw group_mole MOLE SIMBA lehmann ngonga hassan sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_Interlinking peer-reviewed},
    Url = {http://svn.aksw.org/papers/2013/ICSC_TransferLearning/public.pdf}
    }

  • M. Martin, C. Stadler, P. Frischmuth, and J. Lehmann, “Increasing the Financial Transparency of European Commission Project Funding,” Semantic Web Journal, vol. Special Call for Linked Dataset descriptions, iss. 2, pp. 157-164, 2013.
    [BibTeX] [Abstract] [Download PDF]
    The Financial Transparency System (FTS) of the European Commission contains information about grants for European Union projects starting from 2007. It allows users to get an overview on EU funding, including information on beneficiaries as well as the amount and type of expenditure and information on the responsible EU department. The original dataset is freely available on the European Commission website, where users can query the data using an HTML form and download it in CSV and most recently XML format. In this article, we describe the transformation of this data to RDF and its interlinking with other datasets. We show that this allows interesting queries over the data, which were very difficult without this conversion. The main benefit of the dataset is an increased financial transparency of EU project funding. The RDF version of the FTS dataset will become part of the EU Open Data Portal and eventually be hosted and maintained by the European Union itself.

    @Article{martin-fts,
    Title = {Increasing the Financial Transparency of European Commission Project Funding},
    Author = {Michael Martin and Claus Stadler and Philipp Frischmuth and Jens Lehmann},
    Journal = {Semantic Web Journal},
    Year = {2013},
    Number = {2},
    Pages = {157-164},
    Volume = {Special Call for Linked Dataset descriptions},
    Abstract = {The Financial Transparency System (FTS) of the European Commission contains information about grants for European Union projects starting from 2007. It allows users to get an overview on EU funding, including information on beneficiaries as well as the amount and type of expenditure and information on the responsible EU department. The original dataset is freely available on the European Commission website, where users can query the data using an HTML form and download it in CSV and most recently XML format. In this article, we describe the transformation of this data to RDF and its interlinking with other datasets. We show that this allows interesting queries over the data, which were very difficult without this conversion. The main benefit of the dataset is an increased financial transparency of EU project funding. The RDF version of the FTS dataset will become part of the EU Open Data Portal and eventually be hosted and maintained by the European Union itself.},
    Bdsk-url-1 = {http://www.semantic-web-journal.net/system/files/swj435.pdf},
    Ee = {http://dx.doi.org/10.3233/SW-130116},
    Keywords = {2013 ES MOLE group_aksw martin lehmann auer stadler sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow lod2page peer-reviewed fts ontowiki frischmuth},
    Owner = {micha},
    Url = {http://www.semantic-web-journal.net/system/files/swj435.pdf}
    }

  • K. Lyko, K. Höffner, R. Speck, A. Ngonga Ngomo, and J. Lehmann, “SAIM—One Step Closer to Zero-Configuration Link Discovery,” in Proc. of the Extended Semantic Web Conference Posters & Demos, 2013.
    [BibTeX] [Download PDF]
    @InProceedings{Lyko2013,
    Title = {{SAIM}---{O}ne Step Closer to Zero-Configuration Link Discovery},
    Author = {Klaus Lyko and Konrad H\"offner and Ren\'e Speck and Axel-Cyrille {Ngonga Ngomo} and Jens Lehmann},
    Booktitle = {Proc. of the Extended Semantic Web Conference Posters \& Demos},
    Year = {2013},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/eswc_demo_saim.pdf},
    Keywords = {2013 group_aksw group_mole mole ngonga lehmann sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 topic_Interlinking sys:relevantFor:geoknow geoknow lod2page peer-reviewed MOLE simba limes speck lyko scms hoeffner},
    Owner = {jl},
    Timestamp = {2013.04.27},
    Url = {http://jens-lehmann.org/files/2013/eswc_demo_saim.pdf}
    }

  • A. Ngonga Ngomo, L. Bühmann, C. Unger, J. Lehmann, and D. Gerber, “Sorry, I don’t speak SPARQL — Translating SPARQL Queries into Natural Language,” in Proceedings of WWW, 2013.
    [BibTeX] [Download PDF]
    @InProceedings{NGO+13a,
    Title = {Sorry, I don't speak SPARQL --- Translating SPARQL Queries into Natural Language},
    Author = {Axel-Cyrille {Ngonga Ngomo} and Lorenz B{\"u}hmann and Christina Unger and Jens Lehmann and Daniel Gerber},
    Booktitle = {Proceedings of WWW},
    Year = {2013},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/www_sparql2nl.pdf},
    Keywords = {2013 group_aksw SIMBA MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 ngonga lehmann buehmann gerber bioasq},
    Owner = {ngonga},
    Timestamp = {2013.03.09},
    Url = {http://jens-lehmann.org/files/2013/www_sparql2nl.pdf}
    }

  • S. Shekarpour, K. Höffner, J. Lehmann, and S. Auer, “Keyword Query Expansion on Linked Data Using Linguistic and Semantic Features,” in 7th IEEE International Conference on Semantic Computing, September 16-18, 2013, Irvine, California, USA, 2013, pp. 191-197.
    [BibTeX] [Abstract] [Download PDF]
    Effective search in structured information based on textual user input is of high importance in thousands of applications. Query expansion methods augment the original query of a user with alternative query elements with similar meaning to increase the chance of retrieving appropriate resources. In this work, we introduce a number of new query expansion features based on semantic and linguistic inferencing over Linked Open Data. We evaluate the effectiveness of each feature individually as well as their combinations employing several machine learning approaches. The evaluation is carried out on a training dataset extracted from the QALD Question Answering benchmark. Furthermore, we propose an optimized linear combination of linguistic and lightweight semantic features in order to predict the usefulness of each expansion candidate. Our experimental study shows a considerable improvement in precision and recall over baseline approaches.

    @InProceedings{ICSC2013Expansion,
    Title = {Keyword Query Expansion on Linked Data Using Linguistic and Semantic Features},
    Author = {Saeedeh Shekarpour and Konrad H{\"o}ffner and Jens Lehmann and S{\"o}ren Auer},
    Booktitle = {7th IEEE International Conference on Semantic Computing, September 16-18, 2013, Irvine, California, USA},
    Year = {2013},
    Pages = {191-197},
    Abstract = {Effective search in structured information based on textual user input is of high importance in thousands of applications. Query expansion methods augment the original query of a user with alternative query elements with similar meaning to increase the chance of retrieving appropriate resources. In this work, we introduce a number of new query expansion features based on semantic and linguistic inferencing over Linked Open Data. We evaluate the effectiveness of each feature individually as well as their combinations employing several machine learning approaches. The evaluation is carried out on a training dataset extracted from the QALD Question Answering benchmark. Furthermore, we propose an optimized linear combination of linguistic and lightweight semantic features in order to predict the usefulness of each expansion candidate. Our experimental study shows a considerable improvement in precision and recall over baseline approaches.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/ISWC2013_QueryExpansion/public.pdf},
    Ee = {http://doi.ieeecomputersociety.org/10.1109/ICSC.2013.41},
    Keywords = {shekarpour auer hoeffner lehmann group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2013 event_ICSC sys:relevantFor:geoknow topic_Search MOLE},
    Owner = {soeren},
    Timestamp = {2013.06.01},
    Url = {http://svn.aksw.org/papers/2013/ISWC2013_QueryExpansion/public.pdf}
    }

  • A. Zaveri, K. Nowick, and J. Lehmann, “Towards Biomedical Data Integration for Analyzing the Evolution of Cognition,” in Proceedings of the Ontology and Data in Life Sciences Workshop (ODLS) (to appear), 2013.
    [BibTeX] [Abstract] [Download PDF]
    Cognition is determined by function and interplay of several hundred if not thousand genes with a considerable overlap in the phenotypes and genes causing different cognitive diseases. We argue that these diseases should not be studied in isolation, but that data allowing to study them should be integrated. Ultimately, this will allow researchers to more easily answer questions, which would otherwise require time-consuming research. Specifically, we propose to use Linked Data publication, data integration and querying methods, which has been successfully used in other life science domains. In this initial effort, we converted 12 different datasets, integrate them and provide a first demonstration of the added value by showing how a set of relevant queries over the integrated data can be answered.

    @InProceedings{zaveri2013a,
    Title = {Towards Biomedical Data Integration for Analyzing the Evolution of Cognition},
    Author = {Amrapali Zaveri and Katja Nowick and Jens Lehmann},
    Booktitle = {Proceedings of the Ontology and Data in Life Sciences Workshop (ODLS)},
    Note = {To appear},
    Year = {2013},
    Abstract = {Cognition is determined by function and interplay of several hundred if not thousand genes with a considerable overlap in the phenotypes and genes causing different cognitive diseases. We argue that these diseases should not be studied in isolation, but that data allowing to study them should be integrated. Ultimately, this will allow researchers to more easily answer questions, which would otherwise require time-consuming research. Specifically, we propose to use Linked Data publication, data integration and querying methods, which has been successfully used in other life science domains. In this initial effort, we converted 12 different datasets, integrate them and provide a first demonstration of the added value by showing how a set of relevant queries over the integrated data can be answered.},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/odls_cogevo.pdf},
    Date-added = {2013-06-04 19:23:09 +0000},
    Date-modified = {2013-07-11 19:42:46 +0000},
    Keywords = {2013 zaveri nowick lehmann group_aksw sys:relevantFor:infai sys:relevantFor:bis MOLE cogevo},
    Owner = {Amrapali},
    Url = {http://jens-lehmann.org/files/2013/odls_cogevo.pdf}
    }

  • A. Zaveri, J. Lehmann, S. Auer, M. M. Hassan, M. A. Sherif, and M. Martin, “Publishing and Interlinking the Global Health Observatory Dataset,” Semantic Web Journal, vol. Special Call for Linked Dataset descriptions, iss. 3, pp. 315-322, 2013.
    [BibTeX] [Abstract] [Download PDF]
    The improvement of public health is one of the main indicators for societal progress. Statistical data for monitoring public health is highly relevant for a number of sectors, such as research (e.g. in the life sciences or economy), policy making, health care, pharmaceutical industry, insurances etc. Such data is meanwhile available even on a global scale, e.g. in the Global Health Observatory (GHO) of the United Nations’s World Health Organization (WHO). GHO comprises more than 50 different datasets, it covers all 198 WHO member countries and is updated as more recent or revised data becomes available or when there are changes to the methodology being used. However, this data is only accessible via complex spreadsheets and, therefore, queries over the 50 different datasets as well as combinations with other datasets are very tedious and require a significant amount of manual work. By making the data available as RDF, we lower the barrier for data re-use and integration. In this article, we describe the conversion and publication process as well as use cases, which can be implemented using the GHO data.

    @Article{zaveri-gho,
    Title = {Publishing and Interlinking the Global Health Observatory Dataset},
    Author = {Amrapali Zaveri and Jens Lehmann and S{\"o}ren Auer and Mofeed M. Hassan and Mohamed A. Sherif and Michael Martin},
    Journal = {Semantic Web Journal},
    Year = {2013},
    Number = {3},
    Pages = {315-322},
    Volume = {Special Call for Linked Dataset descriptions},
    Abstract = {The improvement of public health is one of the main indicators for societal progress. Statistical data for monitoring public health is highly relevant for a number of sectors, such as research (e.g. in the life sciences or economy), policy making, health care, pharmaceutical industry, insurances etc. Such data is meanwhile available even on a global scale, e.g. in the Global Health Observatory (GHO) of the United Nations's World Health Organization (WHO). GHO comprises more than 50 different datasets, it covers all 198 WHO member countries and is updated as more recent or revised data becomes available or when there are changes to the methodology being used. However, this data is only accessible via complex spreadsheets and, therefore, queries over the 50 different datasets as well as combinations with other datasets are very tedious and require a significant amount of manual work. By making the data available as RDF, we lower the barrier for data re-use and integration. In this article, we describe the conversion and publication process as well as use cases, which can be implemented using the GHO data. },
    Bdsk-url-1 = {http://www.semantic-web-journal.net/system/files/swj433.pdf},
    Date-modified = {2013-07-11 19:43:06 +0000},
    Ee = {http://dx.doi.org/10.3233/SW-130102},
    Keywords = {2013 MOLE group_aksw zaveri martin lehmann auer hassan sherif sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed gho},
    Owner = {micha},
    Url = {http://www.semantic-web-journal.net/system/files/swj433.pdf}
    }

  • A. Zaveri, D. Kontokostas, M. A. Sherif, L. Bühmann, M. Morsey, S. Auer, and J. Lehmann, “User-driven Quality Evaluation of DBpedia,” in Proceedings of the 9th International Conference on Semantic Systems, I-SEMANTICS ’13, Graz, Austria, September 4-6, 2013, 2013, pp. 97-104.
    [BibTeX] [Abstract] [Download PDF]
    Linked Open Data (LOD) comprises of an unprecedented volume of structured datasets on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced and even extracted data of relatively low quality. We present a methodology for assessing the quality of linked data resources, which comprises of a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises of the evaluation of a large number of individual resources, according to the quality problem taxonomy via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia. We identified 17 data quality problem types and 58 users assessed a total of 521 resources. Overall, 11.93\% of the evaluated DBpedia triples were identified to have some quality issues. Applying the semi-automatic component yielded a total of 222,982 triples that have a high probability to be incorrect. In particular, we found that problems such as object values being incorrectly extracted, irrelevant extraction of information and broken links were the most recurring quality problems. With this study, we not only aim to assess the quality of this sample of DBpedia resources but also adopt an agile methodology to improve the quality in future versions by regularly providing feedback to the DBpedia maintainers.

    @InProceedings{zaveri2013,
    Title = {User-driven Quality Evaluation of DBpedia},
    Author = {Amrapali Zaveri and Dimitris Kontokostas and Mohamed Ahmed Sherif and Lorenz B\"uhmann and Mohamed Morsey and S\"oren Auer and Jens Lehmann},
    Booktitle = {Proceedings of the 9th International Conference on Semantic Systems, I-SEMANTICS '13, Graz, Austria, September 4-6, 2013},
    Year = {2013},
    Pages = {97-104},
    Publisher = {ACM},
    Abstract = {Linked Open Data (LOD) comprises of an unprecedented volume of structured datasets on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced and even extracted data of relatively low quality. We present a methodology for assessing the quality of linked data resources, which comprises of a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises of the evaluation of a large number of individual resources, according to the quality problem taxonomy via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia. We identified 17 data quality problem types and 58 users assessed a total of 521 resources. Overall, 11.93\% of the evaluated DBpedia triples were identified to have some quality issues. Applying the semi-automatic component yielded a total of 222,982 triples that have a high probability to be incorrect. In particular, we found that problems such as object values being incorrectly extracted, irrelevant extraction of information and broken links were the most recurring quality problems. With this study, we not only aim to assess the quality of this sample of DBpedia resources but also adopt an agile methodology to improve the quality in future versions by regularly providing feedback to the DBpedia maintainers.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/ISemantics_DBpediaDQ/public.pdf},
    Date-modified = {2015-02-06 06:56:39 +0000},
    Ee = {http://doi.acm.org/10.1145/2506182.2506195},
    Keywords = {zaveri sherif morsey kontokostas auer lehmann group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2013 event_I-Semantics dbpediadq sys:relevantFor:geoknow topic_QualityAnalysis dataquality MOLE buehmann},
    Owner = {soeren},
    Timestamp = {2013.06.01},
    Url = {http://svn.aksw.org/papers/2013/ISemantics_DBpediaDQ/public.pdf}
    }

  • J. Lehmann, Q. Nguyen, and T. Ermilov, “Can we Create Better Links by Playing Games?,” in 7th IEEE International Conference on Semantic Computing, September 16-18, 2013, Irvine, California, USA, 2013, pp. 322-329.
    [BibTeX] [Download PDF]
    @InProceedings{veriLinks,
    Title = {Can we Create Better Links by Playing Games?},
    Author = {Jens Lehmann and Quan Nguyen and Timofey Ermilov},
    Booktitle = {7th IEEE International Conference on Semantic Computing, September 16-18, 2013, Irvine, California, USA},
    Year = {2013},
    Pages = {322-329},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/icsc_verilinks.pdf},
    Keywords = {ermilov nguyen lehmann group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow lod2page 2013 event_ICSC},
    Url = {http://jens-lehmann.org/files/2013/icsc_verilinks.pdf}
    }

  • J. Lehmann, K. Höffner, S. Prätor, S. Lehmann, A. Ngonga Ngomo, A. Garcia-Rojas, and S. Athanasiou, “GeoKnow: Geo-Anwendungen im Daten-Web,” gis.Business, iss. 5, pp. 48-51, 2013.
    [BibTeX]
    @Article{geoknow_business,
    Title = {GeoKnow: Geo-Anwendungen im Daten-Web},
    Author = {Jens Lehmann and Konrad H{\"o}ffner and Sandra Pr{\"a}tor and Stephanie Lehmann and Axel-Cyrille {Ngonga Ngomo} and Alejandra Garcia-Rojas and Spiros Athanasiou},
    Journal = {gis.Business},
    Year = {2013},
    Number = {5},
    Pages = {48--51},
    Keywords = {lehmann group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow lod2page 2013 hoeffner}
    }

  • L. Bühmann and J. Lehmann, “Pattern Based Knowledge Base Enrichment,” in The Semantic Web — ISWC 2013, H. Alani, L. Kagal, A. Fokoue, P. Groth, C. Biemann, J. X. Parreira, L. Aroyo, N. Noy, C. Welty, and K. Janowicz, Eds., Springer Berlin Heidelberg, 2013, vol. 8218, pp. 33-48. doi:10.1007/978-3-642-41335-3_3
    [BibTeX] [Download PDF]
    @InCollection{pattern_enrichment,
    Title = {Pattern Based Knowledge Base Enrichment},
    Author = {B{\"u}hmann, Lorenz and Lehmann, Jens},
    Booktitle = {The Semantic Web -- ISWC 2013},
    Publisher = {Springer Berlin Heidelberg},
    Year = {2013},
    Editor = {Alani, Harith and Kagal, Lalana and Fokoue, Achille and Groth, Paul and Biemann, Chris and Parreira, Josiane Xavier and Aroyo, Lora and Noy, Natasha and Welty, Chris and Janowicz, Krzysztof},
    Pages = {33-48},
    Series = {Lecture Notes in Computer Science},
    Volume = {8218},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/ISWC_Pattern_Enrichment/public.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1007/978-3-642-41335-3_3},
    Doi = {10.1007/978-3-642-41335-3_3},
    ISBN = {978-3-642-41334-6},
    Keywords = {buehmann lehmann group_aksw group_mole MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2013 event_ISWC dllearner ore sys:relevantFor:geoknow topic_Enrichment},
    Language = {English},
    Owner = {lorenz},
    Timestamp = {2014.11.24},
    Url = {http://svn.aksw.org/papers/2013/ISWC_Pattern_Enrichment/public.pdf}
    }

  • S. Auer, J. Lehmann, A. Ngonga Ngomo, and A. Zaveri, “Introduction to Linked Data and Its Lifecycle on the Web,” in Reasoning Web, 2013, pp. 1-90.
    [BibTeX] [Download PDF]
    @InProceedings{AUE+13,
    Title = {Introduction to Linked Data and Its Lifecycle on the Web},
    Author = {S{\"o}ren Auer and Jens Lehmann and Axel-Cyrille {Ngonga Ngomo} and Amrapali Zaveri},
    Booktitle = {Reasoning Web},
    Year = {2013},
    Pages = {1-90},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/reasoning_web_linked_data.pdf},
    Date-modified = {2012-12-02 13:07:41 +0000},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_GeoSemWeb SIMBA zaveri ngonga auer lehmann MOLE simba 2013},
    Url = {http://jens-lehmann.org/files/2013/reasoning_web_linked_data.pdf}
    }

  • S. Auer, J. Lehmann, A. Ngonga Ngomo, C. Stadler, and J. Unbehauen, “Extraktion, Mapping und Verlinkung von Daten im Web,” Datenbank Spektrum, vol. 13, iss. 2, pp. 77-87, 2013.
    [BibTeX] [Download PDF]
    @Article{AUE+13a,
    Title = {Extraktion, Mapping und Verlinkung von Daten im Web},
    Author = {S{\"o}ren Auer and Jens Lehmann and Axel-Cyrille {Ngonga Ngomo} and Claus Stadler and J{\"o}rg Unbehauen},
    Journal = {Datenbank Spektrum},
    Year = {2013},
    Number = {2},
    Pages = {77-87},
    Volume = {13},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/db_spektrum_linked_data.pdf},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis mole simba lehmann MOLE ngonga auer unbehauen stadler linkinglod 2013 sys:relevantFor:geoknow topic_Extraction topic_Interlinking topic_Map},
    Owner = {ngonga},
    Timestamp = {2013.07.05},
    Url = {http://jens-lehmann.org/files/2013/db_spektrum_linked_data.pdf}
    }

  • I. Ermilov, M. Martin, J. Lehmann, and S. Auer, “Linked Open Data Statistics: Collection and Exploitation,” in Proceedings of the 4th Conference on Knowledge Engineering and Semantic Web, 2013.
    [BibTeX] [Download PDF]
    @InProceedings{ermilov-2013-kesw,
    Title = {Linked Open Data Statistics: Collection and Exploitation},
    Author = {Ivan Ermilov and Michael Martin and Jens Lehmann and S{\"o}ren Auer},
    Booktitle = {Proceedings of the 4th Conference on Knowledge Engineering and Semantic Web},
    Year = {2013},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/KESW_LODStats_Demo/public.pdf},
    Keywords = {group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_Statistics auer lehmann martin iermilov 2013 lodstats},
    Owner = {ivan},
    Timestamp = {2013.09.07},
    Url = {http://svn.aksw.org/papers/2013/KESW_LODStats_Demo/public.pdf}
    }

  • A. Garcia-Rojas, S. Athanasiou, J. Lehmann, and D. Hladky, “GeoKnow: Leveraging Geospatial Data in the Web of Data,” in Open Data on the Web Workshop, 2013.
    [BibTeX] [Download PDF]
    @InProceedings{Garcia-Rojas2013,
    Title = {{GeoKnow}: Leveraging Geospatial Data in the Web of Data},
    Author = {Alejandra Garcia-Rojas and Spiros Athanasiou and Jens Lehmann and Daniel Hladky},
    Booktitle = {Open Data on the Web Workshop},
    Year = {2013},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/odw_geoknow.pdf},
    Keywords = {2013 MOLE group_aksw lehmann sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 sys:relevantFor:geoknow topic_geospatial lod2page geoknow peer-reviewed},
    Owner = {jl},
    Timestamp = {2013.04.27},
    Url = {http://jens-lehmann.org/files/2013/odw_geoknow.pdf}
    }

  • D. Kontokostas, A. Zaveri, S. Auer, and J. Lehmann, “TripleCheckMate: A Tool for Crowdsourcing the Quality Assessment of Linked Data,” in Proceedings of the 4th Conference on Knowledge Engineering and Semantic Web, 2013.
    [BibTeX] [Download PDF]
    @InProceedings{kontokostas-2013-kesw,
    Title = {TripleCheckMate: A Tool for Crowdsourcing the Quality Assessment of Linked Data},
    Author = {Dimitris Kontokostas and Amrapali Zaveri and S\"oren Auer and Jens Lehmann},
    Booktitle = {Proceedings of the 4th Conference on Knowledge Engineering and Semantic Web},
    Year = {2013},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2013/kesw_triplecheckmate.pdf},
    Date-modified = {2015-02-06 06:57:04 +0000},
    Keywords = {group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow topic_QualityAnalysis auer lehmann kontokostas zaveri 2013 dataquality},
    Url = {http://jens-lehmann.org/files/2013/kesw_triplecheckmate.pdf}
    }

  • S. Hellmann, J. Lehmann, S. Auer, and M. Brümmer, “Integrating NLP using Linked Data,” in 12th International Semantic Web Conference, 21-25 October 2013, Sydney, Australia, 2013.
    [BibTeX] [Abstract] [Download PDF]
    We are currently observing a plethora of Natural Language Processing tools and services being made available. Each of the tools and services has its particular strengths and weaknesses, but exploiting the strengths and synergistically combining different tools is currently an extremely cumbersome and time consuming task. Also, once a particular set of tools is integrated, this integration is not reusable by others. We argue that simplifying the interoperability of different NLP tools performing similar but also complementary tasks will facilitate the comparability of results and the creation of sophisticated NLP applications. In this paper, we present the NLP Interchange Format (NIF). NIF is based on a Linked Data enabled URI scheme for identifying elements in (hyper-)texts and an ontology for describing common NLP terms and concepts. In contrast to more centralized solutions such as UIMA and GATE, NIF enables the creation of heterogeneous, distributed and loosely coupled NLP applications, which use the Web as an integration platform. We present several use cases of the second version of the NIF specification (NIF 2.0) and the result of a developer study.

    @InProceedings{Hellmann-2013-iswc,
    Title = {Integrating NLP using Linked Data},
    Author = {Sebastian Hellmann and Jens Lehmann and S\"oren Auer and Martin Br{\"u}mmer},
    Booktitle = {12th International Semantic Web Conference, 21-25 October 2013, Sydney, Australia},
    Year = {2013},
    Abstract = {We are currently observing a plethora of Natural Language Processing tools and services being made available. Each of the tools and services has its particular strengths and weaknesses, but exploiting the strengths and synergistically combining different tools is currently an extremely cumbersome and time consuming task. Also, once a particular set of tools is integrated, this integration is not reusable by others. We argue that simplifying the interoperability of different NLP tools performing similar but also complementary tasks will facilitate the comparability of results and the creation of sophisticated NLP applications. In this paper, we present the NLP Interchange Format (NIF). NIF is based on a Linked Data enabled URI scheme for identifying elements in (hyper-)texts and an ontology for describing common NLP terms and concepts. In contrast to more centralized solutions such as UIMA and GATE, NIF enables the creation of heterogeneous, distributed and loosely coupled NLP applications, which use the Web as an integration platform. We present several use cases of the second version of the NIF specification (NIF 2.0) and the result of a developer study.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/ISWC_NIF/public.pdf},
    Keywords = {hellmann kilt bruemmer lehmann MOLE auer group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2013 event_ISWC},
    Owner = {soeren},
    Timestamp = {2013.06.01},
    Url = {http://svn.aksw.org/papers/2013/ISWC_NIF/public.pdf}
    }

  • K. Höffner, C. Unger, L. Bühmann, J. Lehmann, A. Ngonga Ngomo, D. Gerber, and P. Cimiano, “User Interface for a Template Based Question Answering System,” in Proceedings of the 4th Conference on Knowledge Engineering and Semantic Web, 2013, pp. 258-264.
    [BibTeX] [Download PDF]
    @InProceedings{hoeffner-2013-kesw,
    Title = {User Interface for a Template Based {Q}uestion {A}nswering System},
    Author = {Konrad H{\"o}ffner and Christina Unger and Lorenz B{\"u}hmann and Jens Lehmann and Axel-Cyrille {Ngonga Ngomo} and Daniel Gerber and Philipp Cimiano},
    Booktitle = {Proceedings of the 4th Conference on Knowledge Engineering and Semantic Web},
    Year = {2013},
    Pages = {258-264},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/KESW_AutoSparqlTbsl_Demo/public.pdf},
    Ee = {http://dx.doi.org/10.1007/978-3-642-41360-5_21},
    Keywords = {group_aksw SIMBA MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:geoknow ngonga topic_Search topic_Querying lehmann hoeffner 2013 autosparql tbsl buehmann},
    Url = {http://svn.aksw.org/papers/2013/KESW_AutoSparqlTbsl_Demo/public.pdf}
    }

  • M. Acosta, A. Zaveri, E. Simperl, D. Kontokostas, S. Auer, and J. Lehmann, “Crowdsourcing Linked Data quality assessment,” in 12th International Semantic Web Conference, 21-25 October 2013, Sydney, Australia, 2013, pp. 260-276.
    [BibTeX] [Abstract] [Download PDF]
    In this paper we look into the use of crowdsourcing as a means to handle Linked Data quality problems that are challenging to be solved automatically. We analyzed the most common errors encountered in Linked Data sources and classified them according to the extent to which they are likely to be amenable to a specific crowdsourcing approach. Based on this analysis, we implemented and compared two quality assessment methods for Linked Data that leverage the wisdom of the crowds in different ways: (i) a contest format targeting an expert crowd of researchers and Linked Data enthusiasts; and (ii) paid microtasks published on Amazon Mechanical Turk. We evaluated the two methods empirically in terms of their capacity to spot quality issues in DBpedia and investigated how the contributions of the two crowds could be optimally integrated into Linked Data curation processes. The results showed that the two styles of crowdsourcing are complementary, and that crowdsourcing-enabled quality assessment is a promising and affordable way to enhance the quality of Linked Data sets.

    @InProceedings{Acosta2013,
    Title = {Crowdsourcing Linked Data quality assessment},
    Author = {Maribel Acosta and Amrapali Zaveri and Elena Simperl and Dimitris Kontokostas and S\"oren Auer and Jens Lehmann},
    Booktitle = {12th International Semantic Web Conference, 21-25 October 2013, Sydney, Australia},
    Year = {2013},
    Pages = {260-276},
    Abstract = {In this paper we look into the use of crowdsourcing as a means to handle Linked Data quality problems that are challenging to be solved automatically. We analyzed the most common errors encountered in Linked Data sources and classified them according to the extent to which they are likely to be amenable to a specific crowdsourcing approach. Based on this analysis, we implemented and compared two quality assessment methods for Linked Data that leverage the wisdom of the crowds in different ways: (i) a contest format targeting an expert crowd of researchers and Linked Data enthusiasts; and (ii) paid microtasks published on Amazon Mechanical Turk. We evaluated the two methods empirically in terms of their capacity to spot quality issues in DBpedia and investigated how the contributions of the two crowds could be optimally integrated into Linked Data curation processes. The results showed that the two styles of crowdsourcing are complementary, and that crowdsourcing-enabled quality assessment is a promising and affordable way to enhance the quality of Linked Data sets.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/ISWC_Crowdsourcing/public.pdf},
    Date-modified = {2015-02-06 06:57:18 +0000},
    Keywords = {zaveri auer lehmann kontokostas group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2013 event_ISWC dbpediadqcrowd sys:relevantFor:geoknow topic_Crowdsourcing, topic_QualityAnalysis dataquamole MOLE},
    Owner = {soeren},
    Timestamp = {2013.06.01},
    Url = {http://svn.aksw.org/papers/2013/ISWC_Crowdsourcing/public.pdf}
    }

2012

  • M. Morsey, J. Lehmann, S. Auer, and A. Ngonga Ngomo, “Usage-Centric Benchmarking of RDF Triple Stores,” in Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI 2012), 2012.
    [BibTeX] [Download PDF]
    @InProceedings{MOR+12,
    Title = {Usage-{C}entric {B}enchmarking of {RDF} {T}riple {S}tores},
    Author = {Mohamed Morsey and Jens Lehmann and S{\"o}ren Auer and Axel-Cyrille {Ngonga Ngomo}},
    Booktitle = {Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI 2012)},
    Year = {2012},
    Bdsk-url-1 = {http://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/download/5168/5384},
    Date-modified = {2012-12-02 13:10:44 +0000},
    Keywords = {2012 group_aksw SIMBA sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 morsey ngonga lehmann auer MOLE dbpsb limes},
    Owner = {mohamed},
    Timestamp = {2012.07.26},
    Url = {http://www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/download/5168/5384}
    }

  • J. Lehmann, D. Gerber, M. Morsey, and A. Ngonga Ngomo, “DeFacto – Deep Fact Validation,” in Proc. of the International Semantic Web Conference, 2012.
    [BibTeX] [Download PDF]
    @InProceedings{LEH+12a,
    Title = {DeFacto - Deep Fact Validation},
    Author = {Jens Lehmann and Daniel Gerber and Mohamed Morsey and Axel-Cyrille {Ngonga Ngomo}},
    Booktitle = {Proc. of the International Semantic Web Conference},
    Year = {2012},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2012/iswc_defacto.pdf},
    Keywords = {2012 group_aksw SIMBA MOLE sys:relevantFor:infai sys:relevantFor:bis ngonga lehmann gerber boa defacto morsey},
    Url = {http://jens-lehmann.org/files/2012/iswc_defacto.pdf}
    }

  • J. Lehmann, T. Furche, G. Grasso, A. Ngonga Ngomo, C. Schallhart, A. Sellers, C. Unger, L. Bühmann, D. Gerber, K. Höffner, D. Liu, and S. Auer, “DEQA: Deep Web Extraction for Question Answering,” in Proceedings of ISWC, 2012.
    [BibTeX] [Download PDF]
    @InProceedings{Lehmann2012,
    Title = {DEQA: Deep Web Extraction for Question Answering},
    Author = {Jens Lehmann and Tim Furche and Giovanni Grasso and Axel-Cyrille {Ngonga Ngomo} and Christian Schallhart and Andrew Sellers and Christina Unger and Lorenz B{\"u}hmann and Daniel Gerber and Konrad H{\"o}ffner and David Liu and S{\"o}ren Auer},
    Booktitle = {Proceedings of ISWC},
    Year = {2012},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2012/iswc_deqa.pdf},
    Date-modified = {2012-12-02 12:51:46 +0000},
    Keywords = {2012 group_aksw SIMBA MOLE sys:relevantFor:infai sys:relevantFor:bis ngonga lehmann gerber buehmann boa auer hoeffner limes},
    Owner = {ngonga},
    Timestamp = {2012.09.18},
    Url = {http://jens-lehmann.org/files/2012/iswc_deqa.pdf}
    }

  • M. Morsey, J. Lehmann, S. Auer, C. Stadler, and S. Hellmann, “DBpedia and the Live Extraction of Structured Data from Wikipedia,” Program: electronic library and information systems, vol. 46, p. 27, 2012.
    [BibTeX] [Abstract] [Download PDF]
    Purpose – DBpedia extracts structured information from Wikipedia, interlinks it with other knowledge bases and freely publishes the results on the Web using Linked Data and SPARQL. However, the DBpedia release process is heavy-weight and releases are sometimes based on several months old data. DBpedia-Live solves this problem by providing a live synchronization method based on the update stream of Wikipedia. Design/methodology/approach – Wikipedia provides DBpedia with a continuous stream of updates, i.e. a stream of recently updated articles. DBpedia-Live processes that stream on the fly to obtain RDF data and stores the extracted data back to DBpedia. DBpedia-Live publishes the newly added/deleted triples in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Findings – During the realization of DBpedia-Live we learned, that it is crucial to process Wikipedia updates in a priority queue. Recently-updated Wikipedia articles should have the highest priority, over mapping-changes and unmodified pages. An overall finding is that there is a plenty of opportunities arising from the emerging Web of Data for librarians. Practical implications – DBpedia had and has a great effect on the Web of Data and became a crystallization point for it. Many companies and researchers use DBpedia and its public services to improve their applications and research approaches. The DBpedia-Live framework improves DBpedia further by timely synchronizing it with Wikipedia, which is relevant for many use cases requiring up-to-date information. Originality/value – The new DBpedia-Live framework adds new features to the old DBpedia-Live framework, e.g. abstract extraction, ontology changes, and changesets publication.

    @Article{dbpedia_live_2012,
    Title = {{DB}pedia and the {L}ive {E}xtraction of {S}tructured {D}ata from {W}ikipedia},
    Author = {Mohamed Morsey and Jens Lehmann and S{\"o}ren Auer and Claus Stadler and Sebastian Hellmann},
    Journal = {Program: electronic library and information systems},
    Year = {2012},
    Pages = {27},
    Volume = {46},
    Abstract = {Purpose - DBpedia extracts structured information from Wikipedia, interlinks it with other knowledge bases and freely publishes the results on the Web using Linked Data and SPARQL. However, the DBpedia release process is heavy-weight and releases are sometimes based on several months old data. DBpedia-Live solves this problem by providing a live synchronization method based on the update stream of Wikipedia. Design/methodology/approach - Wikipedia provides DBpedia with a continuous stream of updates, i.e. a stream of recently updated articles. DBpedia-Live processes that stream on the fly to obtain RDF data and stores the extracted data back to DBpedia. DBpedia-Live publishes the newly added/deleted triples in files, in order to enable synchronization between our DBpedia endpoint and other DBpedia mirrors. Findings - During the realization of DBpedia-Live we learned, that it is crucial to process Wikipedia updates in a priority queue. Recently-updated Wikipedia articles should have the highest priority, over mapping-changes and unmodified pages. An overall finding is that there is a plenty of opportunities arising from the emerging Web of Data for librarians. Practical implications - DBpedia had and has a great effect on the Web of Data and became a crystallization point for it. Many companies and researchers use DBpedia and its public services to improve their applications and research approaches. The DBpedia-Live framework improves DBpedia further by timely synchronizing it with Wikipedia, which is relevant for many use cases requiring up-to-date information. Originality/value - The new DBpedia-Live framework adds new features to the old DBpedia-Live framework, e.g. abstract extraction, ontology changes, and changesets publication.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2011/DBpedia_Live/public.pdf},
    Date-modified = {2012-12-02 13:06:10 +0000},
    Keywords = {2012 group_aksw MOLE sys:relevantFor:bis sys:relevantFor:lod2 morsey lehmann auer stadler hellmann kilt},
    Owner = {mohamed},
    Timestamp = {2012.04.13},
    Url = {http://svn.aksw.org/papers/2011/DBpedia_Live/public.pdf}
    }
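
    The DBpedia-Live abstract notes that updates must be processed in a priority queue, with recent article edits ranked above mapping changes and unmodified pages. A minimal sketch of that scheduling idea (purely illustrative; the class and event names are invented here, not taken from the DBpedia-Live codebase):

    ```python
    import heapq

    # Lower number = higher priority, as the abstract describes:
    # live article edits first, then mapping changes, then unmodified pages.
    PRIORITY = {"live_update": 0, "mapping_change": 1, "unmodified": 2}

    class UpdateQueue:
        def __init__(self):
            self._heap = []
            self._counter = 0  # tie-breaker: FIFO order within one priority level

        def push(self, kind: str, page: str) -> None:
            heapq.heappush(self._heap, (PRIORITY[kind], self._counter, page))
            self._counter += 1

        def pop(self) -> str:
            return heapq.heappop(self._heap)[2]

    q = UpdateQueue()
    q.push("unmodified", "Leipzig")
    q.push("mapping_change", "infobox_person")
    q.push("live_update", "Berlin")
    print(q.pop())  # "Berlin" — the live edit is extracted before the rest
    ```

    The counter field keeps items with equal priority in arrival order, so a burst of live edits is still processed first-come, first-served.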

  • A. Ngonga Ngomo, J. Lehmann, S. Auer, and K. Höffner, “RAVEN — Towards Zero-Configuration Link Discovery,” 2012.
    [BibTeX] [Download PDF]
    @TechReport{NgongaNgomo2012,
    Title = {RAVEN -- Towards Zero-Configuration Link Discovery},
    Author = {Axel-Cyrille {Ngonga Ngomo} and Jens Lehmann and S{\"o}ren Auer and Konrad H{\"o}ffner},
    Year = {2012},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2012/raven_report.pdf},
    Institution = {University of Leipzig},
    Keywords = {2012 group_aksw SIMBA MOLE sys:relevantFor:infai sys:relevantFor:bis ngonga lehmann auer hoeffner scms},
    Timestamp = {2017.10.12},
    Url = {http://jens-lehmann.org/files/2012/raven_report.pdf}
    }

  • C. Unger, L. Bühmann, J. Lehmann, A. Ngonga Ngomo, D. Gerber, and P. Cimiano, “Template-based Question Answering over RDF data,” in Proceedings of the 21st international conference on World Wide Web, 2012, pp. 639-648.
    [BibTeX] [Download PDF]
    @InProceedings{unger2012template,
    Title = {Template-based {Q}uestion {A}nswering over {RDF} data},
    Author = {Unger, Christina and B{\"u}hmann, Lorenz and Lehmann, Jens and Ngonga Ngomo, Axel-Cyrille and Gerber, Daniel and Cimiano, Philipp},
    Booktitle = {Proceedings of the 21st international conference on World Wide Web},
    Year = {2012},
    Pages = {639--648},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2012/tbsl_www.pdf},
    Keywords = {2012 group_aksw SIMBA sys:relevantFor:infai boa sys:relevantFor:bis ngonga lehmann gerber MOLE buehmann autosparql},
    Owner = {ngonga},
    Url = {http://jens-lehmann.org/files/2012/tbsl_www.pdf}
    }

  • C. Stadler, J. Lehmann, K. Höffner, and S. Auer, “LinkedGeoData: A Core for a Web of Spatial Open Data,” Semantic Web Journal, vol. 3, iss. 4, pp. 333-354, 2012.
    [BibTeX] [Download PDF]
    @Article{SLHA11,
    Title = {LinkedGeoData: A Core for a Web of Spatial Open Data},
    Author = {Claus Stadler and Jens Lehmann and Konrad H{\"o}ffner and S{\"o}ren Auer},
    Journal = {Semantic Web Journal},
    Year = {2012},
    Number = {4},
    Pages = {333-354},
    Volume = {3},
    Bdsk-url-1 = {http://www.semantic-web-journal.net/sites/default/files/swj173_2.pdf},
    Date-modified = {2012-12-02 13:09:43 +0000},
    Keywords = {sys:relevantFor:bis sys:relevantFor:infai stadler lehmann hoeffner auer MOLE lgd 2012 group_aksw},
    Owner = {stadler},
    Timestamp = {2011.09.09},
    Url = {http://jens-lehmann.org/files/2012/linkedgeodata2.pdf}
    }

  • S. Hellmann, J. Lehmann, J. Unbehauen, C. Stadler, T. N. Lam, and M. Strohmaier, “Navigation-induced Knowledge Engineering by Example,” in JIST, 2012.
    [BibTeX] [Download PDF]
    @InProceedings{hellmann-jist-2012-NKE,
    Title = {Navigation-induced Knowledge Engineering by Example},
    Author = {Sebastian Hellmann and Jens Lehmann and J{\"o}rg Unbehauen and Claus Stadler and Thanh Nghia Lam and Markus Strohmaier},
    Booktitle = {JIST},
    Year = {2012},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2012/JIST_NKE/public.pdf},
    Date-modified = {2012-12-05 07:35:57 +0000},
    Keywords = {2012 group_aksw event_jist group_mole MOLE hellmann kilt stadler unbehauen strohmaier sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed lehmann dllearner},
    Owner = {sebastian},
    Timestamp = {2012.10.19},
    Url = {http://svn.aksw.org/papers/2012/JIST_NKE/public.pdf}
    }

  • S. Hellmann, J. Lehmann, S. Auer, and M. Nitzschke, “NIF Combinator: Combining NLP Tool Output,” in EKAW, 2012, pp. 446-449.
    [BibTeX]
    @InProceedings{hellmann-2012-nif-combinator,
    Title = {NIF Combinator: Combining NLP Tool Output},
    Author = {Sebastian Hellmann and Jens Lehmann and S{\"o}ren Auer and Marcus Nitzschke},
    Booktitle = {EKAW},
    Year = {2012},
    Note = {published at EKAW, preprint http://jens-lehmann.org/files/2012/ekaw_nif_combinator.pdf},
    Pages = {446-449},
    Date-modified = {2012-12-02 13:03:56 +0000},
    Ee = {http://dx.doi.org/10.1007/978-3-642-33876-2_44},
    Keywords = {2012 hellmann kilt lehmann auer nitzschke group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis peer-reviewed sys:relevantFor:lod2 lod2page kilt}
    }

  • D. Cherix, S. Hellmann, and J. Lehmann, “Improving the Performance of a SPARQL Component for Semantic Web Applications,” in JIST, 2012.
    [BibTeX] [Download PDF]
    @InProceedings{hellmann-jist-2012-sparql,
    Title = {Improving the Performance of a {SPARQL} Component for Semantic Web Applications},
    Author = {Didier Cherix and Sebastian Hellmann and Jens Lehmann},
    Booktitle = {JIST},
    Year = {2012},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2012/SPARQLComponent/jist2012/public.pdf},
    Date-modified = {2012-12-02 13:03:29 +0000},
    Keywords = {2012 group_aksw event_jist group_mole MOLE lehmann hellmann kilt cherix sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peerdllearner dllearner},
    Owner = {sebastian},
    Timestamp = {2012.10.19},
    Url = {http://svn.aksw.org/papers/2012/SPARQLComponent/jist2012/public.pdf}
    }

  • L. Bühmann and J. Lehmann, “Universal OWL Axiom Enrichment for Large Knowledge Bases,” in Proceedings of EKAW 2012, 2012, pp. 57-71.
    [BibTeX] [Download PDF]
    @InProceedings{Buhmann2012,
    Title = {Universal {OWL} Axiom Enrichment for Large Knowledge Bases},
    Author = {Lorenz B{\"u}hmann and Jens Lehmann},
    Booktitle = {Proceedings of EKAW 2012},
    Year = {2012},
    Pages = {57--71},
    Publisher = {Springer},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2012/ekaw_enrichment.pdf},
    Date-modified = {2012-12-02 13:07:03 +0000},
    Keywords = {2012 group_aksw event_ekaw group_mole mole buehmann lehmann MOLE dllearner ore sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed},
    Owner = {jl},
    Timestamp = {2012.07.18},
    Url = {http://jens-lehmann.org/files/2012/ekaw_enrichment.pdf}
    }

  • S. Auer, L. Bühmann, C. Dirschl, O. Erling, M. Hausenblas, R. Isele, J. Lehmann, M. Martin, P. N. Mendes, B. van Nuffelen, C. Stadler, S. Tramp, and H. Williams, “Managing the life-cycle of Linked Data with the LOD2 Stack,” in Proceedings of International Semantic Web Conference (ISWC 2012), 2012.
    [BibTeX] [Download PDF]
    @InProceedings{Auer+ISWC-2012,
    Title = {Managing the life-cycle of Linked Data with the {LOD2} Stack},
    Author = {S\"{o}ren Auer and Lorenz B{\"u}hmann and Christian Dirschl and Orri Erling and Michael Hausenblas and Robert Isele and Jens Lehmann and Michael Martin and Pablo N. Mendes and Bert van Nuffelen and Claus Stadler and Sebastian Tramp and Hugh Williams},
    Booktitle = {Proceedings of International Semantic Web Conference (ISWC 2012)},
    Year = {2012},
    Note = {22\% acceptance rate},
    Bdsk-url-1 = {http://svn.aksw.org/lod2/Paper/ISWC2012-InUse_LOD2-Stack/public.pdf},
    Date-modified = {2012-12-02 12:25:29 +0000},
    Keywords = {auer buehmann lehmann tramp martin stadler dllearner group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2012 event_ISWC MOLE ES interlinking quality analysis search exploration browsing extraction storage querying manual revision authoring fusing},
    Owner = {soeren},
    Timestamp = {2012.08.14},
    Url = {http://iswc2012.semanticweb.org/sites/default/files/76500001.pdf}
    }

  • J. Demter, S. Auer, M. Martin, and J. Lehmann, “LODStats — An Extensible Framework for High-performance Dataset Analytics,” in Proceedings of the EKAW 2012, 2012.
    [BibTeX] [Download PDF]
    @InProceedings{Demter2012,
    Title = {LODStats -- An Extensible Framework for High-performance Dataset Analytics},
    Author = {Jan Demter and S{\"o}ren Auer and Michael Martin and Jens Lehmann},
    Booktitle = {Proceedings of the EKAW 2012},
    Year = {2012},
    Note = {29\% acceptance rate},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science (LNCS) 7603},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2011/RDFStats/public.pdf},
    Date-modified = {2012-12-02 13:05:55 +0000},
    Keywords = {2012 group_aksw event_ekaw martin auer lehmann lodstats sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed MOLE},
    Owner = {michael},
    Timestamp = {2012.07.10},
    Url = {http://svn.aksw.org/papers/2011/RDFStats/public.pdf}
    }

  • S. Hellmann, J. Lehmann, and S. Auer, “Linked-Data Aware URI Schemes for Referencing Text Fragments,” in EKAW 2012, 2012. doi:10.1007/978-3-642-16438-5_10
    [BibTeX]
    @InProceedings{hellmann-2012-ekaw,
    Title = {Linked-Data Aware URI Schemes for Referencing Text Fragments},
    Author = {Sebastian Hellmann and Jens Lehmann and S{\"o}ren Auer},
    Booktitle = {EKAW 2012},
    Year = {2012},
    Note = {published at EKAW, preprint: http://svn.aksw.org/papers/2012/NIF/EKAW_short_paper/public.pdf},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science (LNCS) 7603},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2012/NIF/EKAW_short_paper/public.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1007/978-3-642-16438-5_10},
    Date-modified = {2012-12-02 12:28:06 +0000},
    Doi = {10.1007/978-3-642-16438-5_10},
    Keywords = {2012 group_aksw event_ekaw hellmann kilt auer lehmann nif sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed MOLE kilt},
    Owner = {sebastian},
    Timestamp = {2012.06.13}
    }

  • O. Gunes, C. Schallhart, T. Furche, J. Lehmann, and A. Ngonga Ngomo, “EAGER: extending automatically gazetteers for entity recognition,” in Proceedings of the 3rd Workshop on the People’s Web Meets NLP: Collaboratively Constructed Semantic Resources and their Applications to NLP, 2012.
    [BibTeX]
    @InProceedings{eager,
    Title = {{EAGER}: extending automatically gazetteers for entity recognition},
    Author = {Omer Gunes and Christian Schallhart and Tim Furche and Jens Lehmann and Axel-Cyrille Ngonga Ngomo},
    Booktitle = {Proceedings of the 3rd Workshop on the People's Web Meets NLP: Collaboratively Constructed Semantic Resources and their Applications to NLP},
    Year = {2012},
    Keywords = {2012 lehmann MOLE group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page},
    Owner = {jl},
    Timestamp = {2013.01.02}
    }

  • C. Guéret, P. T. Groth, C. Stadler, and J. Lehmann, “Assessing Linked Data Mappings Using Network Measures,” in Proceedings of the 9th Extended Semantic Web Conference, 2012, pp. 87-102.
    [BibTeX] [Download PDF]
    @InProceedings{Gueret2012,
    Title = {Assessing Linked Data Mappings Using Network Measures},
    Author = {Christophe Gu{\'e}ret and Paul T. Groth and Claus Stadler and Jens Lehmann},
    Booktitle = {Proceedings of the 9th Extended Semantic Web Conference},
    Year = {2012},
    Pages = {87--102},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {7295},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2012/linked_mapping_qa.pdf},
    Date-modified = {2012-12-02 13:05:19 +0000},
    Keywords = {2012 group_aksw group_mole event_ESWC sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed stadler lehmann MOLE},
    Owner = {jl},
    Timestamp = {2012.07.18},
    Url = {http://jens-lehmann.org/files/2012/linked_mapping_qa.pdf}
    }

  • S. Hellmann, C. Stadler, and J. Lehmann, “The German DBpedia: A Sense Repository for Linking Entities,” 2012.
    [BibTeX]
    @InCollection{Hellmann2012GermanDBpedia,
    Title = {The German {DBpedia}: A Sense Repository for Linking Entities},
    Author = {Hellmann, Sebastian and Stadler, Claus and Lehmann, Jens},
    Year = {2012},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2012/german_dbpedia_sense_repository.pdf},
    Crossref = {Springer-ldl},
    Date-modified = {2012-12-02 13:03:21 +0000},
    Keywords = {2012 hellmann kilt stadler lehmann group_aksw MOLE nif sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page kilt},
    Owner = {jl},
    Timestamp = {2012.07.18}
    }

2011

  • M. Morsey, J. Lehmann, S. Auer, and A. Ngonga Ngomo, “DBpedia SPARQL Benchmark — Performance Assessment with Real Queries on Real Data,” in ISWC 2011, 2011.
    [BibTeX] [Download PDF]
    @InProceedings{Morsey2011,
    Title = {{DB}pedia {SPARQL} {B}enchmark -- {P}erformance {A}ssessment with {R}eal {Q}ueries on {R}eal {D}ata},
    Author = {Mohamed Morsey and Jens Lehmann and S{\"o}ren Auer and Axel-Cyrille {Ngonga Ngomo}},
    Booktitle = {ISWC 2011},
    Year = {2011},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2011/dbpsb.pdf},
    Date-modified = {2012-12-02 13:10:49 +0000},
    Keywords = {2011 group_aksw SIMBA sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 morsey ngonga lehmann auer MOLE dbpsb limes},
    Owner = {ngonga},
    Timestamp = {2011.08.24},
    Url = {http://jens-lehmann.org/files/2011/dbpsb.pdf}
    }

  • A. Ngonga Ngomo, J. Lehmann, S. Auer, and K. Höffner, “RAVEN: Active Learning of Link Specifications,” in Proceedings of the Ontology Matching Workshop (co-located with ISWC), 2011.
    [BibTeX] [Download PDF]
    @InProceedings{NGO+11a,
    Title = {{RAVEN}: {A}ctive Learning of Link Specifications},
    Author = {Axel-Cyrille {Ngonga Ngomo} and Jens Lehmann and S{\"o}ren Auer and Konrad H{\"o}ffner},
    Booktitle = {Proceedings of the Ontology Matching Workshop (co-located with ISWC)},
    Year = {2011},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2011/raven.pdf},
    Date-modified = {2012-12-02 13:10:38 +0000},
    Keywords = {2011 group_aksw SIMBA MOLE sys:relevantFor:infai sys:relevantFor:bis ngonga lehmann auer hoeffner scms},
    Owner = {ngonga},
    Timestamp = {2011.09.12},
    Url = {http://jens-lehmann.org/files/2011/raven.pdf}
    }

  • A. Zaveri, R. Pietrobon, S. Auer, J. Lehmann, M. Martin, and T. Ermilov, “ReDD-Observatory: Using the Web of Data for Evaluating the Research-Disease Disparity,” in Proc. of the IEEE/WIC/ACM International Conference on Web Intelligence, 2011.
    [BibTeX] [Download PDF]
    @InProceedings{zaveri2011-icwi,
    Title = {ReDD-Observatory: Using the Web of Data for Evaluating the Research-Disease Disparity},
    Author = {Amrapali Zaveri and Ricardo Pietrobon and S{\"o}ren Auer and Jens Lehmann and Michael Martin and Timofey Ermilov},
    Booktitle = {Proc. of the IEEE/WIC/ACM International Conference on Web Intelligence},
    Year = {2011},
    Bdsk-url-1 = {http://liris.cnrs.fr/~wi-iat11/WI_2011/accepted-papers/},
    Date-modified = {2013-06-04 19:27:28 +0000},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 2011 martin zaveri pietrobon ermilov lehmann auer redd MOLE cubeviz},
    Owner = {michael},
    Timestamp = {2011.06.20},
    Url = {http://liris.cnrs.fr/~wi-iat11/WI_2011/accepted-papers/}
    }

  • J. Lehmann and L. Bühmann, “AutoSPARQL: Let Users Query Your Knowledge Base,” in Proceedings of ESWC 2011, 2011.
    [BibTeX] [Download PDF]
    @InProceedings{lehmann2011,
    Title = {{AutoSPARQL}: Let Users Query Your Knowledge Base},
    Author = {Jens Lehmann and Lorenz B{\"u}hmann},
    Booktitle = {Proceedings of ESWC 2011},
    Year = {2011},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2011/autosparql_eswc.pdf},
    Date-modified = {2012-12-02 12:24:52 +0000},
    Keywords = {2011 group_aksw MOLE event_eswc lehmann buehmann sys:relevantFor:infai dllearner sys:relevantFor:bis sys:relevantFor:lod2 lod2page autosparql peer-reviewed dllearner},
    Owner = {jl},
    Timestamp = {2011.03.22},
    Url = {http://jens-lehmann.org/files/2011/autosparql_eswc.pdf}
    }

  • J. Lehmann, S. Auer, L. Bühmann, and S. Tramp, “Class expression learning for ontology engineering,” Journal of Web Semantics, vol. 9, pp. 71-81, 2011.
    [BibTeX] [Download PDF]
    @Article{celoe,
    Title = {Class expression learning for ontology engineering},
    Author = {Jens Lehmann and S{\"o}ren Auer and Lorenz B{\"u}hmann and Sebastian Tramp},
    Journal = {Journal of Web Semantics},
    Year = {2011},
    Pages = {71--81},
    Volume = {9},
    Address = {Amsterdam, The Netherlands, The Netherlands},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2011/celoe.pdf},
    Date-modified = {2012-12-02 12:25:06 +0000},
    Keywords = {2011 group_aksw tramp lehmann buehmann auer seebiproject_OntoWiki dllearner sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed MOLE},
    Owner = {jl},
    Publisher = {Elsevier Science Publishers B. V.},
    Timestamp = {2011.02.18},
    Url = {http://jens-lehmann.org/files/2011/celoe.pdf}
    }

  • S. Auer, J. Lehmann, and A. Ngonga Ngomo, “Introduction to Linked Data and Its Lifecycle on the Web,” in Reasoning Web. Semantic Technologies for the Web of Data, A. Polleres, C. d’Amato, M. Arenas, S. Handschuh, P. Kroner, S. Ossowski, and P. Patel-Schneider, Eds., Springer Berlin Heidelberg, 2011, vol. 6848, pp. 1-75. doi:10.1007/978-3-642-23032-5_1
    [BibTeX] [Download PDF]
    @InCollection{AUE+11,
    Title = {Introduction to Linked Data and Its Lifecycle on the Web},
    Author = {Auer, S{\"o}ren and Lehmann, Jens and Ngonga Ngomo, Axel-Cyrille},
    Booktitle = {Reasoning Web. Semantic Technologies for the Web of Data},
    Publisher = {Springer Berlin Heidelberg},
    Year = {2011},
    Editor = {Polleres, Axel and d'Amato, Claudia and Arenas, Marcelo and Handschuh, Siegfried and Kroner, Paula and Ossowski, Sascha and Patel-Schneider, Peter},
    Pages = {1-75},
    Series = {Lecture Notes in Computer Science},
    Volume = {6848},
    Bdsk-url-1 = {http://dx.doi.org/10.1007/978-3-642-23032-5_1},
    Doi = {10.1007/978-3-642-23032-5_1},
    ISBN = {978-3-642-23031-8},
    Keywords = {group_aksw sys:relevantFor:infai sys:relevantFor:bis SIMBA ngonga auer lehmann MOLE 2011},
    Url = {http://dx.doi.org/10.1007/978-3-642-23032-5_1}
    }

  • C. Guéret, P. Groth, C. Stadler, and J. Lehmann, “Linked Data Quality Assessment through Network Analysis,” in ISWC 2011 Posters and Demos, 2011.
    [BibTeX] [Download PDF]
    @InProceedings{iswc-11-pd-linkqa,
    Title = {Linked Data Quality Assessment through Network Analysis},
    Author = {Christophe Gu{\'e}ret and Paul Groth and Claus Stadler and Jens Lehmann},
    Booktitle = {ISWC 2011 Posters and Demos},
    Year = {2011},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2011/iswc_pd_linkqa.pdf},
    Keywords = {2011 lehmann MOLE group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page stadler},
    Owner = {jl},
    Timestamp = {2013.01.02},
    Url = {http://jens-lehmann.org/files/2011/iswc_pd_linkqa.pdf}
    }

  • S. Hellmann, J. Lehmann, and S. Auer, “Learning of OWL Class Expressions on Very Large Knowledge Bases and its Applications,” in Semantic Services, Interoperability and Web Applications: Emerging Concepts, IGI Global, 2011, pp. 104-130. doi:10.4018/978-1-60960-593-3
    [BibTeX]
    @InCollection{sh_scalability_2011,
    Title = {Learning of {OWL} Class Expressions on Very Large Knowledge Bases and its Applications.},
    Author = {Sebastian Hellmann and Jens Lehmann and S{\"o}ren Auer},
    Booktitle = {Semantic Services, Interoperability and Web Applications: Emerging Concepts},
    Publisher = {IGI Global},
    Year = {2011},
    Chapter = {5},
    Pages = {104-130},
    Bdsk-url-1 = {http://dx.doi.org/10.4018/978-1-60960-593-3},
    Date-modified = {2012-12-02 12:59:52 +0000},
    Doi = {10.4018/978-1-60960-593-3},
    Keywords = {peer-reviewed 2011 hellmann kilt lehmann auer group_aksw MOLE sys:relevantFor:infai sys:relevantFor:bis dllearner},
    Owner = {sebastian},
    Timestamp = {2011.06.27}
    }

  • J. Iglesias and J. Lehmann, “Towards Integrating Fuzzy Logic Capabilities into an Ontology-based Inductive Logic Programming Framework,” in Proc. of the 11th International Conference on Intelligent Systems Design and Applications (ISDA), 2011.
    [BibTeX] [Download PDF]
    @InProceedings{iglesias-j-2011--a,
    Title = {Towards Integrating Fuzzy Logic Capabilities into an Ontology-based Inductive Logic Programming Framework},
    Author = {Josu{\'e} Iglesias and Jens Lehmann},
    Booktitle = {Proc. of the 11th International Conference on Intelligent Systems Design and Applications (ISDA)},
    Year = {2011},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2011/dllearner_fuzzy.pdf},
    Date-modified = {2012-12-02 13:03:12 +0000},
    Keywords = {2011 group_aksw lehmann MOLE sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page dllearner},
    Owner = {jl},
    Timestamp = {2011.08.23},
    Url = {http://jens-lehmann.org/files/2011/dllearner_fuzzy.pdf}
    }

2010

  • J. Lehmann and L. Bühmann, “ORE – A Tool for Repairing and Enriching Knowledge Bases,” in Proceedings of the 9th International Semantic Web Conference (ISWC2010), 2010, pp. 177-193. doi:10.1007/978-3-642-17749-1_12
    [BibTeX] [Download PDF]
    @InProceedings{lehmann-2010-iswc,
    Title = {{ORE} - A Tool for Repairing and Enriching Knowledge Bases},
    Author = {Jens Lehmann and Lorenz B{\"u}hmann},
    Booktitle = {Proceedings of the 9th International Semantic Web Conference (ISWC2010)},
    Year = {2010},
    Pages = {177--193},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2010/ORE/public.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1007/978-3-642-17749-1_12},
    Date-modified = {2012-12-02 13:02:02 +0000},
    Doi = {10.1007/978-3-642-17749-1_12},
    Keywords = {2010 event_iswc group_aksw MOLE lehmann buehmann sys:relevantFor:infai dllearner ore sys:relevantFor:bis sys:relevantFor:lod2 lod2page peer-reviewed ontowiki_eu},
    Owner = {seebi},
    Timestamp = {2010.08.30},
    Url = {http://svn.aksw.org/papers/2010/ORE/public.pdf}
    }

  • J. Lehmann and P. Hitzler, “Concept Learning in Description Logics Using Refinement Operators,” Machine Learning, vol. 78, iss. 1-2, pp. 203-250, 2010. doi:10.1007/s10994-009-5146-2
    [BibTeX] [Download PDF]
    @Article{mlj,
    Title = {Concept Learning in Description Logics Using Refinement Operators},
    Author = {Jens Lehmann and Pascal Hitzler},
    Journal = {Machine Learning},
    Year = {2010},
    Number = {1-2},
    Pages = {203--250},
    Volume = {78},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2010/concept_learning_mlj.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1007/s10994-009-5146-2},
    Biburl = {http://www.bibsonomy.org/bibtex/211d55726073b12be444245965f12d57b/jens},
    Date-modified = {2012-12-02 13:01:18 +0000},
    Description = {publications Jens Lehmann},
    Doi = {10.1007/s10994-009-5146-2},
    Keywords = {2010 event_mlj group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis peer-reviewed ontowiki_eu lehmann},
    Publisher = {Springer},
    Url = {http://jens-lehmann.org/files/2010/concept_learning_mlj.pdf}
    }

  • C. Stadler, M. Martin, J. Lehmann, and S. Hellmann, “Update Strategies for DBpedia Live,” in 6th Workshop on Scripting and Development for the Semantic Web, Co-located with ESWC 2010, May 30/31, 2010, Crete, Greece, 2010.
    [BibTeX] [Abstract] [Download PDF]
    Wikipedia is one of the largest public information spaces with a huge user community, which collaboratively works on the largest online encyclopedia. Their users add or edit up to 150 thousand wiki pages per day. The DBpedia project extracts RDF from Wikipedia and interlinks it with other knowledge bases. In the DBpedia live extraction mode, Wikipedia edits are instantly processed to update information in DBpedia. Due to the high number of edits and the growth of Wikipedia, the update process has to be very efficient and scalable. In this paper, we present different strategies to tackle this challenging problem and describe how we modified the DBpedia live extraction algorithm to work more efficiently.

    @InProceedings{stadler-c-2010--a,
    Title = {{U}pdate {S}trategies for {DB}pedia {L}ive},
    Author = {Claus Stadler and Michael Martin and Jens Lehmann and Sebastian Hellmann},
    Booktitle = {6th Workshop on Scripting and Development for the Semantic Web, Co-located with ESWC 2010, May 30/31, 2010, Crete, Greece},
    Year = {2010},
    Abstract = {Wikipedia is one of the largest public information spaces with a huge user community, which collaboratively works on the largest online encyclopedia. Their users add or edit up to 150 thousand wiki pages per day. The DBpedia project extracts RDF from Wikipedia and interlinks it with other knowledge bases. In the DBpedia live extraction mode, Wikipedia edits are instantly processed to update information in DBpedia. Due to the high number of edits and the growth of Wikipedia, the update process has to be very efficient and scalable. In this paper, we present different strategies to tackle this challenging problem and describe how we modified the DBpedia live extraction algorithm to work more efficiently.},
    Bdsk-url-1 = {http://www.semanticscripting.org/SFSW2010/papers/sfsw2010_submission_5.pdf},
    Date-modified = {2012-12-02 12:30:10 +0000},
    Keywords = {sys:relevantFor:bis sys:relevantFor:infai martin stadler hellmann kilt lehmann dbpedia event_sfsw 2010 group_aksw peer-reviewed ontowiki_eu MOLE},
    Owner = {michael},
    Timestamp = {2010.06.24},
    Url = {http://www.semanticscripting.org/SFSW2010/papers/sfsw2010_submission_5.pdf}
    }

  • J. Lehmann, “Learning OWL Class Expressions,” PhD Thesis, University of Leipzig, 2010.
    [BibTeX] [Download PDF]
    @PhdThesis{jl_2010_phd_thesis,
    Title = {Learning {OWL} Class Expressions},
    Author = {Jens Lehmann},
    School = {University of Leipzig},
    Year = {2010},
    Note = {PhD in Computer Science},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2010/phd_thesis.pdf},
    Date-modified = {2012-12-02 13:02:54 +0000},
    Keywords = {2010 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2010/phd_thesis.pdf}
    }

  • J. Lehmann, Learning OWL Class Expressions, P. Hitzler, Ed., AKA Heidelberg, 2010, vol. 6.
    [BibTeX]
    @Book{jl_2010_learningowlclassexpressions,
    Title = {Learning OWL Class Expressions},
    Author = {Jens Lehmann},
    Editor = {Pascal Hitzler},
    Publisher = {AKA Heidelberg},
    Year = {2010},
    Note = {ISBN 978-3-89838-336-3},
    Series = {Studies on the Semantic Web},
    Volume = {6},
    Date-modified = {2012-12-02 13:03:05 +0000},
    Keywords = {2010 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Pages = {265}
    }

  • S. Auer and J. Lehmann, “Making the Web a Data Washing Machine—Creating Knowledge out of Interlinked Data,” Semantic Web Journal, 2010.
    [BibTeX] [Download PDF]
    @Article{auer-swj-2010,
    Title = {Making the Web a Data Washing Machine---Creating Knowledge out of Interlinked Data},
    Author = {S{\"o}ren Auer and Jens Lehmann},
    Journal = {Semantic Web Journal},
    Year = {2010},
    Bdsk-url-1 = {http://www.jens-lehmann.org/files/2010/washing_machine_swj.pdf},
    Date-modified = {2012-12-02 13:07:21 +0000},
    Keywords = {2010 group_aksw auer lehmann MOLE event_swj sys:relevantFor:infai sys:relevantFor:bis seebiproject_OntoWiki peer-reviewed ontowiki_eu dllearner},
    Timestamp = {2010.01.17},
    Url = {http://www.jens-lehmann.org/files/2010/washing_machine_swj.pdf}
    }

  • S. Auer, M. Weidl, J. Lehmann, A. J. Zaveri, and K. Choi, “I18n of Semantic Web Applications,” in Proceedings of the 9th International Semantic Web Conference (ISWC2010), Berlin / Heidelberg, 2010. doi:10.1007/978-3-642-17749-1_1
    [BibTeX] [Download PDF]
    @InProceedings{auer-s-2010-iswc,
    Title = {I18n of Semantic Web Applications},
    Author = {S{\"o}ren Auer and Matthias Weidl and Jens Lehmann and Amrapali J. Zaveri and Key-Sun Choi},
    Booktitle = {Proceedings of the 9th International Semantic Web Conference (ISWC2010)},
    Year = {2010},
    Address = {Berlin / Heidelberg},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2010/ISWC_I18n/public.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1007/978-3-642-17749-1_1},
    Date-modified = {2012-12-02 12:57:00 +0000},
    Doi = {10.1007/978-3-642-17749-1_1},
    Keywords = {2010 event_iswc group_aksw auer lehmann zaveri sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page ontowiki_eu peer-reviewed MOLE},
    Owner = {seebi},
    Timestamp = {2010.08.21},
    Url = {http://svn.aksw.org/papers/2010/ISWC_I18n/public.pdf}
    }

  • S. Hellmann, J. Unbehauen, and J. Lehmann, “HANNE – A Holistic Application for Navigational Knowledge Engineering,” in Posters and Demos of ISWC 2010, 2010.
    [BibTeX] [Download PDF]
    @InProceedings{hanne,
    Title = {HANNE - A Holistic Application for Navigational Knowledge Engineering},
    Author = {Sebastian Hellmann and J{\"o}rg Unbehauen and Jens Lehmann},
    Booktitle = {Posters and Demos of ISWC 2010},
    Year = {2010},
    Bdsk-url-1 = {http://iswc2010.semanticweb.org/pdf/522.pdf},
    Date-modified = {2012-12-05 07:35:26 +0000},
    Keywords = {2010 group_aksw event_iswc hellmann kilt unbehauen lehmann MOLE sys:relevantFor:infai sys:relevantFor:bis dllearner nke},
    Url = {http://iswc2010.semanticweb.org/pdf/522.pdf}
    }

2009

  • J. Lehmann, “DL-Learner: Learning Concepts in Description Logics,” Journal of Machine Learning Research (JMLR), vol. 10, pp. 2639-2642, 2009.
    [BibTeX] [Download PDF]
    @Article{dllearner_jmlr,
    Title = {{DL-Learner:} Learning Concepts in Description Logics},
    Author = {Jens Lehmann},
    Journal = {Journal of Machine Learning Research (JMLR)},
    Year = {2009},
    Pages = {2639--2642},
    Volume = {10},
    Bdsk-url-1 = {http://www.jmlr.org/papers/volume10/lehmann09a/lehmann09a.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/2d863a06ce17d41d0e6695f0edac74f5f/jens},
    Date-modified = {2012-12-02 13:05:46 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {2009 group_aksw dllearner MOLE event_jmlr sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://www.jmlr.org/papers/volume10/lehmann09a/lehmann09a.pdf}
    }

  • J. Lehmann, C. Bizer, G. Kobilarov, S. Auer, C. Becker, R. Cyganiak, and S. Hellmann, “DBpedia – A Crystallization Point for the Web of Data,” Journal of Web Semantics, vol. 7, iss. 3, pp. 154-165, 2009. doi:10.1016/j.websem.2009.07.002
    [BibTeX] [Download PDF]
    @Article{dbpedia_jws_09,
    Title = {{DB}pedia - A Crystallization Point for the Web of Data},
    Author = {Jens Lehmann and Chris Bizer and Georgi Kobilarov and S{\"o}ren Auer and Christian Becker and Richard Cyganiak and Sebastian Hellmann},
    Journal = {Journal of Web Semantics},
    Year = {2009},
    Number = {3},
    Pages = {154--165},
    Volume = {7},
    Address = {Amsterdam, The Netherlands},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2009/dbpedia_jws.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1016/j.websem.2009.07.002},
    Biburl = {http://www.bibsonomy.org/bibtex/2cd252e88108ce7f1635d6b0c9dcb5ae2/jens},
    Date-modified = {2012-12-02 13:06:17 +0000},
    Description = {publications Jens Lehmann},
    Doi = {10.1016/j.websem.2009.07.002},
    Keywords = {2009 event_jws group_aksw dbpedia MOLE lehmann auer hellmann kilt sys:relevantFor:infai sys:relevantFor:bis peer-reviewed ontowiki_eu},
    Publisher = {Elsevier Science Publishers B. V.},
    Url = {http://jens-lehmann.org/files/2009/dbpedia_jws.pdf}
    }

  • J. Lehmann and C. Haase, “Ideal Downward Refinement in the EL Description Logic,” in Inductive Logic Programming, 19th International Conference, ILP 2009, Leuven, Belgium, 2009.
    [BibTeX] [Download PDF]
    @InProceedings{el_operator,
    Title = {Ideal Downward Refinement in the {EL} Description Logic},
    Author = {Jens Lehmann and Christoph Haase},
    Booktitle = {Inductive Logic Programming, 19th International Conference, ILP 2009, Leuven, Belgium},
    Year = {2009},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2009/el_ilp.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/24dff6b9b0c73296aa64d12369dffa783/jens},
    Date-modified = {2012-12-02 13:05:37 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {2009 event_ilp group_aksw dllearner mole sys:relevantFor:infai sys:relevantFor:bis lehmann MOLE},
    Url = {http://jens-lehmann.org/files/2009/el_ilp.pdf}
    }

  • J. Lehmann and C. Haase, “Ideal Downward Refinement in the EL Description Logic,” University of Leipzig, Tech. Rep., 2009.
    [BibTeX] [Download PDF]
    @TechReport{el_operator_techreport,
    Title = {Ideal Downward Refinement in the {EL} Description Logic},
    Author = {Jens Lehmann and Christoph Haase},
    Institution = {University of Leipzig},
    Year = {2009},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2009_ideal_operator_el_tr.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/2587ecc679dc1c513093512cdef151d04/jens},
    Date-modified = {2012-12-02 13:05:29 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {2009 group_aksw dllearner mole sys:relevantFor:infai sys:relevantFor:bis lehmann MOLE},
    Url = {http://jens-lehmann.org/files/2009_ideal_operator_el_tr.pdf}
    }

  • S. Auer, S. Dietzold, J. Lehmann, S. Hellmann, and D. Aumueller, “Triplify: Light-weight linked data publication from relational databases,” in Proceedings of the 18th International Conference on World Wide Web, WWW 2009, Madrid, Spain, April 20-24, 2009, 2009, pp. 621-630. doi:10.1145/1526709.1526793
    [BibTeX] [Download PDF]
    @InProceedings{triplify_www,
    Title = {Triplify: Light-weight linked data publication from relational databases},
    Author = {S{\"o}ren Auer and Sebastian Dietzold and Jens Lehmann and Sebastian Hellmann and David Aumueller},
    Booktitle = {Proceedings of the 18th International Conference on World Wide Web, {WWW} 2009, Madrid, Spain, April 20-24, 2009},
    Year = {2009},
    Editor = {Juan Quemada and Gonzalo Le{\'o}n and Yo{\"e}lle S. Maarek and Wolfgang Nejdl},
    Pages = {621--630},
    Publisher = {ACM},
    Bdsk-url-1 = {http://doi.acm.org/10.1145/1526709.1526793},
    Bdsk-url-2 = {http://dx.doi.org/10.1145/1526709.1526793},
    Bibdate = {2009-05-05},
    Bibsource = {DBLP, http://dblp.uni-trier.de/db/conf/www/www2009.html#AuerDLHA09},
    Biburl = {http://www.bibsonomy.org/bibtex/2154a270c13c0f525a2f1ce82bb8aad32/jens},
    Date-modified = {2012-12-02 12:30:19 +0000},
    Description = {publications Jens Lehmann},
    Doi = {10.1145/1526709.1526793},
    ISBN = {978-1-60558-487-4},
    Keywords = {2009 event_www group_aksw auer tramp lehmann hellmann kilt aumueller sys:relevantFor:infai sys:relevantFor:bis triplify seebiproject_Triplify ontowiki_eu peer-reviewed triplify MOLE},
    Url = {http://doi.acm.org/10.1145/1526709.1526793}
    }

  • S. Hellmann, C. Stadler, J. Lehmann, and S. Auer, “DBpedia Live Extraction,” in Proc. of 8th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), 2009, pp. 1209-1223. doi:10.1007/978-3-642-05151-7_33
    [BibTeX] [Download PDF]
    @InProceedings{hellmann_odbase_dbpedia_live_09,
    Title = {{DBpedia} Live Extraction},
    Author = {Sebastian Hellmann and Claus Stadler and Jens Lehmann and S{\"o}ren Auer},
    Booktitle = {Proc. of 8th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE)},
    Year = {2009},
    Pages = {1209--1223},
    Series = {Lecture Notes in Computer Science},
    Volume = {5871},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2009/ODBASE_LiveExtraction/dbpedia_live_extraction_public.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1007/978-3-642-05151-7_33},
    Biburl = {http://www.bibsonomy.org/bibtex/27f9a95be2bad73ea9ec9e7e48af4c9ec/jens},
    Date-modified = {2012-12-02 13:04:11 +0000},
    Description = {publications Jens Lehmann},
    Doi = {10.1007/978-3-642-05151-7_33},
    Keywords = {2009 event_odbase hellmann kilt stadler lehmann auer group_aksw dbpedia MOLE sys:relevantFor:infai sys:relevantFor:bis peer-reviewed ontowiki_eu},
    Url = {http://svn.aksw.org/papers/2009/ODBASE_LiveExtraction/dbpedia_live_extraction_public.pdf}
    }

  • S. Auer, J. Lehmann, and C. Bizer, “Semantische Mashups auf Basis Vernetzter Daten,” in Social Semantic Web, A. Blumauer and T. Pellegrini, Eds., Springer, 2009, pp. 259-286.
    [BibTeX] [Download PDF]
    @InCollection{ssw_beitrag,
    Title = {Semantische Mashups auf Basis Vernetzter Daten},
    Author = {S{\"o}ren Auer and Jens Lehmann and Chris Bizer},
    Booktitle = {Social Semantic Web},
    Publisher = {Springer},
    Year = {2009},
    Editor = {Andreas Blumauer and Tassilo Pellegrini},
    Pages = {259--286},
    Series = {X.media.press},
    Bdsk-url-1 = {http://dx.doi.org/10.1007/978-3-540-72216-8_14},
    Bibdate = {2008-11-09},
    Bibsource = {DBLP, http://dblp.uni-trier.de/db/series/xmedia/social2009.html#AuerLB09},
    Biburl = {http://www.bibsonomy.org/bibtex/24f35f63cdf457b91aaa9f720a0b2d072/jens},
    Date-modified = {2012-12-02 12:59:26 +0000},
    Description = {publications Jens Lehmann},
    ISBN = {978-3-540-72215-1},
    Keywords = {language_deutsch 2009 group_aksw ontowiki dbpedia dllearner sys:relevantFor:infai sys:relevantFor:bis lehmann auer},
    Url = {http://dx.doi.org/10.1007/978-3-540-72216-8_14}
    }

  • S. Auer, J. Lehmann, and S. Hellmann, “LinkedGeoData – Adding a Spatial Dimension to the Web of Data,” in Proc. of 8th International Semantic Web Conference (ISWC), 2009. doi:10.1007/978-3-642-04930-9_46
    [BibTeX] [Download PDF]
    @InProceedings{linkedgeodata,
    Title = {{LinkedGeoData} - Adding a Spatial Dimension to the Web of Data},
    Author = {S{\"o}ren Auer and Jens Lehmann and Sebastian Hellmann},
    Booktitle = {Proc. of 8th International Semantic Web Conference (ISWC)},
    Year = {2009},
    Bdsk-url-1 = {http://www.informatik.uni-leipzig.de/~auer/publication/linkedgeodata.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.1007/978-3-642-04930-9_46},
    Biburl = {http://www.bibsonomy.org/bibtex/2d7b66f26a99547a3c117570aca80e4a0/jens},
    Date-modified = {2012-12-02 13:01:44 +0000},
    Description = {publications Jens Lehmann},
    Doi = {10.1007/978-3-642-04930-9_46},
    Keywords = {2009 event_iswc group_aksw linkedgeodata mole hellmann kilt sys:relevantFor:infai sys:relevantFor:bis peer-reviewed ontowiki_eu lgd lehmann auer MOLE},
    Url = {http://www.informatik.uni-leipzig.de/~auer/publication/linkedgeodata.pdf}
    }

  • P. Heim, S. Hellmann, J. Lehmann, S. Lohmann, and T. Stegemann, “RelFinder: Revealing Relationships in RDF Knowledge Bases,” in Proceedings of the 3rd International Conference on Semantic and Media Technologies (SAMT), 2009, pp. 182-187.
    [BibTeX] [Download PDF]
    @InProceedings{2009_relfinder,
    Title = {{RelFinder}: Revealing Relationships in {RDF} Knowledge Bases},
    Author = {Philipp Heim and Sebastian Hellmann and Jens Lehmann and Steffen Lohmann and Timo Stegemann},
    Booktitle = {Proceedings of the 3rd International Conference on Semantic and Media Technologies (SAMT)},
    Year = {2009},
    Pages = {182--187},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {5887},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2009/relfinder_samt.pdf},
    Date-modified = {2012-12-02 12:26:30 +0000},
    Keywords = {2009 group_aksw event_SAMT sys:relevantFor:infai sys:relevantFor:bis lehmann hellmann kilt MOLE},
    Url = {http://jens-lehmann.org/files/2009/relfinder_samt.pdf}
    }

  • S. Hellmann, J. Lehmann, and S. Auer, “Learning of OWL Class Descriptions on Very Large Knowledge Bases,” International Journal on Semantic Web and Information Systems, vol. 5, iss. 2, pp. 25-48, 2009. doi:10.4018/jswis.2009040102
    [BibTeX] [Download PDF]
    @Article{hellmann_ijswis_09,
    Title = {Learning of {OWL} Class Descriptions on Very Large Knowledge Bases},
    Author = {Sebastian Hellmann and Jens Lehmann and S{\"o}ren Auer},
    Journal = {International Journal on Semantic Web and Information Systems},
    Year = {2009},
    Number = {2},
    Pages = {25--48},
    Volume = {5},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2009_dllearner_sparql.pdf},
    Bdsk-url-2 = {http://dx.doi.org/10.4018/jswis.2009040102},
    Biburl = {http://www.bibsonomy.org/bibtex/2ede309d0985a6ce5a8c75d3c5a2df30e/jens},
    Date-modified = {2012-12-02 13:04:19 +0000},
    Description = {publications Jens Lehmann},
    Doi = {10.4018/jswis.2009040102},
    Keywords = {2009 group_aksw dllearner MOLE hellmann kilt lehmann auer event_IJSWIS sys:relevantFor:infai sys:relevantFor:bis peer-reviewed ontowiki_eu},
    Url = {http://jens-lehmann.org/files/2009_dllearner_sparql.pdf}
    }

  • G. Kobilarov, C. Bizer, S. Auer, and J. Lehmann, “DBpedia – A Linked Data Hub and Data Source for Web Applications and Enterprises,” in Proceedings of Developers Track of 18th International World Wide Web Conference (WWW 2009), April 20th-24th, Madrid, Spain, 2009.
    [BibTeX] [Abstract] [Download PDF]
    The DBpedia project provides Linked Data identifiers for currently 2.6 million things and serves a large knowledge base of structured information. DBpedia developed into the central interlinking hub for the Linking Open Data project, its URIs are used within named entity recognition services such as OpenCalais and annotation services such as Faviki, and the BBC started using DBpedia as their central semantic backbone. DBpedia’s structured data serves as background information in the process of interlinking datasets and provides a rich source of information for application developers. Besides making the DBpedia knowledge base available as linked data and RDF dumps, we offer a Lookup Service which can be used by applications to discover URIs for identifying concepts, and a SPARQL endpoint that can be used to retrieve data from the DBpedia knowledge base for use in applications. This talk will give an introduction to DBpedia for web developers and an overview of DBpedia’s development over the last year. We will demonstrate how DBpedia URIs are used for document annotation and how Web applications can, via DBpedia, facilitate Wikipedia as a source of structured knowledge.

    @InProceedings{kobilarov-?-2009--,
    Title = {{DBpedia} - A Linked Data Hub and Data Source for Web Applications and Enterprises},
    Author = {Georgi Kobilarov and Christian Bizer and S{\"o}ren Auer and Jens Lehmann},
    Booktitle = {Proceedings of Developers Track of 18th International World Wide Web Conference (WWW 2009), April 20th-24th, Madrid, Spain},
    Year = {2009},
    Month = {April},
    Abstract = {The DBpedia project provides Linked Data identifiers for currently 2.6 million things and serves a large knowledge base of structured information. DBpedia developed into the central interlinking hub for the Linking Open Data project, its URIs are used within named entity recognition services such as OpenCalais and annotation services such as Faviki, and the BBC started using DBpedia as their central semantic backbone. DBpedia's structured data serves as background information in the process of interlinking datasets and provides a rich source of information for application developers. Besides making the DBpedia knowledge base available as linked data and RDF dumps, we offer a Lookup Service which can be used by applications to discover URIs for identifying concepts, and a SPARQL endpoint that can be used to retrieve data from the DBpedia knowledge base for use in applications. This talk will give an introduction to DBpedia for web developers and an overview of DBpedia's development over the last year. We will demonstrate how DBpedia URIs are used for document annotation and how Web applications can, via DBpedia, facilitate Wikipedia as a source of structured knowledge.},
    Bdsk-url-1 = {http://www2009.eprints.org/228/},
    Date-modified = {2012-12-02 13:02:44 +0000},
    Keywords = {2009 kobilarov bizer auer lehmann event_www group_aksw dbpedia MOLE sys:relevantFor:infai sys:relevantFor:bis},
    Owner = {alex},
    Timestamp = {2010.06.28},
    Url = {http://www2009.eprints.org/228/}
    }

2008

  • J. Lehmann and P. Hitzler, “A Refinement Operator Based Learning Algorithm for the ALC Description Logic,” in Inductive Logic Programming, 17th International Conference, ILP 2007, Corvallis, OR, USA, June 19-21, 2007, 2008, pp. 147-160.
    [BibTeX] [Download PDF]
    @InProceedings{alc_learning_algorithm,
    Title = {A Refinement Operator Based Learning Algorithm for the {$\mathcal{ALC}$} Description Logic},
    Author = {Jens Lehmann and Pascal Hitzler},
    Booktitle = {Inductive Logic Programming, 17th International Conference, ILP 2007, Corvallis, OR, USA, June 19-21, 2007},
    Year = {2008},
    Note = {Best Student Paper Award},
    Pages = {147--160},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {4894},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2008/alc_learning_algorithm.pdf},
    Date-modified = {2012-12-02 13:08:07 +0000},
    Keywords = {2007 event_ilp group_aksw dllearner sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2008/alc_learning_algorithm.pdf}
    }

  • J. Lehmann and S. Knappe, “DBpedia Navigator,” in ISWC Semantic Challenge Proceedings, 2008.
    [BibTeX] [Download PDF]
    @InProceedings{dbpedia_navigator,
    Title = {{DBpedia} {N}avigator},
    Author = {Jens Lehmann and Sebastian Knappe},
    Booktitle = {ISWC Semantic Challenge Proceedings},
    Year = {2008},
    Note = {Semantic Web Challenge, International Semantic Web Conference 2008},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2008/dbpedia_navigator.pdf},
    Keywords = {2008 group_aksw dllearner sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2008/dbpedia_navigator.pdf}
    }

  • J. Lehmann and P. Hitzler, “Foundations of Refinement Operators for Description Logics,” in Inductive Logic Programming, 17th International Conference, ILP 2007, Corvallis, OR, USA, June 19-21, 2007, 2008, pp. 161-174.
    [BibTeX] [Download PDF]
    @InProceedings{property_analysis,
    Title = {Foundations of Refinement Operators for Description Logics},
    Author = {Jens Lehmann and Pascal Hitzler},
    Booktitle = {Inductive Logic Programming, 17th International Conference, ILP 2007, Corvallis, OR, USA, June 19-21, 2007},
    Year = {2008},
    Editor = {Hendrik Blockeel and Jan Ramon and Jude W. Shavlik and Prasad Tadepalli},
    Note = {Best Student Paper Award},
    Pages = {161--174},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {4894},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2008/operator_analysis.pdf},
    Keywords = {2007 group_aksw dllearner MOLE sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2008/operator_analysis.pdf}
    }

  • J. Lehmann, S. Bader, and P. Hitzler, “Extracting Reduced Logic Programs from Artificial Neural Networks,” Applied Intelligence, 2008.
    [BibTeX] [Download PDF]
    @Article{ann_extraction,
    Title = {Extracting Reduced Logic Programs from Artificial Neural Networks},
    Author = {Jens Lehmann and Sebastian Bader and Pascal Hitzler},
    Journal = {Applied Intelligence},
    Year = {2008},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2008/ann_extraction.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/2da8ca1f0994dbac3bb46c1de25fb13fb/jens},
    Date-modified = {2012-12-02 13:07:51 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {2008 group_aksw sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2008/ann_extraction.pdf}
    }

  • S. Hellmann, J. Lehmann, and S. Auer, “Learning of OWL Class Descriptions on Very Large Knowledge Bases,” in Proceedings of the Poster and Demonstration Session at the 7th International Semantic Web Conference (ISWC2008), Karlsruhe, Germany, October 28, 2008, 2008.
    [BibTeX] [Download PDF]
    @InProceedings{sh_scalability_poster_08,
    Title = {Learning of {OWL} Class Descriptions on Very Large Knowledge Bases},
    Author = {Sebastian Hellmann and Jens Lehmann and S{\"o}ren Auer},
    Booktitle = {Proceedings of the Poster and Demonstration Session at the 7th International Semantic Web Conference ({ISWC2008}), Karlsruhe, Germany, October 28, 2008},
    Year = {2008},
    Editor = {Christian Bizer and Anupam Joshi},
    Publisher = {CEUR-WS.org},
    Series = {CEUR Workshop Proceedings},
    Volume = {401},
    Bdsk-url-1 = {http://ceur-ws.org/Vol-401/iswc2008pd_submission_83.pdf},
    Bibdate = {2008-10-29},
    Bibsource = {DBLP, http://dblp.uni-trier.de/db/conf/semweb/iswc2008p.html#HellmannLA08},
    Biburl = {http://www.bibsonomy.org/bibtex/267bb52f69f2a430165d1266d6d7182de/jens},
    Date-modified = {2012-12-02 17:23:20 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {2008 group_aksw dllearner hellmann kilt lehmann auer sys:relevantFor:infai sys:relevantFor:bis event_iswc},
    Url = {http://ceur-ws.org/Vol-401/iswc2008pd_submission_83.pdf}
    }

  • S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives, “DBpedia: A Nucleus for a Web of Open Data,” in Proceedings of the 6th International Semantic Web Conference (ISWC), 2008, pp. 722-735. doi:10.1007/978-3-540-76298-0_52
    [BibTeX]
    @InProceedings{dbpedia_iswc,
    Title = {{DB}pedia: A Nucleus for a Web of Open Data},
    Author = {S{\"o}ren Auer and Chris Bizer and Georgi Kobilarov and Jens Lehmann and Richard Cyganiak and Zachary Ives},
    Booktitle = {Proceedings of the 6th International Semantic Web Conference (ISWC)},
    Year = {2008},
    Pages = {722--735},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {4825},
    Bdsk-url-1 = {http://dx.doi.org/10.1007/978-3-540-76298-0_52},
    Biburl = {http://www.bibsonomy.org/bibtex/267584f8869a3211f5ae708e3757c7242/jens},
    Date-modified = {2012-12-02 17:24:20 +0000},
    Description = {publications Jens Lehmann},
    Doi = {10.1007/978-3-540-76298-0_52},
    Keywords = {2008 event_iswc group_aksw dbpedia sys:relevantFor:infai sys:relevantFor:bis peer-reviewed ontowiki_eu auer lehmann}
    }

2007

  • T. Riechert, K. Lauenroth, and J. Lehmann, “Semantisch unterstütztes Requirements Engineering,” in Proceedings of the SABRE-07 SoftWiki Workshop, 2007.
    [BibTeX] [Download PDF]
    @InProceedings{swore_sabre,
    Title = {Semantisch unterst{\"u}tztes Requirements Engineering},
    Author = {Thomas Riechert and Kim Lauenroth and Jens Lehmann},
    Booktitle = {Proceedings of the SABRE-07 SoftWiki Workshop},
    Year = {2007},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2007/swore_sabre.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/203949b1faf8234885a317d498a4cac69/jens},
    Date-modified = {2012-12-02 12:58:53 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {language_deutsch 2007 group_aksw softwiki sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2007/swore_sabre.pdf}
    }

  • T. Riechert, K. Lauenroth, J. Lehmann, and S. Auer, “Towards Semantic based Requirements Engineering,” in Proceedings of the 7th International Conference on Knowledge Management (I-KNOW), 2007.
    [BibTeX] [Download PDF]
    @InProceedings{swore,
    Title = {Towards Semantic based Requirements Engineering},
    Author = {Thomas Riechert and Kim Lauenroth and Jens Lehmann and S{\"o}ren Auer},
    Booktitle = {Proceedings of the 7th International Conference on Knowledge Management (I-KNOW)},
    Year = {2007},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2007/swore.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/2c10c6638dfd115382cdff7dbd853de63/jens},
    Date-modified = {2012-12-02 12:59:02 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {2007 event_i-know group_aksw softwiki sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2007/swore.pdf}
    }

  • S. Auer, S. Dietzold, J. Lehmann, and T. Riechert, “OntoWiki: A Tool for Social, Semantic Collaboration,” in Proceedings of the Workshop on Social and Collaborative Construction of Structured Knowledge (CKC 2007) at the 16th International World Wide Web Conference (WWW2007) Banff, Canada, May 8, 2007, 2007.
    [BibTeX] [Download PDF]
    @InProceedings{ontowiki_www,
    Title = {Onto{W}iki: {A} Tool for Social, Semantic Collaboration},
    Author = {S{\"o}ren Auer and Sebastian Dietzold and Jens Lehmann and Thomas Riechert},
    Booktitle = {Proceedings of the Workshop on Social and Collaborative Construction of Structured Knowledge ({CKC} 2007) at the 16th International World Wide Web Conference ({WWW2007}) Banff, Canada, May 8, 2007},
    Year = {2007},
    Editor = {Natalya Fridman Noy and Harith Alani and Gerd Stumme and Peter Mika and York Sure and Denny Vrandecic},
    Publisher = {CEUR-WS.org},
    Series = {CEUR Workshop Proceedings},
    Volume = {273},
    Bdsk-url-1 = {http://ceur-ws.org/Vol-273/paper_91.pdf},
    Bibdate = {2008-05-30},
    Bibsource = {DBLP, http://dblp.uni-trier.de/db/conf/www/ckc2007.html#AuerDLR07},
    Biburl = {http://www.bibsonomy.org/bibtex/2e646431fba0e629258b74292ab8fcffd/jens},
    Description = {publications Jens Lehmann},
    Keywords = {2007 group_aksw auer tramp lehmann riechert ontowiki sys:relevantFor:infai sys:relevantFor:bis seebiproject_OntoWiki},
    Url = {http://ceur-ws.org/Vol-273/paper_91.pdf}
    }

  • J. Lehmann and P. Hitzler, “Foundations of Refinement Operators for Description Logics,” Tech. Rep., University of Leipzig, 2007.
    [BibTeX] [Download PDF]
    @TechReport{property_analysis_techreport,
    Title = {Foundations of Refinement Operators for Description Logics},
    Author = {Jens Lehmann and Pascal Hitzler},
    Institution = {University of Leipzig},
    Year = {2007},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2007_operator_analysis_techreport.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/28c37a8f80a2a5e3b92230321d0599df3/jens},
    Description = {publications Jens Lehmann},
    Keywords = {2007 group_aksw dllearner sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2007_operator_analysis_techreport.pdf}
    }

  • J. Lehmann and P. Hitzler, “A Refinement Operator Based Learning Algorithm for the $\mathcal{ALC}$ Description Logic,” Tech. Rep., University of Leipzig, 2007.
    [BibTeX] [Download PDF]
    @TechReport{alc_learning_algorithm_techreport,
    Title = {A Refinement Operator Based Learning Algorithm for the {$\mathcal{ALC}$} Description Logic},
    Author = {Jens Lehmann and Pascal Hitzler},
    Institution = {University of Leipzig},
    Year = {2007},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2007_alc_learning_algorithm_techreport.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/24d90f2406fb3f2d3e257aff81e66a39d/jens},
    Date-modified = {2012-12-02 13:08:00 +0000},
    Description = {publications Jens Lehmann},
    Keywords = {2007 group_aksw dllearner sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://jens-lehmann.org/files/2007_alc_learning_algorithm_techreport.pdf}
    }

  • S. Auer and J. Lehmann, “What Have Innsbruck and Leipzig in Common? Extracting Semantics from Wiki Content,” in Proceedings of the ESWC (2007), Berlin / Heidelberg, 2007, pp. 503-517.
    [BibTeX] [Download PDF]
    @InProceedings{dbpedia_eswc,
    Title = {What Have {Innsbruck} and {Leipzig} in Common? Extracting Semantics from Wiki Content},
    Author = {S{\"o}ren Auer and Jens Lehmann},
    Booktitle = {Proceedings of the ESWC (2007)},
    Year = {2007},
    Address = {Berlin / Heidelberg},
    Pages = {503--517},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {4519},
    Bdsk-url-1 = {http://jens-lehmann.org/files/2007/wiki_extraction.pdf},
    Biburl = {http://www.bibsonomy.org/bibtex/28ddb3c68b29707dc830b9a0cd0fa6083/jens},
    Date-modified = {2012-12-02 13:06:40 +0000},
    Description = {publications Jens Lehmann},
    ISBN = {978-3-540-72666-1},
    Keywords = {2007 event_eswc group_aksw dbpedia sys:relevantFor:infai sys:relevantFor:bis lehmann auer},
    Url = {http://jens-lehmann.org/files/2007/wiki_extraction.pdf}
    }

  • J. Lehmann, “Hybrid Learning of Ontology Classes,” in Proc. of the 5th Int. Conference on Machine Learning and Data Mining MLDM, 2007, pp. 883-898.
    [BibTeX] [Download PDF]
    @InProceedings{mldm07,
    Title = {Hybrid Learning of Ontology Classes},
    Author = {Jens Lehmann},
    Booktitle = {Proc. of the 5th Int. Conference on Machine Learning and Data Mining {MLDM}},
    Year = {2007},
    Pages = {883--898},
    Publisher = {Springer},
    Series = {Lecture Notes in Computer Science},
    Volume = {4571},
    Bdsk-url-1 = {http://dx.doi.org/10.1007/978-3-540-73499-4_66},
    Bibdate = {2007-08-29},
    Bibsource = {DBLP, http://dblp.uni-trier.de/db/conf/mldm/mldm2007.html\#Lehmann07},
    Biburl = {http://www.bibsonomy.org/bibtex/241e5c2a1eecd3580f29b2ed93244e8e9/jens},
    Date-modified = {2012-12-02 13:01:33 +0000},
    Description = {publications Jens Lehmann},
    ISBN = {978-3-540-73498-7},
    Keywords = {2007 group_aksw dllearner event_mldm sys:relevantFor:infai sys:relevantFor:bis lehmann},
    Url = {http://dx.doi.org/10.1007/978-3-540-73499-4_66}
    }

  • T. Riechert, K. Lauenroth, and J. Lehmann, “SWORE – SoftWiki Ontology for Requirements Engineering,” in Proceedings of the 1st Conference on Social Semantic Web, 2007.
    [BibTeX] [Download PDF]
    @InProceedings{riechert_swore_2007,
    Title = {{SWORE} - {SoftWiki} Ontology for Requirements Engineering},
    Author = {Thomas Riechert and Kim Lauenroth and Jens Lehmann},
    Booktitle = {Proceedings of the 1st Conference on Social Semantic Web},
    Year = {2007},
    Editor = {S{\"o}ren Auer and Christian Bizer and Claudia M{\"u}ller and Anna Zhdanova},
    Month = sep,
    Publisher = {Bonner K{\"o}llen Verlag},
    Series = {{GI-Edition} - Lecture Notes in Informatics {(LNI),} {ISSN} 1617-5468},
    Volume = {P-113},
    Bdsk-url-1 = {http://aksw.org/cssw07/softwiki/1_riechert.pdf},
    Keywords = {2007 CSSW event_cssw group_aksw riechert lehmann sys:relevantFor:infai sys:relevantFor:bis},
    Owner = {seebi},
    Timestamp = {2010.01.22},
    Url = {http://aksw.org/cssw07/softwiki/1_riechert.pdf}
    }

  • J. Lehmann, J. Schüppel, and S. Auer, “Discovering Unknown Connections – the DBpedia Relationship Finder,” in Proceedings of the 1st Conference on Social Semantic Web (CSSW’07), Leipzig, Germany, September 24-28, 2007.
    [BibTeX] [Download PDF]
    @InProceedings{lehmann-schueppel-2007,
    Title = {Discovering Unknown Connections - the {DBpedia} Relationship Finder},
    Author = {Jens Lehmann and J{\"o}rg Sch{\"u}ppel and S{\"o}ren Auer},
    Booktitle = {Proceedings of 1st Conference on Social Semantic Web. Leipzig (CSSW'07), 24.-28. September},
    Year = {2007},
    Month = {September},
    Publisher = {Bonner K{\"o}llen Verlag},
    Series = {Lecture Notes in Informatics (LNI)},
    Volume = {P-113 of GI-Edition},
    Bdsk-url-1 = {http://www.informatik.uni-leipzig.de/~auer/publication/relfinder.pdf},
    Keywords = {2007 event_cssw lehmann sch{\"u}ppel auer group_aksw sys:relevantFor:infai sys:relevantFor:bis},
    Owner = {alex},
    Timestamp = {2010.06.28},
    Url = {http://www.informatik.uni-leipzig.de/~auer/publication/relfinder.pdf}
    }

  • A. Zaveri, D. Kontokostas, M. A. Sherif, L. Bühmann, M. Morsey, S. Auer, and J. Lehmann, “User-driven Quality Evaluation of DBpedia,” in Proceedings of the 9th International Conference on Semantic Systems (I-SEMANTICS 2013), 2013, pp. 97-104.
    [BibTeX] [Abstract] [Download PDF]
    Linked Open Data (LOD) comprises of an unprecedented volume of structured datasets on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced and even extracted data of relatively low quality. We present a methodology for assessing the quality of linked data resources, which comprises of a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises of the evaluation of a large number of individual resources, according to the quality problem taxonomy via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia. We identified 17 data quality problem types and 58 users assessed a total of 521 resources. Overall, 11.93\% of the evaluated DBpedia triples were identified to have some quality issues. Applying the semi-automatic component yielded a total of 222,982 triples that have a high probability to be incorrect. In particular, we found that problems such as object values being incorrectly extracted, irrelevant extraction of information and broken links were the most recurring quality problems. With this study, we not only aim to assess the quality of this sample of DBpedia resources but also adopt an agile methodology to improve the quality in future versions by regularly providing feedback to the DBpedia maintainers.

    @InProceedings{Zaveri,
    Title = {User-driven Quality Evaluation of DBpedia},
    Author = {Amrapali Zaveri and Dimitris Kontokostas and Mohamed A. Sherif and Lorenz B\"uhmann and Mohamed Morsey and S\"oren Auer and Jens Lehmann},
    Pages = {97--104},
    Abstract = {Linked Open Data (LOD) comprises of an unprecedented volume of structured datasets on the Web. However, these datasets are of varying quality ranging from extensively curated datasets to crowdsourced and even extracted data of relatively low quality. We present a methodology for assessing the quality of linked data resources, which comprises of a manual and a semi-automatic process. The first phase includes the detection of common quality problems and their representation in a quality problem taxonomy. In the manual process, the second phase comprises of the evaluation of a large number of individual resources, according to the quality problem taxonomy via crowdsourcing. This process is accompanied by a tool wherein a user assesses an individual resource and evaluates each fact for correctness. The semi-automatic process involves the generation and verification of schema axioms. We report the results obtained by applying this methodology to DBpedia. We identified 17 data quality problem types and 58 users assessed a total of 521 resources. Overall, 11.93\% of the evaluated DBpedia triples were identified to have some quality issues. Applying the semi-automatic component yielded a total of 222,982 triples that have a high probability to be incorrect. In particular, we found that problems such as object values being incorrectly extracted, irrelevant extraction of information and broken links were the most recurring quality problems. With this study, we not only aim to assess the quality of this sample of DBpedia resources but also adopt an agile methodology to improve the quality in future versions by regularly providing feedback to the DBpedia maintainers.},
    Bdsk-url-1 = {http://svn.aksw.org/papers/2013/ISemantics_DBpediaDQ/public.pdf},
    Crossref = {ISEMANTICS2013},
    Date-modified = {2013-07-11 19:42:39 +0000},
    Ee = {http://doi.acm.org/10.1145/2506182.2506195},
    Keywords = {zaveri sherif morsey buehmann kontokostas auer lehmann group_aksw sys:relevantFor:infai sys:relevantFor:bis sys:relevantFor:lod2 lod2page 2013 event_I-Semantics dbpediadq sys:relevantFor:geoknow},
    Owner = {soeren},
    Timestamp = {2013.06.01},
    Url = {http://svn.aksw.org/papers/2013/ISemantics_DBpediaDQ/public.pdf}
    }