Paper on Knowledge Graph Integration into Transformer Architectures Accepted at ACL 2021

We are happy to announce that our paper has been accepted for presentation at ACL 2021 (Association for Computational Linguistics), a premier Natural Language Processing conference. In the paper, we investigate the efficient integration of knowledge graphs into Transformer-based decoder architectures. The approach makes it possible to integrate knowledge graphs into large-scale language models such as GPT-2 or GPT-3, leading to more comprehensive and interesting dialogues with such models.

Here is the pre-print of the accepted paper with its abstract:

Space Efficient Context Encoding for Non-Task-Oriented Dialogue Generation with Graph Attention Transformer
By Fabian Galetzka, Jewgeni Rose, David Schlangen, and Jens Lehmann.
Abstract: To improve the coherence and knowledge retrieval capabilities of non-task-oriented dialogue systems, recent Transformer-based models aim to integrate fixed background context. This often comes in the form of knowledge graphs, and the integration is done by creating pseudo utterances through paraphrasing knowledge triples, added into the accumulated dialogue context. However, the context length is fixed in these architectures, which restricts how much background or dialogue context can be kept. In this work, we propose a more concise encoding for background context structured in the form of knowledge graphs, by expressing the graph connections through restrictions on the attention weights. The results of our human evaluation show that this encoding reduces space requirements without negative effects on the precision of knowledge reproduction and perceived consistency. Further, models trained with our proposed context encoding generate dialogues that are judged to be more comprehensive and interesting.
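To give an intuition for what "expressing the graph connections through restrictions on the attention weights" can look like, here is a minimal, hypothetical sketch in Python. It is not the architecture from the paper: it only builds a binary attention mask over a flattened sequence of knowledge-triple tokens, so that tokens may attend only to tokens from the same triple, instead of paraphrasing the triples into pseudo utterances. The example triples are invented for illustration.

```python
# Illustrative sketch only, not the paper's exact formulation:
# encode knowledge triples as a flat token sequence and derive an
# attention mask from the graph structure.

import numpy as np

# Hypothetical knowledge triples (subject, relation, object)
triples = [
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Inception", "release_year", "2010"),
]

# Flatten the triples into one token sequence, remembering which
# triple each token belongs to.
tokens, triple_id = [], []
for i, triple in enumerate(triples):
    for element in triple:
        tokens.append(element)
        triple_id.append(i)

n = len(tokens)

# Attention mask: token j may attend to token k only if both belong
# to the same triple (i.e. they are connected in the graph).
# 1 = attention allowed, 0 = attention blocked.
mask = np.zeros((n, n), dtype=np.int8)
for j in range(n):
    for k in range(n):
        if triple_id[j] == triple_id[k]:
            mask[j, k] = 1

print(tokens)
print(mask)
```

In a full model, a graph-derived mask of this kind would be combined with the regular dialogue context, which can attend freely; the point of the sketch is only that graph structure can be injected via the attention mechanism rather than as additional paraphrased text, which is what saves context space.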