5* Knowledge Graph Embeddings with Projective Transformations Accepted at AAAI-21

We are thrilled to announce that our paper has been accepted for presentation at the AAAI Conference on Artificial Intelligence (AAAI-21). The purpose of the AAAI conference is to promote research in artificial intelligence (AI) and scientific exchange among AI researchers, practitioners, scientists, and engineers in affiliated disciplines.

Here is the pre-print of the accepted paper with its abstract:

  • 5* Knowledge Graph Embeddings with Projective Transformations
    By Mojtaba Nayyeri, Sahar Vahdati, Can Aykul, and Jens Lehmann
    Abstract: Performing link prediction using knowledge graph embedding (KGE) models is a popular approach for knowledge graph completion. Such link predictions are performed by measuring the likelihood of links in the graph via a transformation function that maps nodes via edges into a vector space. Since the complex structure of the real world is reflected in multi-relational knowledge graphs, the transformation functions need to be able to represent this complexity. However, most of the existing transformation functions in embedding models have been designed in Euclidean geometry and only cover one or two simple transformations. Therefore, they are prone to underfitting and limited in their ability to embed complex graph structures. The area of projective geometry, however, fully covers inversion, reflection, translation, rotation, and homothety transformations. We propose a novel KGE model, which supports those transformations and subsumes other state-of-the-art models. The model has several favorable theoretical properties and outperforms existing approaches on widely used link prediction benchmarks.

5*E covers five transformation types: translation, rotation, inversion, reflection, and homothety. It also covers five transformation functions: hyperbolic, parabolic, loxodromic, elliptic, and circular, as shown in the following figure, visualised on the Riemann sphere.
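
To make this concrete, below is a minimal sketch (ours, not the paper's code) of a projective (Möbius) transformation f(z) = (az + b)/(cz + d) with ad − bc ≠ 0 on the complex plane. The parameter choices shown reproduce translation, rotation, homothety, and inversion as special cases; reflection additionally requires complex conjugation, as noted in the comments.

```python
# Minimal sketch of a projective (Moebius) transformation on the
# complex plane; an illustration, not the paper's implementation.

def mobius(z: complex, a: complex, b: complex, c: complex, d: complex) -> complex:
    """f(z) = (a*z + b) / (c*z + d), well-defined when a*d - b*c != 0."""
    assert a * d - b * c != 0, "degenerate transformation"
    return (a * z + b) / (c * z + d)

z = 1 + 2j
print(mobius(z, 1, 3 - 1j, 0, 1))  # translation:  f(z) = z + (3 - i)
print(mobius(z, 1j, 0, 0, 1))      # rotation:     f(z) = i * z (90 degrees)
print(mobius(z, 2, 0, 0, 1))       # homothety:    f(z) = 2 * z (scaling)
print(mobius(z, 0, 1, 1, 0))       # inversion:    f(z) = 1 / z
# Reflection (z -> conj(z)) is anti-holomorphic, so it needs an extra
# complex conjugation on top of a Moebius map.
```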

5*E applies the following steps to measure the plausibility of a triple (h, r, t); a minimal code sketch follows the list:

  1. Mapping the head node embedding (h) from the complex plane onto the Riemann sphere using stereographic projection
  2. Moving the sphere using a relation-specific transformation (r)
  3. Mapping the transformed head back from the Riemann sphere to the complex plane, where it should meet the tail embedding (t)
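
The sketch below illustrates these three steps under our own simplifying assumptions: `to_sphere`/`from_sphere` implement the stereographic projection and its inverse, while the relation-specific move of the sphere is stood in for by a plain 3x3 rotation matrix `R` (the paper learns richer projective transformations). The function names and the scoring by negative distance are ours, for illustration only.

```python
import numpy as np

def to_sphere(z: complex) -> np.ndarray:
    # Step 1: stereographic projection, complex plane -> unit Riemann sphere.
    x, y = z.real, z.imag
    s = x * x + y * y
    return np.array([2 * x, 2 * y, s - 1]) / (s + 1)

def from_sphere(p: np.ndarray) -> complex:
    # Step 3: inverse projection, sphere -> plane (undefined at the
    # north pole p = (0, 0, 1), which corresponds to infinity).
    return complex(p[0], p[1]) / (1.0 - p[2])

def score(h: complex, R: np.ndarray, t: complex) -> float:
    # Step 2: "move the sphere" with a relation-specific transformation,
    # here a rotation matrix R as an illustrative stand-in.
    h_moved = from_sphere(R @ to_sphere(h))
    # Plausibility: negative distance between the transformed head
    # and the tail embedding (higher is more plausible).
    return -abs(h_moved - t)

# Toy usage: the identity relation leaves the head where it is,
# so a triple whose tail equals its head scores ~0 (highly plausible).
h, t = 0.3 + 0.4j, 0.3 + 0.4j
print(score(h, np.eye(3), t))
```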

This way, our model is able to capture complex structures in the subgraphs of a knowledge graph, for example subgraphs where a path of nodes is connected to a loop through multiple relations:

Starting from a plain grid, we can visualise how different transformations evolve in capturing different relational patterns, in this case the inverse relations hasPart and partOf:

5*E is able to preserve various graph structures (paths, loops) and relational patterns in the knowledge graph embedding space: