We are very pleased to announce that our group got a paper accepted for presentation at IJCNN 2021. The annual International Joint Conference on Neural Networks (IJCNN) is the flagship conference of the IEEE Computational Intelligence Society and the International Neural Network Society. It covers a wide range of topics in the field of neural networks, from biological neural network modeling to artificial neural computation.
Here is the abstract and the link to the paper:
Multiple Run Ensemble Learning with Low-Dimensional Knowledge Graph Embeddings
By Chengjin Xu, Mojtaba Nayyeri, Sahar Vahdati, and Jens Lehmann.
Abstract
Knowledge graphs (KGs) represent world facts in a structured form. Although knowledge graphs are quantitatively huge and consist of millions of triples, their coverage is still only a small fraction of the world's knowledge. Among the top approaches of recent years, link prediction using knowledge graph embedding (KGE) models has gained significant attention for knowledge graph completion. Various embedding models have been proposed so far, and some recent KGE models obtain state-of-the-art performance on link prediction tasks by using embeddings with a high dimension (e.g. 1000), which increases the costs of training and evaluation considering the large scale of KGs. In this paper, we propose a simple but effective performance boosting strategy for KGE models: training the same model multiple times with low-dimensional embeddings. For example, instead of training a model once with a large embedding size of 1200, we train the model 6 times in parallel with an embedding size of 200 and then combine the 6 separate models for testing, so that the overall number of adjustable parameters (6*200=1200) and the total memory footprint remain the same. We show that our approach enables different models to better cope with their expressiveness issues in modeling various graph patterns such as symmetric, 1-n, n-1 and n-n relations. To justify our findings, we conduct experiments on various KGE models. Experimental results on standard benchmark datasets, namely FB15K, FB15K-237 and WN18RR, show that multiple low-dimensional models of the same kind outperform the corresponding single high-dimensional models on link prediction in a certain range, and, with parallel training, also have advantages in training efficiency, while the overall number of adjustable parameters stays the same.
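To make the idea concrete, here is a minimal sketch (not the authors' implementation) of the multiple-run ensemble strategy described in the abstract. It trains-in-principle k independent low-dimensional models of the same kind and averages their triple scores at test time; the choice of DistMult as the base model, the dimensions, and the score-averaging rule are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: multiple low-dimensional runs of the same KGE model,
# combined at test time. DistMult, dim=200, k=6, and mean-score fusion are
# assumptions for the example, not the paper's exact setup.
import torch
import torch.nn as nn


class DistMult(nn.Module):
    def __init__(self, n_entities: int, n_relations: int, dim: int):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # DistMult triple score: sum_i  e_h[i] * e_r[i] * e_t[i]
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(dim=-1)


class MultiRunEnsemble(nn.Module):
    """k independently initialised low-dimensional models of the same kind;
    the test-time score of a triple is the average of the k individual scores."""

    def __init__(self, n_entities, n_relations, dim=200, k=6):
        super().__init__()
        self.runs = nn.ModuleList(
            [DistMult(n_entities, n_relations, dim) for _ in range(k)]
        )

    def score(self, h, r, t):
        return torch.stack([m.score(h, r, t) for m in self.runs]).mean(dim=0)


if __name__ == "__main__":
    # Toy usage: 6 runs of dimension 200 use as many embedding parameters per
    # entity as a single 1200-dimensional model (6 * 200 = 1200).
    ens = MultiRunEnsemble(n_entities=100, n_relations=10, dim=200, k=6)
    h = torch.tensor([0, 1])
    r = torch.tensor([2, 3])
    t = torch.tensor([4, 5])
    print(ens.score(h, r, t))  # one ensemble score per test triple
```

Because the k runs share no parameters, each one can be trained as a separate job in parallel; only the scoring step at evaluation time needs to combine them.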