The International Joint Conference on Neural Networks (IJCNN) covers a wide range of topics in the field of neural networks, from biological neural networks to artificial neural computation. IJCNN 2020 will be held as part of the IEEE World Congress on Computational Intelligence (IEEE WCCI), the world’s largest technical event in the field of computational intelligence.
Here is the pre-print of the accepted paper with its abstract:
Let the Margin SlidE± for Knowledge Graph Embeddings via a Correntropy Objective
By 
Mojtaba Nayyeri, 
Xiaotian Zhou, 
Sahar Vahdati, 
Reza Izanloo, 
Hamed Shariat Yazdi and 
Jens Lehmann.
Abstract
Embedding models based on translation and rotation have gained significant attention in link prediction tasks for knowledge graphs. Most of the earlier works have modified the score function of Knowledge Graph Embedding models in order to improve the performance of link prediction tasks. However, as proven theoretically and experimentally, the performance of such embedding models strongly depends on the loss function. One of the prominent approaches in defining loss functions is to set a margin between positive and negative samples during the learning process. This task is particularly important because it directly affects the learning and ranking of triples and ultimately defines the final output. Approaches for setting a margin face the following challenges: a) the length of the margin has to be fixed manually, and b) without a fixed point for the center of the margin, the scores of positive triples are not necessarily enforced to be sufficiently small to fulfill the translation/rotation from head to tail by using the relation vector. In this paper, we propose a family of loss functions dubbed SlidE± to address the aforementioned challenges. The formulation of the proposed loss functions enables an automated technique to adjust the length of the margin adaptive to a defined center. In our experiments on a set of standard benchmark datasets including Freebase and WordNet, the effectiveness of our approach is confirmed for training Knowledge Graph Embedding models, specifically TransE and RotatE as a case study, on link prediction tasks.
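To make the margin challenges in the abstract concrete, here is a minimal sketch of the standard fixed-margin ranking loss applied to TransE's translational score, the kind of objective the paper sets out to improve. This is an illustrative baseline only, not the paper's SlidE± formulation; the function names are hypothetical.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE score: distance between the translated head (h + r)
    and the tail t. Lower means the triple is more plausible."""
    return float(np.linalg.norm(h + r - t))

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    """Fixed-margin ranking loss: penalizes a positive triple unless it
    scores at least `margin` lower than the negative triple.
    Challenge (a) from the abstract: `margin` must be chosen manually.
    Challenge (b): only the *gap* is constrained, so nothing forces
    pos_score itself to be small enough to realize the translation."""
    return max(0.0, pos_score - neg_score + margin)

# Toy 2-dimensional embeddings for a single triple (h, r, t).
h = np.array([0.0, 0.0])
r = np.array([1.0, 0.0])
t = np.array([1.0, 0.0])      # perfect translation: h + r == t
t_neg = np.array([3.0, 0.0])  # corrupted (negative) tail

pos = transe_score(h, r, t)      # 0.0
neg = transe_score(h, r, t_neg)  # 2.0
loss = margin_ranking_loss(pos, neg, margin=1.0)  # 0.0: already separated
```

Note how the loss is zero whenever the gap exceeds the margin, even if the positive score itself were large; a margin anchored to a fixed center, as the abstract proposes, removes that degree of freedom.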