Paper Accepted at LREC 2020

We are very pleased to announce that our group had a paper accepted for presentation at LREC 2020 (International Conference on Language Resources and Evaluation). LREC has become the major event on Language Resources (LRs) and Evaluation for Language Technologies (LT). The aim of LREC is to provide an overview of the state of the art, explore new R&D directions and emerging trends, and exchange information on LRs and their applications, evaluation methodologies and tools, ongoing and planned activities, industrial uses and needs, and requirements coming from the e-society, with respect to both policy issues and technological and organisational ones.

Here is the pre-print of the accepted paper with its abstract:

Treating Dialogue Quality Evaluation as an Anomaly Detection Problem
By Rostislav Nedelchev, Jens Lehmann and Ricardo Usbeck.
Abstract: Dialogue systems for interaction with humans have been enjoying increased popularity in research and industry. To this day, the best way to estimate their success is through human evaluation rather than automated approaches, despite the abundance of work done in the field. In this paper, we investigate the effectiveness of framing dialogue evaluation as an anomaly detection task. The paper examines four dialogue modeling approaches and how their objective functions correlate with human annotation scores. From a high-level perspective, the results are negative. However, a more in-depth look shows limited potential for using anomaly detection to evaluate dialogues.
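
To give a rough intuition for the setup described in the abstract, the short Python sketch below shows the general idea of correlating a model-derived, anomaly-style score per response with human quality ratings using Spearman's rank correlation. The scores and ratings are made-up placeholder numbers, and the scoring assumption (e.g. a model's negative log-likelihood) is our illustration only; this is not the paper's code or data.

```python
from scipy.stats import spearmanr

# Hypothetical per-response scores from a trained dialogue model
# (e.g. negative log-likelihood of the response given its context);
# higher = more "anomalous". These numbers are made up for illustration.
model_scores = [2.1, 7.8, 3.0, 9.4, 2.6]

# Corresponding (also made-up) human quality ratings on a 1-5 scale.
human_scores = [5, 1, 4, 2, 5]

# If anomaly-style scores are informative for dialogue evaluation,
# they should correlate (negatively) with human judgements.
rho, p_value = spearmanr(model_scores, human_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

In this toy example a strong negative correlation would suggest the model's objective function tracks human judgements; the paper investigates how well this holds for four different dialogue modeling approaches.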