MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation
- URL: http://arxiv.org/abs/2112.07194v1
- Date: Tue, 14 Dec 2021 07:01:20 GMT
- Title: MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation
- Authors: Chen Zhang, Luis Fernando D'Haro, Thomas Friedrichs and Haizhou Li
- Abstract summary: Chatbots converse across domains, so a dialogue evaluator is expected to conduct assessments across domains as well.
Most of the state-of-the-art automatic dialogue evaluation metrics (ADMs) are not designed for multi-domain evaluation.
We are motivated to design a general and robust framework, MDD-Eval, to address the problem.
- Score: 66.60285024216573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Chatbots are designed to carry out human-like conversations across different
domains, such as general chit-chat, knowledge exchange, and persona-grounded
conversations. To measure the quality of such conversational agents, a dialogue
evaluator is expected to conduct assessments across domains as well. However,
most of the state-of-the-art automatic dialogue evaluation metrics (ADMs) are
not designed for multi-domain evaluation. We are motivated to design a general
and robust framework, MDD-Eval, to address the problem. Specifically, we first
train a teacher evaluator with human-annotated data to acquire a rating skill
to tell good dialogue responses from bad ones in a particular domain, and then
adopt a self-training strategy to train a new evaluator with teacher-annotated
multi-domain data, which helps the new evaluator generalize across multiple
domains. MDD-Eval is extensively assessed on six dialogue evaluation
benchmarks. Empirical results show that the MDD-Eval framework achieves
strong performance with an absolute improvement of 7% over the state-of-the-art
ADMs in terms of mean Spearman correlation scores across all the evaluation
benchmarks.
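
As a rough illustration of the two-stage recipe described above (a teacher evaluator trained on human-annotated in-domain data, then a student self-trained on teacher-annotated multi-domain data), the sketch below shows one way this could be wired up. It is a minimal sketch under assumptions: the RoBERTa backbone, the `encode` and `pseudo_label` helpers, and the confidence threshold are illustrative choices, not taken from the paper's released code.

```python
# Minimal sketch of the teacher-student self-training recipe summarized in the
# abstract above. Model choices, helper names, and the confidence threshold are
# illustrative assumptions, not MDD-Eval's actual implementation.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

BACKBONE = "roberta-base"  # assumed encoder backbone
tokenizer = AutoTokenizer.from_pretrained(BACKBONE)

def encode(contexts, responses):
    """Pair each dialogue context with a candidate response."""
    return tokenizer(contexts, responses, truncation=True,
                     padding=True, return_tensors="pt")

# Stage 1: teacher evaluator, fine-tuned on human-annotated in-domain data
# where label 1 = good response, 0 = bad response (training loop omitted).
teacher = AutoModelForSequenceClassification.from_pretrained(BACKBONE, num_labels=2)

# Stage 2: the teacher pseudo-labels a large pool of augmented multi-domain
# (context, response) pairs; only confident judgements are kept.
@torch.no_grad()
def pseudo_label(contexts, responses, threshold=0.9):  # threshold is assumed
    probs = teacher(**encode(contexts, responses)).logits.softmax(dim=-1)
    confidence, labels = probs.max(dim=-1)
    keep = confidence >= threshold
    return labels[keep], keep

# A fresh student evaluator is then fine-tuned on the teacher-labelled
# multi-domain data and serves as the final multi-domain metric.
student = AutoModelForSequenceClassification.from_pretrained(BACKBONE, num_labels=2)
```

At evaluation time, the student's score for a response (for instance, its probability of the "good" class, an assumed convention here) would be correlated with human ratings, e.g. via scipy.stats.spearmanr; the Spearman correlation against human judgements is the measure behind the 7% improvement reported above.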
Related papers
- ComperDial: Commonsense Persona-grounded Dialogue Dataset and Benchmark [26.100299485985197]
ComperDial consists of human-scored responses for 10,395 dialogue turns in 1,485 conversations collected from 99 dialogue agents.
In addition to single-turn response scores, ComperDial also contains dialogue-level human-annotated scores.
Building off ComperDial, we devise a new automatic evaluation metric to measure the general similarity of model-generated dialogues to human conversations.
arXiv Detail & Related papers (2024-06-17T05:51:04Z)
- DiQAD: A Benchmark Dataset for End-to-End Open-domain Dialogue Assessment [38.26039323208791]
We release a large-scale dialogue quality assessment dataset (DiQAD) for automatically assessing open-domain dialogue quality.
Specifically, we establish assessment criteria based on dimensions that conform to human judgements of dialogue quality.
We also annotate, based on these criteria, a large-scale set of dialogues between real users, comprising around 100,000 dialogues.
arXiv Detail & Related papers (2023-10-25T03:04:57Z)
- PoE: a Panel of Experts for Generalized Automatic Dialogue Assessment [58.46761798403072]
A model-based automatic dialogue evaluation metric (ADEM) is expected to perform well across multiple domains.
Despite significant progress, an ADEM that works well in one domain does not necessarily generalize to another.
We propose a Panel of Experts (PoE) network that consists of a shared transformer encoder and a collection of lightweight adapters.
arXiv Detail & Related papers (2022-12-18T02:26:50Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- MME-CRS: Multi-Metric Evaluation Based on Correlation Re-Scaling for Evaluating Open-Domain Dialogue [15.31433922183745]
We propose a Multi-Metric Evaluation based on Correlation Re-Scaling (MME-CRS) for evaluating open-domain dialogue.
MME-CRS ranks first by a large margin on the final test data of the DSTC10 Track 5 Subtask 1 Automatic Open-domain Dialogue Evaluation Challenge.
arXiv Detail & Related papers (2022-06-19T13:43:59Z)
- DynaEval: Unifying Turn and Dialogue Level Evaluation [60.66883575106898]
We propose DynaEval, a unified automatic evaluation framework.
It is not only capable of performing turn-level evaluation, but also holistically considers the quality of the entire dialogue.
Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model.
arXiv Detail & Related papers (2021-06-02T12:23:18Z)
- Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
arXiv Detail & Related papers (2021-02-20T03:29:20Z)
- Learning an Unreferenced Metric for Online Dialogue Evaluation [53.38078951628143]
We propose an unreferenced automated evaluation metric that uses large pre-trained language models to extract latent representations of utterances.
We show that our model achieves higher correlation with human annotations in an online setting, while not requiring true responses for comparison during inference.
arXiv Detail & Related papers (2020-05-01T20:01:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.