Bipartite-play Dialogue Collection for Practical Automatic Evaluation of
Dialogue Systems
- URL: http://arxiv.org/abs/2211.10596v1
- Date: Sat, 19 Nov 2022 06:12:50 GMT
- Title: Bipartite-play Dialogue Collection for Practical Automatic Evaluation of
Dialogue Systems
- Authors: Shiki Sato, Yosuke Kishinami, Hiroaki Sugiyama, Reina Akama, Ryoko
Tokuhisa, Jun Suzuki
- Abstract summary: This paper introduces the bipartite-play method, a dialogue collection method for automating dialogue system evaluation.
It addresses the limitations of existing dialogue collection methods.
Experimental results show that the automatic evaluation using the bipartite-play method mitigates these two drawbacks.
- Score: 17.532851422548354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automation of dialogue system evaluation is a driving force for the efficient
development of dialogue systems. This paper introduces the bipartite-play
method, a dialogue collection method for automating dialogue system evaluation.
It addresses the limitations of existing dialogue collection methods: (i)
inability to compare with systems that are not publicly available, and (ii)
vulnerability to cheating by intentionally selecting systems to be compared.
Experimental results show that the automatic evaluation using the
bipartite-play method mitigates these two drawbacks and correlates as strongly
with human subjectivity as existing methods.
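As a rough illustration of how dialogues might be collected under the bipartite-play idea described above, the sketch below pairs each system under evaluation with every member of a fixed pool of partner systems and records the resulting conversations. The `DialogueSystem` interface, the turn-taking scheme, and the opener are assumptions made for this sketch, not the paper's exact protocol; the collected dialogues would then be rated by humans or an automatic metric.

```python
# Hypothetical sketch of bipartite-play dialogue collection (assumed protocol,
# reconstructed from the abstract; not the authors' reference implementation).
from typing import Dict, List, Protocol


class DialogueSystem(Protocol):
    name: str

    def respond(self, history: List[str]) -> str:
        """Return the next utterance given the dialogue history."""
        ...


def bipartite_play(
    evaluated: List[DialogueSystem],
    partners: List[DialogueSystem],
    n_turns: int = 6,
    opener: str = "Hello! How is your day going?",
) -> Dict[str, List[List[str]]]:
    """Pair every evaluated system with every partner in a fixed, shared pool.

    Because the partner pool is shared and fixed, dialogues for a system that
    is not publicly available can be collected once and reused, and a submitter
    cannot cherry-pick weak opponents (assumed rationale, based on the abstract).
    """
    dialogues: Dict[str, List[List[str]]] = {s.name: [] for s in evaluated}
    for system in evaluated:
        for partner in partners:
            history = [opener]  # the partner opens the conversation
            for turn in range(n_turns):
                speaker = system if turn % 2 == 0 else partner
                history.append(speaker.respond(history))
            dialogues[system.name].append(history)
    return dialogues
```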
Related papers
- Don't Forget Your ABC's: Evaluating the State-of-the-Art in
Chat-Oriented Dialogue Systems [12.914512702731528]
This paper presents a novel human evaluation method to estimate the rates of many dialogue system behaviors.
Our method is used to evaluate four state-of-the-art open-domain dialogue systems and is compared with existing approaches.
arXiv Detail & Related papers (2022-12-18T22:07:55Z) - User Response and Sentiment Prediction for Automatic Dialogue Evaluation [69.11124655437902]
We propose to use the sentiment of the next user utterance for turn or dialog level evaluation.
Experiments show our model outperforming existing automatic evaluation metrics on both written and spoken open-domain dialogue datasets.
arXiv Detail & Related papers (2021-11-16T22:19:17Z) - DynaEval: Unifying Turn and Dialogue Level Evaluation [60.66883575106898]
We propose DynaEval, a unified automatic evaluation framework.
It is capable not only of performing turn-level evaluation but also of holistically considering the quality of the entire dialogue.
Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model.
arXiv Detail & Related papers (2021-06-02T12:23:18Z) - Assessing Dialogue Systems with Distribution Distances [48.61159795472962]
We propose to measure the performance of a dialogue system by computing the distribution-wise distance between its generated conversations and real-world conversations.
Experiments on several dialogue corpora show that our proposed metrics correlate better with human judgments than existing metrics.
arXiv Detail & Related papers (2021-05-06T10:30:13Z) - Towards Unified Dialogue System Evaluation: A Comprehensive Analysis of
Current Evaluation Protocols [17.14709845342071]
A variety of evaluation protocols are currently used to assess chat-oriented dialogue management systems.
This paper presents a comprehensive synthesis of both automated and human evaluation methods on dialogue systems.
arXiv Detail & Related papers (2020-06-10T23:29:05Z) - Rethinking Dialogue State Tracking with Reasoning [76.0991910623001]
This paper proposes to track dialogue states gradually by reasoning over dialogue turns with the help of back-end data.
Empirical results demonstrate that our method significantly outperforms the state-of-the-art methods by 38.6% in terms of joint belief accuracy for MultiWOZ 2.1.
arXiv Detail & Related papers (2020-05-27T02:05:33Z) - Is Your Goal-Oriented Dialog Model Performing Really Well? Empirical
Analysis of System-wise Evaluation [114.48767388174218]
This paper presents an empirical analysis of different types of dialog systems composed of different modules in different settings.
Our results show that a pipeline dialog system trained using fine-grained supervision signals at different component levels often obtains better performance than the systems that use joint or end-to-end models trained on coarse-grained labels.
arXiv Detail & Related papers (2020-05-15T05:20:06Z) - Learning an Unreferenced Metric for Online Dialogue Evaluation [53.38078951628143]
We propose an unreferenced automated evaluation metric that uses large pre-trained language models to extract latent representations of utterances.
We show that our model achieves higher correlation with human annotations in an online setting, while not requiring true responses for comparison during inference.
arXiv Detail & Related papers (2020-05-01T20:01:39Z)
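As a loose illustration of the unreferenced-metric idea in the last entry above, the sketch below scores a response against its dialogue context using an off-the-shelf sentence encoder and cosine similarity, with no gold reference response required. The model choice and the similarity-based scoring are assumptions for this sketch and differ from the paper's actual learned metric.

```python
# Hypothetical unreferenced scoring sketch: cosine similarity between latent
# representations of context and response from a pre-trained encoder.
# Assumes the `sentence-transformers` package; this is NOT the paper's metric.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf model


def unreferenced_score(context: str, response: str) -> float:
    """Score a response without a reference, via context-response similarity."""
    ctx_vec, resp_vec = encoder.encode([context, response])
    return float(
        np.dot(ctx_vec, resp_vec)
        / (np.linalg.norm(ctx_vec) * np.linalg.norm(resp_vec))
    )


print(unreferenced_score("How was your weekend?", "Great, I went hiking with friends."))
```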