Evaluating Dialogue Generation Systems via Response Selection
- URL: http://arxiv.org/abs/2004.14302v1
- Date: Wed, 29 Apr 2020 16:21:50 GMT
- Title: Evaluating Dialogue Generation Systems via Response Selection
- Authors: Shiki Sato, Reina Akama, Hiroki Ouchi, Jun Suzuki, Kentaro Inui
- Abstract summary: We propose a method to construct response selection test sets with well-chosen false candidates.
We demonstrate that evaluating systems via response selection with the test sets developed by our method correlates more strongly with human evaluation.
- Score: 42.56640173047927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing automatic evaluation metrics for open-domain dialogue response
generation systems correlate poorly with human evaluation. We focus on
evaluating response generation systems via response selection. To evaluate
systems properly via response selection, we propose a method to construct
response selection test sets with well-chosen false candidates. Specifically,
we construct test sets by filtering out two types of false candidates:
(i) those unrelated to the ground-truth response and (ii) those acceptable as
appropriate responses. Through experiments, we demonstrate that evaluating
systems via response selection with the test sets developed by our method
correlates more strongly with human evaluation, compared with widely used
automatic evaluation metrics such as BLEU.
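The construction idea can be pictured with a small sketch: keep only false candidates that are related to the ground-truth response but are not themselves acceptable replies. The token-overlap heuristic, the 0.2 threshold, and the precomputed set of human-judged acceptable candidates below are illustrative assumptions, not the paper's exact retrieval and annotation procedure.

# Minimal sketch (assumptions noted above), in Python.

def overlap(a: str, b: str) -> float:
    """Crude relatedness score: fraction of shared lowercase tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, min(len(ta), len(tb)))

def build_false_candidates(ground_truth, pool, judged_acceptable, k=3, min_overlap=0.2):
    """Pick up to k false candidates for one response selection test instance."""
    selected = []
    for cand in pool:
        if overlap(ground_truth, cand) < min_overlap:
            continue  # (i) drop candidates unrelated to the ground-truth response
        if cand in judged_acceptable:
            continue  # (ii) drop candidates judged acceptable as appropriate responses
        selected.append(cand)
        if len(selected) == k:
            break
    return selected

# Example: one test instance is the ground truth plus the surviving false candidates.
pool = ["I love hiking on weekends.", "Sure, see you at the station at nine.",
        "The station opens at nine, I think.", "Bananas are yellow."]
false_cands = build_false_candidates(
    ground_truth="See you at the station at nine, then.",
    pool=pool,
    judged_acceptable={"Sure, see you at the station at nine."},
)
print(false_cands)  # only the related-but-inappropriate candidate remains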
Related papers
- PairEval: Open-domain Dialogue Evaluation with Pairwise Comparison [38.03304773600225]
PairEval is a novel dialogue evaluation metric for assessing responses by comparing their quality against responses in different conversations.
We show that PairEval exhibits a higher correlation with human judgments than baseline metrics.
We also find that the proposed comparative metric is more robust in detecting common failures from open-domain dialogue systems.
arXiv Detail & Related papers (2024-04-01T09:35:06Z)
- SQUARE: Automatic Question Answering Evaluation using Multiple Positive and Negative References [73.67707138779245]
We propose a new evaluation metric: SQuArE (Sentence-level QUestion AnsweRing Evaluation).
We evaluate SQuArE on both sentence-level extractive (Answer Selection) and generative (GenQA) QA systems.
arXiv Detail & Related papers (2023-09-21T16:51:30Z)
- PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems [59.1250765143521]
Current knowledge-grounded dialogue systems often fail to align the generated responses with human-preferred qualities.
We propose Polished & Informed Candidate Scoring (PICK), a generation re-scoring framework.
We demonstrate the effectiveness of PICK in generating responses that are more faithful while keeping them relevant to the dialogue history.
arXiv Detail & Related papers (2023-09-19T08:27:09Z)
- Reranking Overgenerated Responses for End-to-End Task-Oriented Dialogue Systems [71.33737787564966]
End-to-end (E2E) task-oriented dialogue (ToD) systems are prone to falling into the so-called 'likelihood trap'.
We propose a reranking method which aims to select high-quality items from the lists of responses initially overgenerated by the system.
Our methods improve a state-of-the-art E2E ToD system by 2.4 BLEU, 3.2 ROUGE, and 2.8 METEOR scores, achieving new peak results.
arXiv Detail & Related papers (2022-11-07T15:59:49Z)
- Pneg: Prompt-based Negative Response Generation for Dialogue Response Selection Task [27.513992470527427]
In retrieval-based dialogue systems, a response selection model acts as a ranker to select the most appropriate response among several candidates.
Recent studies have shown that leveraging adversarial responses as negative training samples is useful for improving the discriminating power of the selection model.
This paper proposes a simple but efficient method for generating adversarial negative responses leveraging a large-scale language model.
arXiv Detail & Related papers (2022-10-31T11:49:49Z)
- A Systematic Evaluation of Response Selection for Open Domain Dialogue [36.88551817451512]
We curated a dataset where responses from multiple response generators produced for the same dialog context are manually annotated as appropriate (positive) or inappropriate (negative).
We conduct a systematic evaluation of state-of-the-art methods for response selection and demonstrate that using multiple positive candidates and using manually verified hard negative candidates both yield significant performance improvements over adversarial training data, e.g., increases of 3% and 13% in Recall@1, respectively.
arXiv Detail & Related papers (2022-08-08T19:33:30Z)
- Generate, Evaluate, and Select: A Dialogue System with a Response Evaluator for Diversity-Aware Response Generation [9.247397520986999]
We aim to overcome the lack of diversity in responses of current dialogue systems.
We propose a generator-evaluator model that evaluates multiple responses generated by a response generator.
We conduct human evaluations to compare the output of the proposed system with that of a baseline system.
arXiv Detail & Related papers (2022-06-10T08:22:22Z)
- What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation [73.03318027164605]
We propose to use information that can be automatically extracted from the next user utterance as a proxy to measure the quality of the previous system response.
Our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users.
arXiv Detail & Related papers (2022-03-25T22:09:52Z)
- User Response and Sentiment Prediction for Automatic Dialogue Evaluation [69.11124655437902]
We propose to use the sentiment of the next user utterance for turn or dialog level evaluation.
Experiments show that our model outperforms existing automatic evaluation metrics on both written and spoken open-domain dialogue datasets.
arXiv Detail & Related papers (2021-11-16T22:19:17Z)
- Designing Precise and Robust Dialogue Response Evaluators [35.137244385158034]
We propose to build a reference-free evaluator and exploit the power of semi-supervised training and pretrained language models.
Experimental results demonstrate that the proposed evaluator achieves a strong correlation (> 0.6) with human judgement.
arXiv Detail & Related papers (2020-04-10T04:59:37Z)