Is MultiWOZ a Solved Task? An Interactive TOD Evaluation Framework with User Simulator
- URL: http://arxiv.org/abs/2210.14529v1
- Date: Wed, 26 Oct 2022 07:41:32 GMT
- Title: Is MultiWOZ a Solved Task? An Interactive TOD Evaluation Framework with User Simulator
- Authors: Qinyuan Cheng, Linyang Li, Guofeng Quan, Feng Gao, Xiaofeng Mou, Xipeng Qiu
- Abstract summary: We propose an interactive evaluation framework for Task-Oriented Dialogue (TOD) systems.
We first build a goal-oriented user simulator based on pre-trained models and then use the user simulator to interact with the dialogue system to generate dialogues.
Experimental results show that RL-based TOD systems trained by our proposed user simulator can achieve nearly 98% inform and success rates.
- Score: 37.590563896382456
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Task-Oriented Dialogue (TOD) systems are drawing more and more attention in
recent studies. Current methods focus on constructing pre-trained models or
fine-tuning strategies, while the evaluation of TOD is limited by a policy
mismatch problem: during evaluation, the user utterances are taken from the
annotated dataset, whereas they should respond to the system's previous
responses, which can have many valid alternatives besides the annotated texts.
Therefore, in this work, we propose an interactive evaluation framework for TOD.
We first build a goal-oriented user simulator based on pre-trained models and
then use the user simulator to interact with the dialogue system to generate
dialogues. In addition, we introduce a sentence-level score and a session-level
score to measure sentence fluency and session coherence in the interactive
evaluation. Experimental results show that RL-based TOD systems trained with our
proposed user simulator can achieve nearly 98% inform and success rates in the
interactive evaluation on the MultiWOZ dataset, and that the proposed scores
capture response quality beyond the inform and success rates. We hope that our
work will encourage simulator-based interactive evaluation for the TOD task.
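
To make the interaction loop described above concrete, the following is a minimal, hypothetical sketch of how a goal-oriented user simulator could be run against a TOD system to compute a success rate. The class names (UserSimulator, DialogueSystem, Goal), the canned utterances, and the slot-matching heuristic are illustrative assumptions, not the authors' implementation; the paper's simulator is built on pre-trained language models and its evaluation additionally uses sentence-level and session-level scores.

```python
# A minimal sketch of simulator-in-the-loop evaluation. All names and the
# canned utterances below are hypothetical placeholders, not the authors'
# code or the official MultiWOZ tooling.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Goal:
    """A user goal: slot constraints plus slots the user wants returned."""
    constraints: Dict[str, str]   # e.g. {"restaurant-food": "italian"}
    requests: List[str]           # e.g. ["phone"]


@dataclass
class DialogueLog:
    turns: List[Tuple[str, str]] = field(default_factory=list)  # (user, system)


class UserSimulator:
    """Stand-in for the goal-oriented, pre-trained user simulator."""

    def first_utterance(self, goal: Goal) -> str:
        # A real simulator would verbalise the goal with a language model.
        return "I am looking for an Italian restaurant in the centre."

    def respond(self, goal: Goal, log: DialogueLog) -> str:
        # A real simulator conditions on the goal and the full history.
        return "Great, can I get the phone number please?"

    def goal_satisfied(self, goal: Goal, log: DialogueLog) -> bool:
        # Crude check: were all requested slots mentioned by the system?
        system_text = " ".join(resp for _, resp in log.turns).lower()
        return all(req in system_text for req in goal.requests)


class DialogueSystem:
    """Stand-in for the TOD system under evaluation."""

    def respond(self, user_utterance: str, log: DialogueLog) -> str:
        return "Sure, the phone number is 01223 123456."


def interactive_evaluate(sim: UserSimulator,
                         system: DialogueSystem,
                         goals: List[Goal],
                         max_turns: int = 20) -> float:
    """Let the simulator talk to the system and return the success rate."""
    successes = 0
    for goal in goals:
        log = DialogueLog()
        user_utt = sim.first_utterance(goal)
        for _ in range(max_turns):
            sys_resp = system.respond(user_utt, log)
            log.turns.append((user_utt, sys_resp))
            if sim.goal_satisfied(goal, log):
                successes += 1
                break
            user_utt = sim.respond(goal, log)
    return successes / len(goals)


if __name__ == "__main__":
    goals = [Goal(constraints={"restaurant-food": "italian"}, requests=["phone"])]
    rate = interactive_evaluate(UserSimulator(), DialogueSystem(), goals)
    print(f"success rate: {rate:.2%}")
```

In the paper's full setup, each generated response would also be scored for fluency (the sentence-level score) and each completed dialogue for coherence (the session-level score); those scoring models are omitted from this sketch.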
Related papers
- Simulating Task-Oriented Dialogues with State Transition Graphs and Large Language Models [16.94819621353007]
SynTOD is a new synthetic data generation approach for developing end-to-end Task-Oriented Dialogue (TOD) systems.
It generates diverse, structured conversations through random walks and response simulation using large language models.
In our experiments, using graph-guided response simulations leads to significant improvements in intent classification, slot filling and response relevance.
arXiv Detail & Related papers (2024-04-23T06:23:34Z)
- Reliable LLM-based User Simulator for Task-Oriented Dialogue Systems [2.788542465279969]
This paper introduces DAUS, a Domain-Aware User Simulator.
We fine-tune DAUS on real examples of task-oriented dialogues.
Results on two relevant benchmarks showcase significant improvements in terms of user goal fulfillment.
arXiv Detail & Related papers (2024-02-20T20:57:47Z)
- JoTR: A Joint Transformer and Reinforcement Learning Framework for Dialog Policy Learning [53.83063435640911]
Dialogue policy learning (DPL) is a crucial component of dialogue modelling.
We introduce a novel framework, JoTR, to generate flexible dialogue actions.
Unlike traditional methods, JoTR formulates a word-level policy that allows for a more dynamic and adaptable dialogue action generation.
arXiv Detail & Related papers (2023-09-01T03:19:53Z)
- Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs).
In this paper, we embark on an investigation into the utilization of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose an interactive Evaluation approach based on LLMs named iEvaLM that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z)
- Approximating Online Human Evaluation of Social Chatbots with Prompting [11.657633779338724]
Existing evaluation metrics aim to automate offline user evaluation and approximate human judgment of pre-curated dialogs.
We propose an approach to approximate online human evaluation leveraging large language models (LLMs) from the GPT family.
We introduce a new Dialog system Evaluation framework based on Prompting (DEP), which enables a fully automatic evaluation pipeline.
arXiv Detail & Related papers (2023-04-11T14:45:01Z)
- Interactive Evaluation of Dialog Track at DSTC9 [8.2208199207543]
The Interactive Evaluation of Dialog Track was introduced at the 9th Dialog System Technology Challenge.
This paper provides an overview of the track, including the methodology and results.
arXiv Detail & Related papers (2022-07-28T22:54:04Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- Metaphorical User Simulators for Evaluating Task-oriented Dialogue Systems [80.77917437785773]
Task-oriented dialogue systems (TDSs) are assessed mainly in an offline setting or through human evaluation.
We propose a metaphorical user simulator for end-to-end TDS evaluation, where we define a simulator to be metaphorical if it simulates a user's analogical thinking in interactions with systems.
We also propose a tester-based evaluation framework to generate variants, i.e., dialogue systems with different capabilities.
arXiv Detail & Related papers (2022-04-02T05:11:03Z)
- Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA only requires a handful of pre-collected experience data, and therefore does not involve human interaction with the target policy during the evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
arXiv Detail & Related papers (2021-02-20T03:29:20Z)