Approximating Online Human Evaluation of Social Chatbots with Prompting
- URL: http://arxiv.org/abs/2304.05253v2
- Date: Fri, 25 Aug 2023 12:58:41 GMT
- Title: Approximating Online Human Evaluation of Social Chatbots with Prompting
- Authors: Ekaterina Svikhnushina and Pearl Pu
- Abstract summary: Existing evaluation metrics aim to automate offline user evaluation and approximate human judgment of pre-curated dialogs.
We propose an approach to approximate online human evaluation leveraging large language models (LLMs) from the GPT family.
We introduce a new Dialog system Evaluation framework based on Prompting (DEP), which enables a fully automatic evaluation pipeline.
- Score: 11.657633779338724
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As conversational models become increasingly available to the general public,
users are engaging with this technology in social interactions. Such
unprecedented interaction experiences may pose considerable social and
psychological risks to the users unless the technology is properly controlled.
This highlights the need for scalable and robust evaluation metrics for
conversational chatbots. Existing evaluation metrics aim to automate offline
user evaluation and approximate human judgment of pre-curated dialogs. However,
they are limited in their ability to capture subjective perceptions of users
who actually interact with the bots and might not generalize to real-world
settings. To address this limitation, we propose an approach to approximate
online human evaluation leveraging large language models (LLMs) from the GPT
family. We introduce a new Dialog system Evaluation framework based on
Prompting (DEP), which enables a fully automatic evaluation pipeline that
replicates live user studies and achieves an impressive correlation with human
judgment (up to Pearson r=0.95 on a system level). The DEP approach involves
collecting synthetic chat logs of evaluated bots with an LLM in the other-play
setting, where the LLM is carefully conditioned to follow a specific scenario.
We further explore different prompting approaches to produce evaluation scores
with the same LLM. The best performing prompts, which contain few-shot
demonstrations and instructions, show outstanding performance on the tested
dataset and demonstrate the ability to generalize to other dialog corpora.
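To make the two DEP stages concrete, here is a minimal sketch in Python, assuming a generic text-in/text-out LLM callable. The prompt wording, the 1-5 score scale, and the names collect_chat_log and score_dialog are illustrative assumptions, not the authors' exact prompts or code.

```python
# Minimal sketch of the two DEP stages; the LLM is an abstract callable
# (prompt in, completion out), and all prompt text here is illustrative.
from typing import Callable, List

def collect_chat_log(bot_reply: Callable[[List[str]], str],
                     llm: Callable[[str], str],
                     scenario: str,
                     n_turns: int = 5) -> List[str]:
    """Stage 1: other-play. The LLM plays a user conditioned on `scenario`
    and chats with the evaluated bot to produce a synthetic dialog."""
    history: List[str] = []
    for _ in range(n_turns):
        user_prompt = (
            f"You are a user in the following scenario: {scenario}\n"
            "Dialog so far:\n" + "\n".join(history) + "\nUser:"
        )
        history.append("User: " + llm(user_prompt).strip())
        history.append("Bot: " + bot_reply(history).strip())
    return history

def score_dialog(llm: Callable[[str], str],
                 dialog: List[str],
                 few_shot_demos: str,
                 instruction: str) -> float:
    """Stage 2: the same LLM rates the finished dialog; per the paper, the
    best prompts combine few-shot demonstrations with instructions."""
    prompt = (f"{instruction}\n\n{few_shot_demos}\n\nDialog:\n"
              + "\n".join(dialog) + "\nScore (1-5):")
    return float(llm(prompt).strip().split()[0])

def pearson_r(xs: List[float], ys: List[float]) -> float:
    """System-level agreement between model scores and human judgments."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)
```

System-level evaluation would then average score_dialog over many synthetic dialogs per bot and compare the per-bot means against human ratings with pearson_r.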
Related papers
- Synthetic Dialogue Dataset Generation using LLM Agents [7.933485970511388]
We develop two agents that "talk" to each other, one acting as the conversational agent, and the other acting as the user.
Using a set of text descriptions of linear problems from NL4Opt, available only to the user, the agent and the user engage in conversation until the agent has retrieved all key information from the original problem description.
We conduct human and automatic evaluations, including an evaluation approach that uses GPT-4 to mimic the human evaluation metrics.
arXiv Detail & Related papers (2024-01-30T21:49:30Z)
- Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential for developing more powerful conversational recommender systems (CRSs).
In this paper, we investigate the use of ChatGPT for conversational recommendation and reveal the inadequacy of the existing evaluation protocol.
We propose an interactive evaluation approach based on LLMs, named iEvaLM, that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z)
- Is MultiWOZ a Solved Task? An Interactive TOD Evaluation Framework with User Simulator [37.590563896382456]
We propose an interactive evaluation framework for Task-Oriented Dialogue (TOD) systems.
We first build a goal-oriented user simulator based on pre-trained models and then use the user simulator to interact with the dialogue system to generate dialogues.
Experimental results show that RL-based TOD systems trained by our proposed user simulator can achieve nearly 98% inform and success rates.
arXiv Detail & Related papers (2022-10-26T07:41:32Z)
- Dialogue Evaluation with Offline Reinforcement Learning [2.580163308334609]
Task-oriented dialogue systems aim to fulfill user goals through natural language interactions.
They are ideally evaluated with human users, which is infeasible at every iteration of the development phase.
We propose the use of offline reinforcement learning for dialogue evaluation based on a static corpus.
arXiv Detail & Related papers (2022-09-02T08:32:52Z)
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088]
We introduce GODEL, a large pre-trained language model for dialog.
We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups.
A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
arXiv Detail & Related papers (2022-06-22T18:19:32Z)
- What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation [73.03318027164605]
We propose to use information that can be automatically extracted from the next user utterance as a proxy to measure the quality of the previous system response.
Our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users.
arXiv Detail & Related papers (2022-03-25T22:09:52Z)
- User Response and Sentiment Prediction for Automatic Dialogue Evaluation [69.11124655437902]
We propose to use the sentiment of the next user utterance for turn- or dialog-level evaluation (a sketch of this proxy idea appears after this list).
Experiments show that our model outperforms existing automatic evaluation metrics on both written and spoken open-domain dialogue datasets.
arXiv Detail & Related papers (2021-11-16T22:19:17Z)
- Towards Automatic Evaluation of Dialog Systems: A Model-Free Off-Policy Evaluation Approach [84.02388020258141]
We propose a new framework named ENIGMA for estimating human evaluation scores based on off-policy evaluation in reinforcement learning.
ENIGMA requires only a handful of pre-collected experience data and therefore does not involve human interaction with the target policy during evaluation.
Our experiments show that ENIGMA significantly outperforms existing methods in terms of correlation with human evaluation scores.
arXiv Detail & Related papers (2021-02-20T03:29:20Z)
- Learning an Unreferenced Metric for Online Dialogue Evaluation [53.38078951628143]
We propose an unreferenced automated evaluation metric that uses large pre-trained language models to extract latent representations of utterances.
We show that our model achieves higher correlation with human annotations in an online setting, while not requiring true responses for comparison during inference.
arXiv Detail & Related papers (2020-05-01T20:01:39Z)
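Two of the entries above ("What is wrong with you?" and "User Response and Sentiment Prediction") share the idea of using the next user utterance as a proxy signal for the quality of the preceding system response. The following is a minimal sketch of that proxy, with the sentiment scorer left as an abstract callable and the turn pairing and averaging as illustrative assumptions; the papers themselves train dedicated models rather than using this simple recipe.

```python
# Illustrative sketch of the "next-user-utterance sentiment as a quality
# proxy" idea; the sentiment scorer is an abstract callable.
from typing import Callable, List, Tuple

def proxy_dialog_score(turns: List[Tuple[str, str]],
                       sentiment: Callable[[str], float]) -> float:
    """Score a dialog given (system_response, next_user_utterance) pairs:
    each system response is judged by the sentiment of the user's reply,
    and the dialog-level score is the mean over turns."""
    if not turns:
        raise ValueError("need at least one (system, user) turn pair")
    return sum(sentiment(user_utt) for _, user_utt in turns) / len(turns)
```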