Modeling and Utilizing User's Internal State in Movie Recommendation Dialogue
- URL: http://arxiv.org/abs/2012.03118v1
- Date: Sat, 5 Dec 2020 20:50:53 GMT
- Title: Modeling and Utilizing User's Internal State in Movie Recommendation Dialogue
- Authors: Takashi Kodama, Ribeka Tanaka, Sadao Kurohashi
- Abstract summary: We model the user's internal state (UIS) in dialogues and construct a dialogue system that changes its response based on the UIS.
We train the UIS estimators on a dialogue corpus annotated with the modeled UIS.
We also design response change rules that adapt the system's responses according to each UIS element.
- Score: 17.87695990289955
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent dialogue systems are expected to serve as a new interface
between humans and machines. Such an intelligent dialogue system should estimate
the user's internal state (UIS) in dialogues and change its responses appropriately
according to the estimation result. In this paper, we model the UIS in dialogues,
taking movie recommendation dialogues as examples, and construct a dialogue system
that changes its responses based on the UIS. Based on an analysis of the dialogue
data, we model the UIS as three elements: knowledge, interest, and engagement. We
train the UIS estimators on a dialogue corpus annotated with the modeled UIS. The
estimators achieved high estimation accuracy. We also design response change rules
that adapt the system's responses according to each UIS element. We confirmed that
changing responses based on the UIS estimators' results improved the naturalness of
system utterances in both dialogue-wise and utterance-wise evaluations.
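To make the pipeline concrete, here is a minimal Python sketch of UIS-conditioned response selection, assuming the paper's three UIS elements (knowledge, interest, engagement). The cue words, engagement heuristic, and rule texts are illustrative stand-ins, not the paper's trained estimators or actual response change rules.

```python
# A minimal sketch of UIS-conditioned response selection; heuristics and
# rule texts are hypothetical, standing in for the paper's trained
# estimators and hand-designed rules.
from dataclasses import dataclass


@dataclass
class UserInternalState:
    knowledge: bool   # does the user already know the recommended movie?
    interest: bool    # is the user interested in the current topic?
    engagement: int   # how actively engaged the user is (1 = low, 3 = high)


def estimate_uis(user_utterance: str) -> UserInternalState:
    """Toy stand-in for the trained UIS estimators (keyword heuristics only)."""
    text = user_utterance.lower()
    knowledge = any(cue in text for cue in ("i've seen", "i watched", "i know"))
    interest = any(cue in text for cue in ("sounds", "interesting", "tell me more"))
    engagement = 3 if len(text.split()) > 8 else 1  # crude proxy: longer = more engaged
    return UserInternalState(knowledge, interest, engagement)


def respond(base_response: str, uis: UserInternalState) -> str:
    """Hypothetical response change rules keyed on each UIS element."""
    if uis.knowledge:
        # Don't summarize a movie the user already knows; move the dialogue on.
        return "Since you already know it, how about a similar film?"
    if not uis.interest:
        # Switch topics instead of elaborating on an unwanted recommendation.
        return "Maybe that's not your thing. What genres do you usually enjoy?"
    if uis.engagement >= 3:
        return base_response + " I can go into more detail if you like."
    return base_response


uis = estimate_uis("Oh, I watched that one last year and really liked it!")
print(respond("It's a 2019 sci-fi film about first contact.", uis))
# -> "Since you already know it, how about a similar film?"
```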
Related papers
- Training Dialogue Systems by AI Feedback for Improving Overall Dialogue Impression [9.005722141359675] (2025-01-22)
  This study prepared reward models corresponding to 12 metrics related to the impression of the entire dialogue for evaluating dialogue responses. We tuned our dialogue models using the reward model signals as feedback to improve the impression of the system.
- ComperDial: Commonsense Persona-grounded Dialogue Dataset and Benchmark [26.100299485985197] (2024-06-17)
  ComperDial consists of human-scored responses for 10,395 dialogue turns in 1,485 conversations collected from 99 dialogue agents. In addition to single-turn response scores, ComperDial also contains dialogue-level human-annotated scores. Building off ComperDial, we devise a new automatic evaluation metric to measure the general similarity of model-generated dialogues to human conversations.
- Are cascade dialogue state tracking models speaking out of turn in spoken dialogues? [1.786898113631979] (2023-11-03)
  This paper proposes a comprehensive analysis of the errors of state-of-the-art systems in complex settings such as Dialogue State Tracking. Based on spoken MultiWoz, we identify that addressing errors on non-categorical slot values is essential to bridge the gap between spoken and chat-based dialogue systems.
- GODEL: Large-Scale Pre-Training for Goal-Directed Dialog [119.1397031992088] (2022-06-22)
  We introduce GODEL, a large pre-trained language model for dialog. We show that GODEL outperforms state-of-the-art pre-trained dialog models in few-shot fine-tuning setups. A novel feature of our evaluation methodology is the introduction of a notion of utility that assesses the usefulness of responses.
- User Response and Sentiment Prediction for Automatic Dialogue Evaluation [69.11124655437902] (2021-11-16)
  We propose to use the sentiment of the next user utterance for turn- or dialog-level evaluation (see the sketch after this list). Experiments show our model outperforming existing automatic evaluation metrics on both written and spoken open-domain dialogue datasets.
- DynaEval: Unifying Turn and Dialogue Level Evaluation [60.66883575106898] (2021-06-02)
  We propose DynaEval, a unified automatic evaluation framework. It not only performs turn-level evaluation but also holistically considers the quality of the entire dialogue. Experiments show that DynaEval significantly outperforms the state-of-the-art dialogue coherence model.
- Evaluating Groundedness in Dialogue Systems: The BEGIN Benchmark [29.722504033424382] (2021-04-30)
  Knowledge-grounded dialogue agents are systems designed to conduct a conversation based on externally provided background information, such as a Wikipedia page. We introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN). BEGIN consists of 8,113 dialogue turns generated by language-model-based dialogue systems, accompanied by human annotations specifying the relationship between the system's response and the background information.
- Action State Update Approach to Dialogue Management [16.602804535683553] (2020-11-09)
  We propose the action state update approach (ASU) for utterance interpretation. Our goal is to interpret referring expressions in user input without a domain-specific natural language understanding component. With both user-simulated and interactive human evaluations, we show that the ASU approach successfully interprets user utterances in a dialogue system.
- Rethinking Dialogue State Tracking with Reasoning [76.0991910623001] (2020-05-27)
  This paper proposes to track dialogue states gradually, reasoning over dialogue turns with the help of back-end data. Empirical results demonstrate that our method significantly outperforms state-of-the-art methods by 38.6% in terms of joint belief accuracy on MultiWOZ 2.1.
- Is Your Goal-Oriented Dialog Model Performing Really Well? Empirical Analysis of System-wise Evaluation [114.48767388174218] (2020-05-15)
  This paper presents an empirical analysis of different types of dialog systems composed of different modules in different settings. Our results show that a pipeline dialog system trained using fine-grained supervision signals at different component levels often obtains better performance than systems that use joint or end-to-end models trained on coarse-grained labels.
- Dialogue-Based Relation Extraction [53.2896545819799] (2020-04-17)
  We present the first human-annotated dialogue-based relation extraction (RE) dataset, DialogRE. We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks. Experimental results demonstrate that a speaker-aware extension of the best-performing model leads to gains in both the standard and conversational evaluation settings.
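As referenced in the sentiment-prediction entry above, the following Python sketch illustrates the idea of scoring a system response by the sentiment of the user's next utterance. The lexicon, tokenizer, and function names are hypothetical simplifications, not that paper's trained sentiment model.

```python
# A toy sketch of "next user utterance sentiment" as an automatic dialogue
# evaluation signal; the lexicon-based scorer below stands in for a
# trained sentiment predictor.
import re

POSITIVE = {"great", "thanks", "cool", "love", "awesome", "good", "interesting"}
NEGATIVE = {"no", "boring", "wrong", "bad", "annoying", "stop"}


def sentiment_score(utterance: str) -> float:
    """Lexicon-based score in [-1, 1]; a trained classifier would replace this."""
    tokens = re.findall(r"[a-z']+", utterance.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


def evaluate_turn(system_response: str, next_user_utterance: str) -> float:
    """Rate a system response by the sentiment of the user's reaction to it;
    the response text itself is unused here because the user's reaction is
    the evaluation signal."""
    return sentiment_score(next_user_utterance)


score = evaluate_turn("How about the movie Arrival?",
                      "Oh cool, that sounds really good, thanks!")
print(score)  # -> 1.0 (positive hits: cool, good, thanks; no negative hits)
```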
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.