Do Response Selection Models Really Know What's Next? Utterance
Manipulation Strategies for Multi-turn Response Selection
- URL: http://arxiv.org/abs/2009.04703v2
- Date: Wed, 16 Dec 2020 11:28:20 GMT
- Title: Do Response Selection Models Really Know What's Next? Utterance
Manipulation Strategies for Multi-turn Response Selection
- Authors: Taesun Whang, Dongyub Lee, Dongsuk Oh, Chanhee Lee, Kijong Han,
Dong-hun Lee, Saebyeok Lee
- Abstract summary: We study the task of selecting the optimal response given a user and system utterance history in retrieval-based dialog systems.
We propose utterance manipulation strategies (UMS) to address this problem.
UMS consist of several strategies (i.e., insertion, deletion, and search) that help the response selection model maintain dialog coherence.
- Score: 11.465266718370536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the task of selecting the optimal response given a
user and system utterance history in retrieval-based multi-turn dialog systems.
Recently, pre-trained language models (e.g., BERT, RoBERTa, and ELECTRA) showed
significant improvements in various natural language processing tasks. This and
similar response selection tasks can also be solved using such language models
by formulating the tasks as dialog--response binary classification tasks.
Although existing works using this approach successfully obtained
state-of-the-art results, we observe that language models trained in this
manner tend to make predictions based on the relatedness of history and
candidates, ignoring the sequential nature of multi-turn dialog systems. This
suggests that the response selection task alone is insufficient for learning
temporal dependencies between utterances. To this end, we propose utterance
manipulation strategies (UMS) to address this problem. Specifically, UMS
consist of several strategies (i.e., insertion, deletion, and search) that
help the response selection model maintain dialog coherence. Further,
UMS are self-supervised methods that do not require additional annotation and
thus can be easily incorporated into existing approaches. Extensive evaluation
across multiple languages and models shows that UMS are highly effective in
teaching dialog consistency, which leads to models pushing the state-of-the-art
with significant margins on multiple public benchmark datasets.
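The three UMS strategies described above can be sketched as self-supervised example builders over a turn-ordered dialog. This is a minimal illustration under assumptions: the function names, dictionary label format, and sampling scheme are hypothetical and not the authors' implementation.

```python
import random

def make_insertion_example(dialog):
    """Insertion: remove one utterance; the model must pick where it belongs."""
    i = random.randrange(len(dialog))
    target = dialog[i]
    context = dialog[:i] + dialog[i + 1:]
    return {"context": context, "utterance": target, "position": i}

def make_deletion_example(dialog, distractor):
    """Deletion: splice in a random utterance; the model must find and remove it."""
    i = random.randrange(len(dialog) + 1)
    corrupted = dialog[:i] + [distractor] + dialog[i:]
    return {"dialog": corrupted, "deleted_index": i}

def make_search_example(dialog, other_utterances):
    """Search: shuffle candidates; the model must retrieve the true next turn."""
    context, target = dialog[:-1], dialog[-1]
    candidates = [target] + list(other_utterances)
    random.shuffle(candidates)
    return {"context": context,
            "candidates": candidates,
            "label": candidates.index(target)}
```

Because each builder derives its label from the dialog's own turn order, no additional annotation is needed, which is what lets UMS be bolted onto an existing PLM-based response selection pipeline.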
Related papers
- Do LLMs suffer from Multi-Party Hangover? A Diagnostic Approach to Addressee Recognition and Response Selection in Conversations [11.566214724241798]
We propose a methodological pipeline to investigate model performance across specific structural attributes of conversations.
We focus on Response Selection and Addressee Recognition tasks, to diagnose model weaknesses.
Results show that response selection relies more on the textual content of conversations, while addressee recognition requires capturing their structural dimension.
arXiv Detail & Related papers (2024-09-27T10:07:33Z) - UniMS-RAG: A Unified Multi-source Retrieval-Augmented Generation for Personalized Dialogue Systems [43.266153244137215]
Large Language Models (LLMs) have shown exceptional capabilities in many natural language understanding and generation tasks.
We decompose the use of multiple sources in generating personalized response into three sub-tasks: Knowledge Source Selection, Knowledge Retrieval, and Response Generation.
We propose a novel Unified Multi-Source Retrieval-Augmented Generation system (UniMS-RAG)
arXiv Detail & Related papers (2024-01-24T06:50:20Z) - DialCLIP: Empowering CLIP as Multi-Modal Dialog Retriever [83.33209603041013]
We propose a parameter-efficient prompt-tuning method named DialCLIP for multi-modal dialog retrieval.
Our approach introduces a multi-modal context generator to learn context features which are distilled into prompts within the pre-trained vision-language model CLIP.
To facilitate various types of retrieval, we also design multiple experts to learn mappings from CLIP outputs to multi-modal representation space.
arXiv Detail & Related papers (2024-01-02T07:40:12Z) - JoTR: A Joint Transformer and Reinforcement Learning Framework for
Dialog Policy Learning [53.83063435640911]
Dialogue policy learning (DPL) is a crucial component of dialogue modelling.
We introduce a novel framework, JoTR, to generate flexible dialogue actions.
Unlike traditional methods, JoTR formulates a word-level policy that allows for a more dynamic and adaptable dialogue action generation.
arXiv Detail & Related papers (2023-09-01T03:19:53Z) - Stabilized In-Context Learning with Pre-trained Language Models for Few
Shot Dialogue State Tracking [57.92608483099916]
Large pre-trained language models (PLMs) have shown impressive unaided performance across many NLP tasks.
For more complex tasks such as dialogue state tracking (DST), designing prompts that reliably convey the desired intent is nontrivial.
We introduce a saliency model to limit dialogue text length, allowing us to include more exemplars per query.
arXiv Detail & Related papers (2023-02-12T15:05:10Z) - Context-Aware Language Modeling for Goal-Oriented Dialogue Systems [84.65707332816353]
We formulate goal-oriented dialogue as a partially observed Markov decision process.
We derive a simple and effective method to finetune language models in a goal-aware way.
We evaluate our method on a practical flight-booking task using AirDialogue.
arXiv Detail & Related papers (2022-04-18T17:23:11Z) - Small Changes Make Big Differences: Improving Multi-turn Response
Selection in Dialogue Systems via Fine-Grained Contrastive Learning [27.914380392295815]
Retrieval-based dialogue response selection aims to find a proper response from a candidate set given a multi-turn context.
We propose a novel Fine-Grained Contrastive (FGC) learning method for the response selection task based on PLMs.
arXiv Detail & Related papers (2021-11-19T11:07:07Z) - Learning an Effective Context-Response Matching Model with
Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
arXiv Detail & Related papers (2020-09-14T08:44:46Z) - Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based
Chatbots [47.40380290055558]
A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of the speaker change information.
A speaker-aware disentanglement strategy is proposed to tackle the entangled dialogues.
arXiv Detail & Related papers (2020-04-07T02:08:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.