Conversational Query Rewriting with Self-supervised Learning
- URL: http://arxiv.org/abs/2102.04708v1
- Date: Tue, 9 Feb 2021 08:57:53 GMT
- Title: Conversational Query Rewriting with Self-supervised Learning
- Authors: Hang Liu, Meng Chen, Youzheng Wu, Xiaodong He, Bowen Zhou
- Abstract summary: Conversational Query Rewriting (CQR) aims to simplify multi-turn dialogue modeling into a single-turn problem by explicitly rewriting the conversational query into a self-contained utterance.
Existing approaches rely on massive supervised training data, which is labor-intensive to annotate.
We propose to construct a large-scale CQR dataset automatically via self-supervised learning, which does not need human annotation.
- Score: 36.392717968127016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context modeling plays a critical role in building multi-turn dialogue
systems. Conversational Query Rewriting (CQR) aims to simplify the multi-turn
dialogue modeling into a single-turn problem by explicitly rewriting the
conversational query into a self-contained utterance. However, existing
approaches rely on massive supervised training data, which is labor-intensive
to annotate. And the detection of the omitted important information from
context can be further improved. Besides, intent consistency constraint between
contextual query and rewritten query is also ignored. To tackle these issues,
we first propose to construct a large-scale CQR dataset automatically via
self-supervised learning, which does not need human annotation. Then we
introduce a novel CQR model Teresa based on Transformer, which is enhanced by
self-attentive keywords detection and intent consistency constraint. Finally,
we conduct extensive experiments on two public datasets. Experimental results
demonstrate that our proposed model significantly outperforms existing CQR
baselines and confirm the effectiveness of self-supervised learning in
improving CQR performance.
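The abstract does not detail the self-supervised construction procedure; the sketch below shows one plausible recipe, corrupting a self-contained turn so that the original becomes the rewrite target. The `make_cqr_pair` heuristic and its pronoun substitution are illustrative assumptions, not the paper's published pipeline.

```python
# Minimal sketch of self-supervised CQR pair construction. Illustrative only:
# the paper builds its dataset automatically, but this exact heuristic
# (entity-to-pronoun corruption) is an assumption, not the published recipe.
from typing import Optional

def make_cqr_pair(context: str, query: str) -> Optional[tuple]:
    """Corrupt a self-contained query so the original becomes the target.

    If a content word in the query also appears in the dialogue context,
    replace it with a pronoun to simulate an under-specified follow-up.
    The model then learns: (context, corrupted query) -> original query.
    """
    context_words = {w.strip(".,?!").lower() for w in context.split()}
    for token in query.split():
        word = token.strip(".,?!")
        if len(word) > 3 and word.lower() in context_words:
            return query.replace(word, "it", 1), query
    return None

pair = make_cqr_pair("I watched Inception last night.", "Who directed Inception?")
print(pair)  # ('Who directed it?', 'Who directed Inception?')
```

The appeal of such a scheme is that any large dialogue corpus yields (context, corrupted query, rewrite) triples for free, which matches the paper's stated goal of avoiding human annotation.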
Related papers
- AdaCQR: Enhancing Query Reformulation for Conversational Search via Sparse and Dense Retrieval Alignment [16.62505706601199]
We present a novel framework AdaCQR for conversational search reformulation.
By aligning reformulation models with both term-based and semantic-based retrieval systems, AdaCQR enhances the generalizability of information-seeking queries.
Experimental evaluations on the TopiOCQA and QReCC datasets demonstrate that AdaCQR significantly outperforms existing methods.
arXiv Detail & Related papers (2024-07-02T05:50:16Z) - Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z) - Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue [22.103162555263143]
We introduce contrastive learning and multi-task learning to jointly model the problem.
Our proposed model achieves state-of-the-art performance on several public datasets.
arXiv Detail & Related papers (2022-03-22T10:13:27Z) - elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z) - CSAGN: Conversational Structure Aware Graph Network for Conversational Semantic Role Labeling [27.528361001332264]
We present a simple and effective architecture for CSRL.
Our model is based on a conversational structure-aware graph network that explicitly encodes speaker-dependent information.
arXiv Detail & Related papers (2021-09-23T07:47:28Z) - Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware Calibration (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
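As a rough illustration of what a rank-aware, multi-level contrastive objective can look like, here is a minimal PyTorch sketch; the margin loss form, embedding shapes, and quality levels are assumptions, not the paper's actual RC network.

```python
# Illustrative sketch of a multi-level (rank-aware) contrastive objective,
# assuming responses are grouped into quality levels. Names and the margin
# value are hypothetical, not taken from the paper.
import torch
import torch.nn.functional as F

def multilevel_contrastive_loss(query_emb, response_embs, margin=0.2):
    """response_embs: list of tensors ordered from best to worst response.

    Enforce sim(query, level_i) >= sim(query, level_j) + margin for i < j,
    so higher-quality responses score strictly higher than lower-quality ones.
    """
    sims = [F.cosine_similarity(query_emb, r, dim=-1) for r in response_embs]
    loss = query_emb.new_zeros(())
    for i in range(len(sims) - 1):
        # margin ranking between adjacent quality levels
        loss = loss + F.relu(margin - (sims[i] - sims[i + 1])).mean()
    return loss

q = torch.randn(8, 128)                           # batch of query embeddings
levels = [torch.randn(8, 128) for _ in range(3)]  # e.g. gold > plausible > random
print(multilevel_contrastive_loss(q, levels))
```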
arXiv Detail & Related papers (2020-09-19T02:41:04Z) - Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experimental results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
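A minimal sketch of the multi-task setup, combining the main response-selection loss with the four auxiliary self-supervised losses; the task weights below are hypothetical, not values reported in the paper.

```python
# Sketch of multi-task training with auxiliary self-supervised losses,
# assuming each task exposes its own scalar loss.
import torch

def joint_loss(losses, weights):
    """Weighted sum of the main selection loss and the auxiliary losses."""
    return sum(w * losses[name] for name, w in weights.items())

# Hypothetical task weights; the paper does not report these exact values.
weights = {
    "response_selection": 1.0,          # main task
    "next_session_prediction": 0.5,     # auxiliary self-supervised tasks
    "utterance_restoration": 0.5,
    "incoherence_detection": 0.5,
    "consistency_discrimination": 0.5,
}
losses = {name: torch.rand(()) for name in weights}  # stand-in task losses
print(joint_loss(losses, weights))
```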
arXiv Detail & Related papers (2020-09-14T08:44:46Z) - Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
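The former method can be sketched with simple corpus statistics; the stopword list, cutoff, and `expand_query` helper below are illustrative assumptions rather than the paper's exact frequency-based signals.

```python
# Minimal sketch of frequency-based term importance estimation for query
# expansion. Illustrative only: the actual signals and thresholds differ.
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "it", "what", "who", "of", "in", "about"}

def expand_query(context_turns, query, top_k=2):
    """Append the most frequent non-stopword context terms to the query."""
    counts = Counter(
        w.strip(".,?!").lower()
        for turn in context_turns
        for w in turn.split()
        if w.strip(".,?!").lower() not in STOPWORDS
    )
    expansion = [term for term, _ in counts.most_common(top_k)
                 if term not in query.lower()]
    return query + " " + " ".join(expansion)

turns = ["Tell me about the Eiffel Tower.", "When was the Eiffel Tower built?"]
print(expand_query(turns, "How tall is it?"))
# -> "How tall is it? eiffel tower"
```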
arXiv Detail & Related papers (2020-05-05T14:30:20Z) - MLR: A Two-stage Conversational Query Rewriting Model with Multi-task Learning [16.88648782206587]
We propose MLR, a conversational query rewriting model that performs Multi-task learning over sequence Labeling and query Rewriting.
MLR reformulates multi-turn conversational queries into a single-turn query that concisely conveys the user's true intent.
To train our model, we construct a new Chinese query rewriting dataset and conduct experiments on it.
arXiv Detail & Related papers (2020-04-13T08:04:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.