IART: Intent-aware Response Ranking with Transformers in
Information-seeking Conversation Systems
- URL: http://arxiv.org/abs/2002.00571v1
- Date: Mon, 3 Feb 2020 05:59:52 GMT
- Title: IART: Intent-aware Response Ranking with Transformers in
Information-seeking Conversation Systems
- Authors: Liu Yang, Minghui Qiu, Chen Qu, Cen Chen, Jiafeng Guo, Yongfeng Zhang,
W. Bruce Croft, Haiqing Chen
- Abstract summary: We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model "IART".
IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture.
- Score: 80.0781718687327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personal assistant systems, such as Apple Siri, Google Assistant, Amazon
Alexa, and Microsoft Cortana, are becoming ever more widely used. Understanding
user intent such as clarification questions, potential answers and user
feedback in information-seeking conversations is critical for retrieving good
responses. In this paper, we analyze user intent patterns in
information-seeking conversations and propose an intent-aware neural response
ranking model "IART", which refers to "Intent-Aware Ranking with Transformers".
IART is built on top of the integration of user intent modeling and language
representation learning with the Transformer architecture, which relies
entirely on a self-attention mechanism instead of recurrent nets. It
incorporates intent-aware utterance attention to derive an importance weighting
scheme of utterances in conversation context with the aim of better
conversation history understanding. We conduct extensive experiments with three
information-seeking conversation data sets including both standard benchmarks
and commercial data. Our proposed model outperforms all baseline methods with
respect to a variety of metrics. We also perform case studies and analysis of
learned user intent and its impact on response ranking in information-seeking
conversations to provide interpretation of results.
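The intent-aware utterance attention described in the abstract can be sketched roughly as follows. This is a minimal illustrative reading, not the paper's actual architecture: the function name, tensor shapes, and the way intent distributions are combined with utterance representations are all assumptions made for the example.

```python
import numpy as np

def intent_aware_weights(utterance_embs, intent_probs, W):
    """Score each context utterance by combining its embedding with its
    predicted intent distribution, then softmax into importance weights.

    utterance_embs: (n, d) utterance representations
    intent_probs:   (n, k) per-utterance intent distributions
    W:              (k, d) intent-to-embedding projection (random here;
                    learned in a real model)
    """
    intent_vecs = intent_probs @ W                          # (n, d)
    scores = np.sum(utterance_embs * intent_vecs, axis=1)   # (n,)
    exp = np.exp(scores - scores.max())                     # stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
n, d, k = 4, 8, 3  # 4 context utterances, 8-dim embeddings, 3 intent types
weights = intent_aware_weights(rng.normal(size=(n, d)),
                               rng.dirichlet(np.ones(k), size=n),
                               rng.normal(size=(k, d)))
print(weights)  # importance weights over the 4 context utterances
```

The weights could then scale each utterance's contribution when aggregating the conversation history for response ranking.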
Related papers
- Backtracing: Retrieving the Cause of the Query [7.715089044732362]
We introduce the task of backtracing, in which systems retrieve the text segment that most likely caused a user query.
We evaluate the zero-shot performance of popular information retrieval methods and language modeling methods.
Our results show that there is room for improvement on backtracing and it requires new retrieval approaches.
arXiv Detail & Related papers (2024-03-06T18:59:02Z) - FCC: Fusing Conversation History and Candidate Provenance for Contextual
Response Ranking in Dialogue Systems [53.89014188309486]
We present a flexible neural framework that can integrate contextual information from multiple channels.
We evaluate our model on the MSDialog dataset widely used for evaluating conversational response ranking tasks.
arXiv Detail & Related papers (2023-03-31T23:58:28Z) - End-to-end Spoken Conversational Question Answering: Task, Dataset and
Model [92.18621726802726]
In spoken question answering, the systems are designed to answer questions from contiguous text spans within the related speech transcripts.
We propose a new Spoken Conversational Question Answering task (SCQA), aiming at enabling the systems to model complex dialogue flows.
Our main objective is to build a system that handles conversational questions grounded in audio recordings, and to explore the plausibility of providing additional cues from different modalities to aid information gathering.
arXiv Detail & Related papers (2022-04-29T17:56:59Z) - BERT-CoQAC: BERT-based Conversational Question Answering in Context [10.811729691130349]
We introduce a framework based on a publicly available pre-trained language model called BERT for incorporating history turns into the system.
Experimental results show that our framework is comparable in performance to the state-of-the-art models on the QuAC leaderboard.
arXiv Detail & Related papers (2021-04-23T03:05:17Z) - Dialogue History Matters! Personalized Response Selection in Multi-turn
Retrieval-based Chatbots [62.295373408415365]
We propose a personalized hybrid matching network (PHMN) for context-response matching.
Our contributions are two-fold: 1) our model extracts personalized wording behaviors from user-specific dialogue history as extra matching information.
We evaluate our model on two large datasets with user identification, i.e., the personalized Ubuntu dialogue corpus (P-Ubuntu) and the personalized Weibo dataset (P-Weibo).
arXiv Detail & Related papers (2021-03-17T09:42:11Z) - Abstractive Summarization of Spoken and Written Instructions with BERT [66.14755043607776]
We present the first application of the BERTSum model to conversational language.
We generate abstractive summaries of narrated instructional videos across a wide variety of topics.
We envision this integrated as a feature in intelligent virtual assistants, enabling them to summarize both written and spoken instructional content upon request.
arXiv Detail & Related papers (2020-08-21T20:59:34Z) - Multi-Stage Conversational Passage Retrieval: An Approach to Fusing Term
Importance Estimation and Neural Query Rewriting [56.268862325167575]
We tackle conversational passage retrieval (ConvPR) with query reformulation integrated into a multi-stage ad-hoc IR system.
We propose two conversational query reformulation (CQR) methods: (1) term importance estimation and (2) neural query rewriting.
For the former, we expand conversational queries using important terms extracted from the conversational context with frequency-based signals.
For the latter, we reformulate conversational queries into natural, standalone, human-understandable queries with a pretrained sequence-to-sequence model.
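The frequency-based term importance idea in (1) can be illustrated with a toy sketch. The tokenizer, stopword list, and expansion rule below are assumptions for illustration only, not the paper's actual method:

```python
import re
from collections import Counter

# Tiny illustrative stopword list (a real system would use a proper one)
STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "in", "and",
             "what", "how", "do", "i", "my", "about"}

def expand_query(current_query, context_turns, top_k=3):
    """Expand the current conversational query with the most frequent
    non-stopword terms from earlier turns (a frequency-based signal)."""
    counts = Counter()
    for turn in context_turns:
        for tok in re.findall(r"[a-z]+", turn.lower()):
            if tok not in STOPWORDS:
                counts[tok] += 1
    extra = [t for t, _ in counts.most_common(top_k)
             if t not in current_query.lower()]
    return current_query + " " + " ".join(extra) if extra else current_query

context = ["How do I reset my router password?",
           "My router keeps dropping the wifi connection."]
expanded = expand_query("What about the firmware?", context)
print(expanded)
```

Feeding the expanded query to a standard ad-hoc retriever recovers context ("router") that the bare follow-up question omits.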
arXiv Detail & Related papers (2020-05-05T14:30:20Z) - Learning to Rank Intents in Voice Assistants [2.102846336724103]
We propose a novel Energy-based model for the intent ranking task.
We show our approach outperforms existing state-of-the-art methods by reducing the error rate by 3.8%.
We also evaluate the robustness of our algorithm on the intent ranking task and show our algorithm improves the robustness by 33.3%.
arXiv Detail & Related papers (2020-04-30T21:51:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.