ScopeIt: Scoping Task Relevant Sentences in Documents
- URL: http://arxiv.org/abs/2003.04988v2
- Date: Sun, 15 Nov 2020 09:44:46 GMT
- Title: ScopeIt: Scoping Task Relevant Sentences in Documents
- Authors: Vishwas Suryanarayanan, Barun Patra, Pamela Bhattacharya, Chala Fufa,
Charles Lee
- Abstract summary: We present a neural model for scoping relevant information for the agent from a large query.
We show that when used as a preprocessing step, the model improves performance of both intent detection and entity extraction tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent assistants like Cortana, Siri, Alexa, and Google Assistant are
trained to parse information when the conversation is synchronous and short;
however, for email-based conversational agents, the communication is
asynchronous, and often contains information irrelevant to the assistant. This
makes it harder for the system to accurately detect intents, extract entities
relevant to those intents and thereby perform the desired action. We present a
neural model for scoping relevant information for the agent from a large query.
We show that when used as a preprocessing step, the model improves performance
of both intent detection and entity extraction tasks. We demonstrate the
model's impact on Scheduler (Cortana is the persona of the agent, while
Scheduler is the name of the service. We use them interchangeably in the
context of this paper.) - a virtual conversational meeting scheduling assistant
that interacts asynchronously with users through email. The model helps the
entity extraction and intent detection tasks required by Scheduler achieve an
average gain of 35% in precision without any drop in recall. Additionally, we
demonstrate that the same approach can be used for component-level analysis in
large documents, such as signature block identification.
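The sketch below illustrates the general idea of using a sentence-scoping model as a preprocessing filter: encode each sentence, aggregate context across the email, score each sentence for relevance, and pass only the retained sentences to downstream intent detection and entity extraction. It is a minimal illustration, not the paper's architecture; the choice of a BERT sentence encoder, the bidirectional GRU aggregator, the class names, and the 0.5 threshold are all assumptions made for the example.

```python
# Illustrative sketch only: a sentence-scoping filter used as a preprocessing
# step before intent detection / entity extraction. The encoder, aggregator,
# names, and threshold are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class SentenceScoper(nn.Module):
    """Scores each sentence of an email for task relevance."""

    def __init__(self, encoder_name: str = "bert-base-uncased", hidden: int = 256):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(encoder_name)
        self.encoder = AutoModel.from_pretrained(encoder_name)
        dim = self.encoder.config.hidden_size
        # Inter-sentence aggregation so each sentence sees its neighbours.
        self.aggregator = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)
        self.relevance_head = nn.Linear(2 * hidden, 1)

    def forward(self, sentences: list[str]) -> torch.Tensor:
        # Intra-sentence encoding: one [CLS] vector per sentence.
        batch = self.tokenizer(sentences, padding=True, truncation=True,
                               return_tensors="pt")
        cls = self.encoder(**batch).last_hidden_state[:, 0]   # (n_sent, dim)
        # Inter-sentence context across the whole email.
        ctx, _ = self.aggregator(cls.unsqueeze(0))            # (1, n_sent, 2*hidden)
        return torch.sigmoid(self.relevance_head(ctx)).squeeze(-1).squeeze(0)


def scope_email(model: SentenceScoper, sentences: list[str],
                threshold: float = 0.5) -> list[str]:
    """Keep only the sentences a (trained) scoping model marks as relevant."""
    with torch.no_grad():
        scores = model(sentences)
    return [s for s, p in zip(sentences, scores.tolist()) if p >= threshold]


# Usage: the filtered sentences are what downstream intent detection and
# entity extraction models would consume. An untrained model is instantiated
# here only to keep the example self-contained.
if __name__ == "__main__":
    email = [
        "Hi team, great seeing everyone at the offsite last week.",
        "Cortana, please find 30 minutes for us to meet next Tuesday.",
        "Thanks, Alex",
    ]
    scoper = SentenceScoper()
    print(scope_email(scoper, email))
```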
Related papers
- Text-Based Detection of On-Hold Scripts in Contact Center Calls
Average hold time is a concern for call centers because it affects customer satisfaction.
This study presents a natural language processing model that detects on-hold phrases in customer service calls transcribed by automatic speech recognition technology.
arXiv Detail & Related papers (2024-07-13T11:11:41Z)
- Adapting Task-Oriented Dialogue Models for Email Conversations
In this paper, we provide an effective transfer learning framework (EMToD) that allows the latest developments in dialogue models to be adapted for long-form conversations.
We show that the proposed EMToD framework improves intent detection performance over pre-trained language models by 45% and over pre-trained dialogue models by 30% for task-oriented email conversations.
arXiv Detail & Related papers (2022-08-19T16:41:34Z)
- Generative Conversational Networks
We propose a framework called Generative Conversational Networks, in which conversational agents learn to generate their own labelled training data.
We show an average improvement of 35% in intent detection and 21% in slot tagging over a baseline model trained from the seed data.
arXiv Detail & Related papers (2021-06-15T23:19:37Z)
- R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
arXiv Detail & Related papers (2020-12-16T13:11:30Z)
- Learning to Match Jobs with Resumes from Sparse Interaction Data using Multi-View Co-Teaching Network
Job-resume interaction data is sparse and noisy, which affects the performance of job-resume match algorithms.
We propose a novel multi-view co-teaching network from sparse interaction data for job-resume matching.
Our model is able to outperform state-of-the-art methods for job-resume matching.
arXiv Detail & Related papers (2020-09-25T03:09:54Z)
- Query Understanding via Intent Description Generation
We propose a novel Query-to-Intent-Description (Q2ID) task for query understanding.
Unlike existing ranking tasks which leverage the query and its description to compute the relevance of documents, Q2ID is a reverse task which aims to generate a natural language intent description.
We demonstrate the effectiveness of our model by comparing with several state-of-the-art generation models on the Q2ID task.
arXiv Detail & Related papers (2020-08-25T08:56:40Z)
- Learning with Weak Supervision for Email Intent Detection
We propose to leverage user actions as a source of weak supervision to detect intents in emails.
We develop an end-to-end robust deep neural network model for email intent identification.
arXiv Detail & Related papers (2020-05-26T23:41:05Z)
- Intent Mining from past conversations for conversational agent
Bots are increasingly being deployed to provide round-the-clock support and to increase customer engagement.
Many of the commercial bot building frameworks follow a standard approach that requires one to build and train an intent model to recognize a user input.
We have introduced a novel density-based clustering algorithm ITERDB-LabelSCAN for unbalanced data clustering.
arXiv Detail & Related papers (2020-05-22T05:29:13Z)
- IART: Intent-aware Response Ranking with Transformers in Information-seeking Conversation Systems
We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model, "IART".
IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture.
arXiv Detail & Related papers (2020-02-03T05:59:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.