Building chatbots from large scale domain-specific knowledge bases:
challenges and opportunities
- URL: http://arxiv.org/abs/2001.00100v1
- Date: Tue, 31 Dec 2019 22:40:30 GMT
- Title: Building chatbots from large scale domain-specific knowledge bases:
challenges and opportunities
- Authors: Walid Shalaby, Adriano Arantes, Teresa Gonzalez Diaz, Chetan Gupta
- Abstract summary: We describe the challenges and lessons learned from building a large scale virtual assistant for understanding and responding to equipment-related complaints.
We show through evaluation on a real dataset that the proposed framework, compared to popular off-the-shelf ones, scales better with a large volume of entities, achieving up to 30% higher accuracy.
- Score: 4.129225533930966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Popular conversational agents frameworks such as Alexa Skills Kit (ASK) and
Google Actions (gActions) offer unprecedented opportunities for facilitating
the development and deployment of voice-enabled AI solutions in various
verticals. Nevertheless, understanding user utterances with high accuracy
remains a challenging task with these frameworks, particularly when building
chatbots with a large volume of domain-specific entities. In this paper, we
describe the challenges and lessons learned from building a large scale virtual
assistant for understanding and responding to equipment-related complaints. In
the process, we describe an alternative scalable framework for: 1) extracting
the knowledge about equipment components and their associated problem entities
from short texts, and 2) learning to identify such entities in user utterances.
We show through evaluation on a real dataset that the proposed framework,
compared to popular off-the-shelf ones, scales better with a large volume of
entities, achieving up to 30% higher accuracy, and is more effective in
understanding user utterances with domain-specific entities.
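The two steps the abstract describes can be illustrated with a minimal sketch: a knowledge base mapping equipment components to their associated problem entities, and a lookup that identifies those entities in a user utterance. This is not the authors' actual framework; all names and data below are illustrative assumptions.

```python
# Step 1 (toy stand-in): a knowledge base mapping equipment components
# to known problem phrases extracted from short texts.
KNOWLEDGE_BASE = {
    "compressor": ["overheating", "oil leak", "abnormal noise"],
    "fan belt": ["slipping", "worn out"],
}

def extract_entities(utterance: str) -> dict:
    """Step 2 (toy stand-in): identify component and problem mentions
    in a user utterance by dictionary lookup, longest names first."""
    text = utterance.lower()
    found = {"components": [], "problems": []}
    # Check longer component names first so "fan belt" wins over "fan".
    for component in sorted(KNOWLEDGE_BASE, key=len, reverse=True):
        if component in text:
            found["components"].append(component)
            for problem in KNOWLEDGE_BASE[component]:
                if problem in text:
                    found["problems"].append(problem)
    return found
```

For example, `extract_entities("the compressor is overheating and has an oil leak")` returns both the component and its two problem entities. A real deployment at the scale the paper describes would replace the literal substring match with a learned entity-recognition model, which is exactly the gap the proposed framework addresses.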
Related papers
- OntoChat: a Framework for Conversational Ontology Engineering using Language Models [0.3141085922386211]
We introduce OntoChat, a framework for conversational ontology engineering that supports requirement elicitation, analysis, and testing.
We evaluate OntoChat by replicating the engineering of the Music Meta Ontology, and collecting preliminary metrics on the effectiveness of each component from users.
arXiv Detail & Related papers (2024-03-09T14:04:06Z) - VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks [93.85005277463802]
VisualWebArena is a benchmark designed to assess the performance of multimodal web agents on realistic tasks.
To perform on this benchmark, agents need to accurately process image-text inputs, interpret natural language instructions, and execute actions on websites to accomplish user-defined objectives.
arXiv Detail & Related papers (2024-01-24T18:35:21Z) - Knowledge Plugins: Enhancing Large Language Models for Domain-Specific
Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
arXiv Detail & Related papers (2023-11-16T07:09:38Z) - Beyond the Imitation Game: Quantifying and extrapolating the
capabilities of language models [648.3665819567409]
Language models demonstrate both quantitative improvement and new qualitative capabilities with increasing scale.
Big-bench consists of 204 tasks, contributed by 450 authors across 132 institutions.
We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench.
arXiv Detail & Related papers (2022-06-09T17:05:34Z) - Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue
Systems [109.16553492049441]
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
arXiv Detail & Related papers (2022-03-20T17:51:49Z) - Building a Legal Dialogue System: Development Process, Challenges and
Opportunities [1.433758865948252]
This paper presents key principles and solutions to the challenges faced in designing a domain-specific conversational agent for the legal domain.
It provides functionality in answering user queries and recording user information including contact details and case-related information.
arXiv Detail & Related papers (2021-09-01T13:35:42Z) - Can I Be of Further Assistance? Using Unstructured Knowledge Access to
Improve Task-oriented Conversational Modeling [39.60614611655266]
This work focuses on responding to these beyond-API-coverage user turns by incorporating external, unstructured knowledge sources.
We introduce novel data augmentation methods for the first two steps and demonstrate that the use of information extracted from dialogue context improves the knowledge selection and end-to-end performances.
arXiv Detail & Related papers (2021-06-16T23:31:42Z) - Beyond Domain APIs: Task-oriented Conversational Modeling with
Unstructured Knowledge Access Track in DSTC9 [21.181446816074704]
This challenge track aims to expand the coverage of task-oriented dialogue systems by incorporating external unstructured knowledge sources.
We define three tasks: knowledge-seeking turn detection, knowledge selection, and knowledge-grounded response generation.
arXiv Detail & Related papers (2021-01-22T18:57:56Z) - Reasoning in Dialog: Improving Response Generation by Context Reading
Comprehension [49.92173751203827]
In multi-turn dialog, utterances do not always take the full form of sentences.
We propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question.
arXiv Detail & Related papers (2020-12-14T10:58:01Z) - ConCET: Entity-Aware Topic Classification for Open-Domain Conversational
Agents [9.870634472479571]
We introduce ConCET: a Concurrent Entity-aware conversational Topic classifier.
We propose a simple and effective method for generating synthetic training data.
We evaluate ConCET on a large dataset of human-machine conversations with real users, collected as part of the Amazon Alexa Prize.
arXiv Detail & Related papers (2020-05-28T06:29:08Z) - IART: Intent-aware Response Ranking with Transformers in
Information-seeking Conversation Systems [80.0781718687327]
We analyze user intent patterns in information-seeking conversations and propose an intent-aware neural response ranking model, IART.
IART is built on top of the integration of user intent modeling and language representation learning with the Transformer architecture.
arXiv Detail & Related papers (2020-02-03T05:59:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information on this site and is not responsible for any consequences of its use.