Lifelong Learning Dialogue Systems: Chatbots that Self-Learn On the Job
- URL: http://arxiv.org/abs/2009.10750v2
- Date: Wed, 24 Feb 2021 00:10:21 GMT
- Title: Lifelong Learning Dialogue Systems: Chatbots that Self-Learn On the Job
- Authors: Bing Liu, Sahisnu Mazumder
- Abstract summary: We propose endowing the system with the ability to continually learn new world knowledge.
We exploit the multi-user environment of such systems to self-learn through interactions with users via verbal and non-verbal means.
- Score: 21.87382385938692
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dialogue systems, also called chatbots, are now used in a wide range of
applications. However, they still have some major weaknesses. One key weakness
is that they are typically trained from manually-labeled data and/or written
with handcrafted rules, and their knowledge bases (KBs) are also compiled by
human experts. Due to the huge amount of manual effort involved, they are
difficult to scale and also tend to produce many errors owing to their limited
ability to understand natural language and the limited knowledge in their KBs.
Thus, the level of user satisfaction is often low. In this paper, we propose to
dramatically improve this situation by endowing the system with the ability to
continually learn (1) new world knowledge, (2) new language expressions to
ground them to actions, and (3) new conversational skills, during conversation
or "on the job" by themselves so that as the systems chat more and more with
users, they become more and more knowledgeable and are better and better able
to understand diverse natural language expressions and improve their
conversational skills. A key approach to achieving these is to exploit the
multi-user environment of such systems to self-learn through interactions with
users via verbal and non-verbal means. The paper discusses not only key challenges
and promising directions to learn from users during conversation but also how
to ensure the correctness of the learned knowledge.
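The abstract's core idea, learning new world knowledge from user utterances and cross-checking it across multiple users before trusting it, can be illustrated with a minimal sketch. This is not the authors' system; the class, the "X is a Y" regex extractor, and the support-count threshold are all hypothetical simplifications chosen only to make the mechanism concrete.

```python
# Illustrative sketch (not the paper's method): a dialogue agent that grows
# its knowledge base (KB) from user utterances "on the job", and uses the
# multi-user environment to verify learned facts before trusting them.
import re

class SelfLearningBot:
    def __init__(self):
        # KB of (subject, relation, object) triples, each mapped to the set
        # of users who asserted it, so facts can be cross-verified.
        self.kb = {}

    def learn(self, utterance, user_id):
        """Extract a simple 'X is a Y' fact and record which user asserted it."""
        match = re.match(r"(\w+) is a (\w+)", utterance.strip(), re.IGNORECASE)
        if not match:
            return None
        triple = (match.group(1).lower(), "is_a", match.group(2).lower())
        self.kb.setdefault(triple, set()).add(user_id)
        return triple

    def is_trusted(self, triple, min_support=2):
        """A fact is trusted once independently asserted by enough distinct
        users -- a crude stand-in for ensuring correctness of learned knowledge."""
        return len(self.kb.get(triple, set())) >= min_support

bot = SelfLearningBot()
bot.learn("Rex is a dog", user_id="u1")
t = bot.learn("Rex is a dog", user_id="u2")
print(t, bot.is_trusted(t))  # ('rex', 'is_a', 'dog') True
```

A real system would replace the regex with an open information-extraction or semantic-parsing component and use richer verification (e.g., asking other users confirmation questions), but the loop — extract, accumulate evidence, promote to trusted knowledge — is the same.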
Related papers
- Think Before You Speak: Cultivating Communication Skills of Large Language Models via Inner Monologue [73.69510478736483]
Large language models (LLMs) can generate fluent, coherent, and diverse responses.
However, they lack a crucial ability: communication skills.
This article aims to empower LLMs with communication skills through inner monologues.
Experimental results show that the proposed CSIM strategy improves the backbone models and outperforms the baselines.
arXiv Detail & Related papers (2023-11-13T16:19:42Z) - VAL: Interactive Task Learning with GPT Dialog Parsing [2.6207405455197827]
Large language models (LLMs) resist brittleness but are not interpretable and cannot learn incrementally.
We present VAL, an ITL system with a new philosophy for LLM/symbolic integration.
We studied users' interactions with VAL in a video game setting, finding that most users could successfully teach VAL using language they felt was natural.
arXiv Detail & Related papers (2023-10-02T20:45:41Z) - Lifelong and Continual Learning Dialogue Systems [14.965054800464259]
This book introduces the new paradigm of lifelong learning dialogue systems.
As the systems chat more and more with users or learn more from external sources, they become more knowledgeable and better at conversing.
arXiv Detail & Related papers (2022-11-12T02:39:41Z) - Using Chatbots to Teach Languages [43.866863322607216]
Our system can adapt to users' language proficiency on the fly.
We provide automatic grammar error feedback to help users learn from their mistakes.
Our next step is to make our system more adaptive to user profile information by using reinforcement learning algorithms.
arXiv Detail & Related papers (2022-07-31T07:01:35Z) - Do As I Can, Not As I Say: Grounding Language in Robotic Affordances [119.29555551279155]
Large language models can encode a wealth of semantic knowledge about the world.
Such knowledge could be extremely useful to robots aiming to act upon high-level, temporally extended instructions expressed in natural language.
We show how low-level skills can be combined with large language models so that the language model provides high-level knowledge about the procedures for performing complex and temporally-extended instructions.
arXiv Detail & Related papers (2022-04-04T17:57:11Z) - Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems [109.16553492049441]
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
arXiv Detail & Related papers (2022-03-20T17:51:49Z) - LISA: Learning Interpretable Skill Abstractions from Language [85.20587800593293]
We propose a hierarchical imitation learning framework that can learn diverse, interpretable skills from language-conditioned demonstrations.
Our method demonstrates a more natural way to condition on language in sequential decision-making problems.
arXiv Detail & Related papers (2022-02-28T19:43:24Z) - Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning.
arXiv Detail & Related papers (2021-10-15T14:36:45Z) - Lifelong Knowledge Learning in Rule-based Dialogue Systems [10.229787631112742]
This paper proposes to build such a learning capability into a rule-based chatbot so that it can continuously acquire new knowledge while chatting with users.
This work is useful because many real-life deployed chatbots are rule-based.
arXiv Detail & Related papers (2020-11-19T13:33:12Z) - Learning Adaptive Language Interfaces through Decomposition [89.21937539950966]
We introduce a neural semantic parsing system that learns new high-level abstractions through decomposition.
Users interactively teach the system by breaking down high-level utterances describing novel behavior into low-level steps.
arXiv Detail & Related papers (2020-10-11T08:27:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.