Document-editing Assistants and Model-based Reinforcement Learning as a
Path to Conversational AI
- URL: http://arxiv.org/abs/2008.12095v1
- Date: Thu, 27 Aug 2020 13:05:51 GMT
- Title: Document-editing Assistants and Model-based Reinforcement Learning as a
Path to Conversational AI
- Authors: Katya Kudashkina, Patrick M. Pilarski, Richard S. Sutton
- Abstract summary: We argue for the domain of voice document editing and for the methods of model-based reinforcement learning.
The advantages of voice document editing are that the domain is tightly scoped and that it provides something for the conversation to be about.
Model-based reinforcement learning is needed in order to genuinely understand the domain of discourse.
- Score: 9.329553018748207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent assistants that follow commands or answer simple questions, such
as Siri and Google search, are among the most economically important
applications of AI. Future conversational AI assistants promise even greater
capabilities and a better user experience through a deeper understanding of the
domain, the user, or the user's purposes. But what domain and what methods are
best suited to researching and realizing this promise? In this article we argue
for the domain of voice document editing and for the methods of model-based
reinforcement learning. The primary advantages of voice document editing are
that the domain is tightly scoped and that it provides something for the
conversation to be about (the document) that is delimited and fully accessible
to the intelligent assistant. The advantages of reinforcement learning in
general are that its methods are designed to learn from interaction without
explicit instruction and that it formalizes the purposes of the assistant.
Model-based reinforcement learning is needed in order to genuinely understand
the domain of discourse and thereby work efficiently with the user to achieve
their goals. Together, voice document editing and model-based reinforcement
learning comprise a promising research direction for achieving conversational
AI.
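The abstract's claim can be made concrete with a toy sketch. The paper argues for model-based RL in general; the example below uses Dyna-Q (one classic model-based method) on an invented document-editing environment where states are document lengths and actions are edit commands. Everything here (the action names, the goal length, the reward) is a hypothetical illustration, not the paper's system: the agent learns a transition model from interaction and then plans with simulated experience from that model, rather than learning from real experience alone.

```python
import random

# Toy environment (illustrative only): a "document" is just its word count,
# and the user's implicit goal is a document of GOAL_LEN words.
ACTIONS = ["append_word", "delete_word"]
GOAL_LEN = 3  # hypothetical target length the user wants

def step(length, action):
    """Apply an edit command; reward the agent when the goal is reached."""
    if action == "append_word":
        length += 1
    elif action == "delete_word" and length > 0:
        length -= 1
    reward = 1.0 if length == GOAL_LEN else 0.0
    return length, reward

def train(episodes=200, planning_steps=10, alpha=0.5, gamma=0.9, eps=0.2):
    q = {}       # Q-values: (state, action) -> value
    model = {}   # learned model of the domain: (state, action) -> (next_state, reward)
    rng = random.Random(0)
    for _ in range(episodes):
        s = rng.randint(0, 6)
        for _ in range(10):
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q.get((s, x), 0.0))
            s2, r = step(s, a)
            # direct RL update from real experience
            best_next = max(q.get((s2, x), 0.0) for x in ACTIONS)
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((s, a), 0.0))
            # model learning, then planning with simulated experience
            model[(s, a)] = (s2, r)
            for _ in range(planning_steps):
                (ps, pa), (ps2, pr) = rng.choice(list(model.items()))
                pbest = max(q.get((ps2, x), 0.0) for x in ACTIONS)
                q[(ps, pa)] = q.get((ps, pa), 0.0) + alpha * (
                    pr + gamma * pbest - q.get((ps, pa), 0.0))
            s = s2
    return q

q = train()
# From a too-long document (length 5), the learned policy should prefer deleting.
policy_at_5 = max(ACTIONS, key=lambda a: q.get((5, a), 0.0))
print(policy_at_5)
```

The planning loop is what makes the agent "model-based" in the paper's sense: because the edit domain is tightly scoped and fully accessible, a learned model of it can be reused for many simulated updates per real interaction.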
Related papers
- Distributed agency in second language learning and teaching through generative AI [0.0]
ChatGPT can provide informal second language practice through chats in written or voice forms.
Instructors can use AI to build learning and assessment materials in a variety of media.
arXiv Detail & Related papers (2024-03-29T14:55:40Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- How to Build an AI Tutor that Can Adapt to Any Course and Provide Accurate Answers Using Large Language Model and Retrieval-Augmented Generation [0.0]
The OpenAI Assistants API allows AI Tutor to easily embed, store, retrieve, and manage files and chat history.
The AI Tutor prototype demonstrates its ability to generate relevant, accurate answers with source citations.
arXiv Detail & Related papers (2023-11-29T15:02:46Z)
- Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations [70.7884839812069]
Large language models (LLMs) have emerged as powerful and general solutions to many natural language tasks.
However, many of the most important applications of language generation are interactive, where an agent has to talk to a person to reach a desired outcome.
In this work, we explore a new method for adapting LLMs with RL for such goal-directed dialogue.
arXiv Detail & Related papers (2023-11-09T18:45:16Z)
- Build-a-Bot: Teaching Conversational AI Using a Transformer-Based Intent Recognition and Question Answering Architecture [15.19996462016215]
This paper proposes an interface for students to learn the principles of artificial intelligence by using a natural language pipeline to train a customized model to answer questions based on their own school curriculums.
The pipeline teaches students data collection, data augmentation, intent recognition, and question answering by having them work through each of these processes while creating their AI agent.
arXiv Detail & Related papers (2022-12-14T22:57:44Z)
- Persona-Based Conversational AI: State of the Art and Challenges [5.7817077975444136]
We explore how persona-based information could help improve the quality of response generation in conversations.
Our study highlights several limitations with current state-of-the-art methods and outlines challenges and future research directions for advancing personalized conversational AI technology.
arXiv Detail & Related papers (2022-12-04T18:16:57Z)
- KETOD: Knowledge-Enriched Task-Oriented Dialogue [77.59814785157877]
Existing studies in dialogue system research mostly treat task-oriented dialogue and chit-chat as separate domains.
We investigate how task-oriented dialogue and knowledge-grounded chit-chat can be effectively integrated into a single model.
arXiv Detail & Related papers (2022-05-11T16:01:03Z)
- Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning.
arXiv Detail & Related papers (2021-10-15T14:36:45Z)
- NaRLE: Natural Language Models using Reinforcement Learning with Emotion Feedback [0.37277730514654556]
"NaRLE" is a framework for improving the natural language understanding of dialogue systems online without the need to collect human labels for customer data.
For two intent classification problems, we empirically show that using reinforcement learning to fine-tune the pre-trained supervised learning models improves performance by up to 43%.
arXiv Detail & Related papers (2021-10-05T16:24:19Z)
- Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems [58.724629408229205]
We demonstrate how traditional supervised learning and a simulator-free adversarial learning method can be used to achieve performance comparable to state-of-the-art RL-based methods.
Our main goal is not to beat reinforcement learning with supervised learning, but to demonstrate the value of rethinking the role of reinforcement learning and supervised learning in optimizing task-oriented dialogue systems.
arXiv Detail & Related papers (2020-09-21T12:04:18Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching (supporting trust calibration and enabling rich forms of teaching feedback) as well as potential drawbacks (an anchoring effect on the model's judgment, and added cognitive workload).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.