Agents Thinking Fast and Slow: A Talker-Reasoner Architecture
- URL: http://arxiv.org/abs/2410.08328v1
- Date: Thu, 10 Oct 2024 19:31:35 GMT
- Title: Agents Thinking Fast and Slow: A Talker-Reasoner Architecture
- Authors: Konstantina Christakopoulou, Shibl Mourad, Maja Matarić
- Abstract summary: Large language models have enabled agents of all kinds to interact with users through natural conversation.
Our approach comprises a fast, intuitive "Talker" agent that synthesizes the conversational response, paired with a slower, deliberative "Reasoner" agent that handles multi-step reasoning, planning, and tool calls.
We describe the new Talker-Reasoner architecture and discuss its advantages, including modularity and decreased latency.
- Score: 1.7114665201319208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models have enabled agents of all kinds to interact with users through natural conversation. Consequently, agents now have two jobs: conversing and planning/reasoning. Their conversational responses must be informed by all available information, and their actions must help to achieve goals. This dichotomy between conversing with the user and doing multi-step reasoning and planning can be seen as analogous to the human systems of "thinking fast and slow" as introduced by Kahneman. Our approach is comprised of a "Talker" agent (System 1) that is fast and intuitive, and tasked with synthesizing the conversational response; and a "Reasoner" agent (System 2) that is slower, more deliberative, and more logical, and is tasked with multi-step reasoning and planning, calling tools, performing actions in the world, and thereby producing the new agent state. We describe the new Talker-Reasoner architecture and discuss its advantages, including modularity and decreased latency. We ground the discussion in the context of a sleep coaching agent, in order to demonstrate real-world relevance.
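To make the division of labor concrete, below is a minimal Python sketch of one conversational turn under a Talker-Reasoner split, based only on the abstract above. The class names, the llm_call stub, and the belief-state representation are illustrative assumptions rather than the authors' implementation; a real system could also run the Reasoner asynchronously, which is one way to realize the decreased-latency advantage mentioned above.

```python
# Illustrative sketch only: class names, the llm_call stub, and the belief-state
# format are assumptions made for this example, not the paper's implementation.
from dataclasses import dataclass, field

def llm_call(prompt: str) -> str:
    # Stand-in for a call to any large language model API.
    return f"[model output for prompt: {prompt[:60]}...]"

@dataclass
class AgentState:
    belief: str = ""            # Reasoner-maintained belief about the user and task
    history: list = field(default_factory=list)

class Talker:
    """System 1: fast and intuitive; reads the current state and replies immediately."""
    def respond(self, user_msg: str, state: AgentState) -> str:
        prompt = (f"Belief: {state.belief}\nHistory: {state.history}\n"
                  f"User: {user_msg}\nReply conversationally:")
        return llm_call(prompt)

class Reasoner:
    """System 2: slower and deliberative; multi-step planning and tool calls that update the state."""
    def update(self, user_msg: str, state: AgentState) -> AgentState:
        plan = llm_call(f"Belief: {state.belief}\nMessage: {user_msg}\n"
                        f"Plan the next steps (call tools if needed):")
        state.belief = llm_call(f"Revise the belief state given this plan: {plan}")
        return state

def turn(user_msg: str, state: AgentState, talker: Talker, reasoner: Reasoner) -> str:
    # The Talker answers right away (low latency); the Reasoner then refreshes the
    # shared state so the next reply is grounded in updated beliefs and plans.
    reply = talker.respond(user_msg, state)
    state.history.append((user_msg, reply))
    reasoner.update(user_msg, state)
    return reply

if __name__ == "__main__":
    state = AgentState()
    print(turn("I keep waking up at 3am.", state, Talker(), Reasoner()))
```

In this sketch the only coupling between the two agents is the shared AgentState, which mirrors the modularity advantage the abstract claims.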
Related papers
- ReSpAct: Harmonizing Reasoning, Speaking, and Acting Towards Building Large Language Model-Based Conversational AI Agents [11.118991548784459]
Large language model (LLM)-based agents have been increasingly used to interact with external environments.
Current frameworks do not enable these agents to work with users to align on the details of their tasks.
This work introduces ReSpAct, a novel framework that combines the essential skills for building task-oriented "conversational" agents.
arXiv Detail & Related papers (2024-11-01T15:57:45Z)
- One Agent Too Many: User Perspectives on Approaches to Multi-agent Conversational AI [10.825570464035872]
We show that users significantly prefer having agent orchestration abstracted away, in terms of both system usability and system performance.
We demonstrate that this mode of interaction is able to provide quality responses that are rated within 1% of human-selected answers.
arXiv Detail & Related papers (2024-01-13T17:30:57Z)
- On the Discussion of Large Language Models: Symmetry of Agents and Interplay with Prompts [51.3324922038486]
This paper reports the empirical results of the interplay of prompts and discussion mechanisms.
It also proposes a scalable discussion mechanism based on conquer and merge.
arXiv Detail & Related papers (2023-11-13T04:56:48Z)
- DUMA: a Dual-Mind Conversational Agent with Fast and Slow Thinking [12.71072798544731]
DUMA embodies a dual-mind mechanism through the utilization of two generative Large Language Models (LLMs) dedicated to fast and slow thinking respectively.
We have constructed a conversational agent to handle online inquiries in the real estate industry.
arXiv Detail & Related papers (2023-10-27T11:43:46Z)
- Generative Agents: Interactive Simulacra of Human Behavior [86.1026716646289]
We introduce generative agents--computational software agents that simulate believable human behavior.
We describe an architecture that extends a large language model to store a complete record of the agent's experiences.
We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims.
arXiv Detail & Related papers (2023-04-07T01:55:19Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
- RMM: A Recursive Mental Model for Dialog Navigation [102.42641990401735]
Language-guided robots must be able to both ask humans questions and understand answers.
Inspired by theory of mind, we propose the Recursive Mental Model (RMM).
We demonstrate that RMM enables better generalization to novel environments.
arXiv Detail & Related papers (2020-05-02T06:57:14Z)
- "Wait, I'm Still Talking!" Predicting the Dialogue Interaction Behavior Using Imagine-Then-Arbitrate Model [24.560203199376478]
In real human-human conversations, humans often send several short messages in sequence for readability, instead of one long message per turn.
We propose a novel Imagine-then-Arbitrate (ITA) neural dialogue model to help the agent decide whether to wait or to make a response directly.
arXiv Detail & Related papers (2020-02-22T04:05:41Z)