Multi-User Chat Assistant (MUCA): a Framework Using LLMs to Facilitate
Group Conversations
- URL: http://arxiv.org/abs/2401.04883v3
- Date: Fri, 16 Feb 2024 06:26:01 GMT
- Title: Multi-User Chat Assistant (MUCA): a Framework Using LLMs to Facilitate
Group Conversations
- Authors: Manqing Mao, Paishun Ting, Yijian Xiang, Mingyang Xu, Julia Chen,
Jianzhe Lin
- Abstract summary: Multi-User Chat Assistant (MUCA) is an LLM-based framework for chatbots specifically designed for group discussions.
MUCA demonstrates effectiveness in group discussions, including appropriate chime-in timing, relevant response content, and improved user engagement.
- Score: 3.918004958992473
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advancements in large language models (LLMs) have provided a new
avenue for chatbot development, while most existing research has primarily
centered on single-user chatbots that focus on deciding "What" to answer after
user inputs. In this paper, we identify that multi-user chatbots involve more
complex 3W design dimensions -- "What" to say, "When" to respond, and "Who" to
answer. We then propose the Multi-User Chat Assistant (MUCA), an
LLM-based framework for chatbots specifically designed for group discussions.
MUCA consists of three main modules: Sub-topic Generator, Dialog Analyzer, and
Utterance Strategies Arbitrator. These modules jointly determine suitable
response contents, timings, and recipients. To ease the optimization of MUCA,
we further propose an LLM-based Multi-User Simulator (MUS) that mimics real
user behavior. This enables faster
simulation of a conversation between the chatbot and simulated users, making
the early development of the chatbot framework much more efficient. Case
studies and user-study results show that MUCA is effective in group
conversations with a small to medium number of participants, chiming in at
appropriate times, producing relevant content, and improving user engagement.
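To make the 3W design dimensions concrete, here is a minimal, hypothetical sketch of the decision loop in Python. The class and heuristics below (a fixed silence threshold for "When", least-active-participant targeting for "Who", a templated prompt for "What") are illustrative assumptions, not MUCA's actual implementation, which uses LLM-based modules (Sub-topic Generator, Dialog Analyzer, Utterance Strategies Arbitrator) for each decision.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Utterance:
    speaker: str
    text: str


@dataclass
class GroupChatAssistant:
    """Toy 3W loop: decide When to chime in, Who to address, What to say."""
    topic: str
    history: list = field(default_factory=list)
    silence_turns: int = 3  # consecutive user turns before the bot chimes in

    def observe(self, utt: Utterance) -> None:
        self.history.append(utt)

    def should_respond(self) -> bool:
        # "When": respond once `silence_turns` user messages pass with no bot reply
        recent = self.history[-self.silence_turns:]
        return (len(recent) == self.silence_turns
                and all(u.speaker != "bot" for u in recent))

    def choose_recipient(self) -> str:
        # "Who": address the least active participant to balance engagement
        counts: dict = {}
        for u in self.history:
            if u.speaker != "bot":
                counts[u.speaker] = counts.get(u.speaker, 0) + 1
        return min(counts, key=lambda s: counts[s])

    def respond(self) -> Optional[Utterance]:
        # "What": MUCA's Sub-topic Generator would produce real content;
        # here we just prompt the chosen recipient on the current topic.
        if not self.should_respond():
            return None
        who = self.choose_recipient()
        reply = Utterance("bot", f"@{who}, what do you think about {self.topic}?")
        self.observe(reply)
        return reply
```

A simulator in the spirit of MUS would replace the `observe` calls with LLM-generated user turns, letting the chime-in and recipient policies be tuned without live participants.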
Related papers
- LLM Roleplay: Simulating Human-Chatbot Interaction [52.03241266241294]
LLM-Roleplay is a goal-oriented, persona-based method to automatically generate diverse multi-turn dialogues simulating human-chatbot interaction.
We collect natural human-chatbot dialogues from different sociodemographic groups and conduct a human evaluation to compare real human-chatbot dialogues with our generated dialogues.
arXiv Detail & Related papers (2024-07-04T14:49:46Z)
- Measuring and Controlling Instruction (In)Stability in Language Model Dialogs [72.38330196290119]
System-prompting is a tool for customizing language-model chatbots, enabling them to follow a specific instruction.
We propose a benchmark to test this assumption, evaluating instruction stability via self-chats.
We reveal significant instruction drift within eight rounds of conversation.
We propose a lightweight method called split-softmax, which compares favorably against two strong baselines.
arXiv Detail & Related papers (2024-02-13T20:10:29Z)
- Multi-Purpose NLP Chatbot: Design, Methodology & Conclusion [0.0]
This research paper provides a thorough analysis of today's chatbot technology landscape.
It presents a flexible system that uses reinforcement learning strategies to improve user interactions and conversational experiences.
The study also explores the complexity of chatbot technology development, the causes that have propelled these advances, and their far-reaching effects on a range of sectors.
arXiv Detail & Related papers (2023-10-13T09:47:24Z)
- MemoChat: Tuning LLMs to Use Memos for Consistent Long-Range Open-Domain Conversation [43.24092422054248]
We propose a pipeline for refining instructions that enables large language models to effectively employ self-composed memos.
We demonstrate a long-range open-domain conversation through iterative "memorization-retrieval-response" cycles.
The instructions are reconstructed from a collection of public datasets to teach the LLMs to memorize and retrieve past dialogues with structured memos.
arXiv Detail & Related papers (2023-08-16T09:15:18Z)
- ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models [125.7209927536255]
We propose ChatCoT, a tool-augmented chain-of-thought reasoning framework for chat-based LLMs.
In ChatCoT, we model the chain-of-thought (CoT) reasoning as multi-turn conversations, to utilize tools in a more natural way through chatting.
Our approach can effectively leverage the multi-turn conversation ability of chat-based LLMs, and integrate the thought chain following and tools manipulation in a unified way.
arXiv Detail & Related papers (2023-05-23T17:54:33Z)
- Prompted LLMs as Chatbot Modules for Long Open-domain Conversation [7.511596831927614]
We propose MPC, a new approach for creating high-quality conversational agents without the need for fine-tuning.
Our method utilizes pre-trained large language models (LLMs) as individual modules for long-term consistency and flexibility.
arXiv Detail & Related papers (2023-05-08T08:09:00Z)
- Rewarding Chatbots for Real-World Engagement with Millions of Users [1.2583983802175422]
This work investigates the development of social chatbots that prioritize user engagement to enhance retention.
The proposed approach uses automatic pseudo-labels collected from user interactions to train a reward model that can be used to reject low-scoring sample responses.
A/B testing on groups of 10,000 new daily chatbot users on the Chai Research platform shows that this approach increases the mean conversation length (MCL) by up to 70%.
Future work aims to use the reward model to realize a data flywheel, where the latest user conversations can be used to alternately fine-tune the language model and the reward model.
arXiv Detail & Related papers (2023-03-10T18:53:52Z)
- Few-Shot Bot: Prompt-Based Learning for Dialogue Systems [58.27337673451943]
Learning to converse using only a few examples is a great challenge in conversational AI.
The current best conversational models are either good chit-chatters (e.g., BlenderBot) or goal-oriented systems (e.g., MinTL).
We propose prompt-based few-shot learning which does not require gradient-based fine-tuning but instead uses a few examples as the only source of learning.
arXiv Detail & Related papers (2021-10-15T14:36:45Z)
- Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examine our framework using three experimental setups and evaluate the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z)
- CASS: Towards Building a Social-Support Chatbot for Online Health Community [67.45813419121603]
The CASS architecture is based on advanced neural network algorithms.
It can handle new inputs from users and generate a variety of responses to them.
With a follow-up field experiment, CASS is proven useful in supporting individual members who seek emotional support.
arXiv Detail & Related papers (2021-01-04T05:52:03Z)
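Several of the related papers above (e.g., Few-Shot Bot, Prompted LLMs as Chatbot Modules) rely on prompt-based few-shot learning rather than gradient-based fine-tuning. A minimal sketch of assembling such a prompt follows; the example dialogues and the "User:"/"Bot:" format are hypothetical conventions for illustration, not the exact templates used in those papers.

```python
def few_shot_prompt(examples: list, user_input: str) -> str:
    """Build a dialogue prompt from (user, bot) example pairs.

    No gradient updates are needed: the examples are the only source of
    learning, and an LLM completes the text after the final "Bot:".
    """
    shots = "\n\n".join(f"User: {u}\nBot: {b}" for u, b in examples)
    return f"{shots}\n\nUser: {user_input}\nBot:"


# Hypothetical chit-chat examples; a real system would select
# in-domain dialogues instead.
demo = [
    ("Hi, how are you?", "Doing great, thanks! How about you?"),
    ("Any plans tonight?", "I was thinking of watching a movie."),
]
prompt = few_shot_prompt(demo, "What movie would you recommend?")
```

The returned string would be sent to a pretrained LLM as-is; swapping the example pairs is the only "training" step.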
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.