Conversational Health Agents: A Personalized LLM-Powered Agent Framework
- URL: http://arxiv.org/abs/2310.02374v5
- Date: Wed, 25 Sep 2024 04:50:38 GMT
- Title: Conversational Health Agents: A Personalized LLM-Powered Agent Framework
- Authors: Mahyar Abbasian, Iman Azimi, Amir M. Rahmani, Ramesh Jain
- Abstract summary: Conversational Health Agents (CHAs) are interactive systems that provide healthcare services, such as assistance and diagnosis.
We propose openCHA, an open-source framework to empower conversational agents to generate a personalized response for users' healthcare queries.
openCHA includes an orchestrator to plan and execute actions for gathering information from external sources.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Conversational Health Agents (CHAs) are interactive systems that provide healthcare services, such as assistance and diagnosis. Current CHAs, especially those utilizing Large Language Models (LLMs), primarily focus on the conversational aspects. However, they offer limited agent capabilities, specifically lacking multi-step problem-solving, personalized conversations, and multimodal data analysis. Our aim is to overcome these limitations. We propose openCHA, an open-source LLM-powered framework that empowers conversational agents to generate personalized responses to users' healthcare queries. The framework enables developers to integrate external sources, including data sources, knowledge bases, and analysis models, into their LLM-based solutions. openCHA includes an orchestrator that plans and executes actions for gathering information from external sources, essential for formulating responses to user inquiries. It facilitates knowledge acquisition, problem-solving capabilities, and multilingual and multimodal conversations, and fosters interaction with various AI platforms. We illustrate the framework's proficiency in handling complex healthcare tasks via two demonstrations and four use cases. Moreover, we release openCHA as open source to the community via GitHub.
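To make the orchestrator concept concrete, the sketch below shows a minimal plan-and-execute loop over pluggable external sources, as described in the abstract. It is an illustrative assumption, not the actual openCHA API: the names ExternalSource, Orchestrator, plan, and answer are invented for this example, and the real interfaces are in the project's GitHub repository.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical illustration of the orchestrator pattern; these names are
# NOT the real openCHA API.

@dataclass
class ExternalSource:
    """A pluggable data source, knowledge base, or analysis model."""
    name: str
    fetch: Callable[[str], str]  # maps a query to retrieved text or an analysis result


class Orchestrator:
    """Plans which sources to consult, gathers their outputs, then answers."""

    def __init__(self, llm: Callable[[str], str], sources: List[ExternalSource]):
        self.llm = llm
        self.sources: Dict[str, ExternalSource] = {s.name: s for s in sources}

    def plan(self, query: str) -> List[str]:
        # Ask the LLM which of the registered sources this query needs.
        prompt = (
            f"Query: {query}\n"
            f"Available sources: {', '.join(self.sources)}\n"
            "Reply with the sources to call, comma-separated:"
        )
        return [s.strip() for s in self.llm(prompt).split(",") if s.strip() in self.sources]

    def answer(self, query: str) -> str:
        # Execute the plan: gather evidence from each selected source,
        # then compose the final personalized response.
        evidence = {name: self.sources[name].fetch(query) for name in self.plan(query)}
        context = "\n".join(f"[{name}] {text}" for name, text in evidence.items())
        return self.llm(
            "Answer the user's health question using the evidence below.\n"
            f"{context}\nUser: {query}"
        )
```

Under this sketch, a wearable-data reader and a symptom knowledge base would each be wrapped as an ExternalSource, and answer() would route a question such as "Why has my resting heart rate risen this week?" through them before the final LLM call.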
Related papers
- Navigating the Unknown: A Chat-Based Collaborative Interface for Personalized Exploratory Tasks
This paper introduces the Collaborative Assistant for Personalized Exploration (CARE).
CARE is a system designed to enhance personalization in exploratory tasks by combining a multi-agent LLM framework with a structured user interface.
Our findings highlight CARE's potential to transform LLM-based systems from passive information retrievers to proactive partners in personalized problem-solving and exploration.
arXiv Detail & Related papers (2024-10-31T15:30:55Z)
- Conversational AI Multi-Agent Interoperability, Universal Open APIs for Agentic Natural Language Multimodal Communications
This paper analyses Conversational AI multi-agent interoperability frameworks and describes the novel architecture proposed by the Open Voice initiative.
The new approach is illustrated, along with the main components, delineating the key benefits and use cases for deploying standard multi-modal AI agency (or agentic AI) communications.
arXiv Detail & Related papers (2024-07-28T09:33:55Z)
- Hello Again! LLM-powered Personalized Agent for Long-term Dialogue
We introduce a model-agnostic framework, the Long-term Dialogue Agent (LD-Agent).
It incorporates three independently tunable modules dedicated to event perception, persona extraction, and response generation.
The effectiveness, generality, and cross-domain capabilities of LD-Agent are empirically demonstrated.
arXiv Detail & Related papers (2024-06-09T21:58:32Z)
- Inquire, Interact, and Integrate: A Proactive Agent Collaborative Framework for Zero-Shot Multimodal Medical Reasoning
The adoption of large language models (LLMs) in healthcare has attracted significant research interest.
Most state-of-the-art LLMs are unimodal, text-only models that cannot directly process multimodal inputs.
We propose MultiMedRes, a multimodal medical collaborative reasoning framework to solve medical multimodal reasoning problems.
arXiv Detail & Related papers (2024-05-19T18:26:11Z)
- Exploring Interaction Patterns for Debugging: Enhancing Conversational Capabilities of AI-assistants
Large Language Models (LLMs) enable programmers to obtain natural language explanations for various software development tasks.
LLMs often leap to action without sufficient context, giving rise to implicit assumptions and inaccurate responses.
In this paper, we draw inspiration from interaction patterns and conversation analysis to design Robin, an enhanced conversational AI-assistant for debugging.
arXiv Detail & Related papers (2024-02-09T07:44:27Z)
- Large Language Model Enhanced Multi-Agent Systems for 6G Communications
We propose a multi-agent system with customized communication knowledge and tools for solving communication-related tasks using natural language.
We validate the effectiveness of the proposed multi-agent system by designing a semantic communication system.
arXiv Detail & Related papers (2023-12-13T02:35:57Z)
- Agent Lumos: Unified and Modular Training for Open-Source Language Agents
We introduce LUMOS, one of the first frameworks for training open-source LLM-based agents.
LUMOS features a learnable, unified, and modular architecture with a planning module that learns high-level subgoal generation.
We collect large-scale, unified, and high-quality training annotations derived from diverse ground-truth reasoning rationales.
arXiv Detail & Related papers (2023-11-09T00:30:13Z)
- AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation
AutoGen is an open-source framework that allows developers to build LLM applications via multiple agents that can converse with each other to accomplish tasks.
AutoGen agents are customizable, conversable, and can operate in various modes that employ combinations of LLMs, human inputs, and tools.
arXiv Detail & Related papers (2023-08-16T05:57:52Z)
- ChatDev: Communicative Agents for Software Development
ChatDev is a chat-powered software development framework in which specialized agents are guided in what to communicate.
These agents actively contribute to the design, coding, and testing phases through unified language-based communication.
arXiv Detail & Related papers (2023-07-16T02:11:34Z)
- SMILE: Single-turn to Multi-turn Inclusive Language Expansion via ChatGPT for Mental Health Support
Large-scale, real-life multi-turn conversations could facilitate advancements in mental health support.
We introduce SMILE, a single-turn to multi-turn inclusive language expansion technique.
We generate a large-scale, lifelike, and diverse dialogue dataset named SMILECHAT, consisting of 55k dialogues.
arXiv Detail & Related papers (2023-04-30T11:26:10Z)
- CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society
This paper explores the potential of building scalable techniques to facilitate autonomous cooperation among communicative agents.
We propose a novel communicative agent framework named role-playing.
Our contributions include a novel communicative agent framework and a scalable approach for studying the cooperative behaviors and capabilities of multi-agent systems.
arXiv Detail & Related papers (2023-03-31T01:09:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.