LunaAI: A Polite and Fair Healthcare Guidance Chatbot
- URL: http://arxiv.org/abs/2602.18444v1
- Date: Mon, 12 Jan 2026 13:44:00 GMT
- Title: LunaAI: A Polite and Fair Healthcare Guidance Chatbot
- Authors: Yuvarani Ganesan, Salsabila Harlen, Azfar Rahman Bin Fazul Rahman, Akashdeep Singh, Zahra Fathanah, Raja Jamilah Raja Yusof
- Abstract summary: Many existing systems fall short in emotional intelligence, fairness, and politeness, which are essential for building patient trust. This study addresses the challenge of integrating ethical communication principles by designing and evaluating LunaAI, a healthcare chatbot prototype.
- Score: 0.7696728525672148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conversational AI has significant potential in the healthcare sector, but many existing systems fall short in emotional intelligence, fairness, and politeness, which are essential for building patient trust. This gap reduces the effectiveness of digital health solutions and can increase user anxiety. This study addresses the challenge of integrating ethical communication principles by designing and evaluating LunaAI, a healthcare chatbot prototype. Using a user-centered design approach informed by a structured literature review, we developed conversational scenarios that handle both routine and hostile user interactions. The system was implemented using the Google Gemini API and deployed as a mobile-first Progressive Web App built with React, Vite, and Firebase. Preliminary user testing was conducted with a small participant group, and responses were evaluated using established frameworks such as the Godspeed Questionnaire. In addition, a comparative analysis was performed between LunaAI's tailored responses and the baseline outputs of an uncustomized large language model. The results indicate measurable improvements in key interaction qualities, with average user ratings of 4.7 out of 5 for politeness and 4.9 out of 5 for fairness. These findings highlight the importance of intentional ethical conversational design for human-computer interaction, particularly in sensitive healthcare contexts.
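The abstract describes conversational scenarios that route routine and hostile user turns to tone-appropriate replies. A minimal sketch of that routing idea is below; the function names, keyword list, and templates are illustrative assumptions, not taken from the LunaAI implementation (which prompts the Google Gemini API rather than using lexical rules).

```python
# Hypothetical sketch of scenario routing: classify a user turn as routine
# or hostile, then wrap the draft answer in a politeness-first template.
# The keyword list and templates are illustrative only.

HOSTILE_MARKERS = {"useless", "stupid", "hate", "shut up", "worst"}

def classify_turn(text: str) -> str:
    """Rough lexical check; a production system would use an LLM or classifier."""
    lowered = text.lower()
    return "hostile" if any(m in lowered for m in HOSTILE_MARKERS) else "routine"

TEMPLATES = {
    "routine": "Thank you for reaching out. {answer}",
    "hostile": "I'm sorry this has been frustrating, and I want to help. {answer}",
}

def respond(user_text: str, answer: str) -> str:
    """Select a tone-appropriate wrapper for the draft answer."""
    return TEMPLATES[classify_turn(user_text)].format(answer=answer)
```

In the paper's actual system this selection is delegated to prompt design around the Gemini API, but the separation of concerns (detect tone, then shape the reply) is the same.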
Related papers
- VERA-MH: Reliability and Validity of an Open-Source AI Safety Evaluation in Mental Health [0.0]
The Validation of Ethical and Responsible AI in Mental Health (VERA-MH) evaluation was recently proposed to meet the urgent need for an evidence-based, automated safety benchmark. This study aimed to examine the clinical validity and reliability of VERA-MH for evaluating AI safety in suicide risk detection and response.
arXiv Detail & Related papers (2026-02-04T22:17:04Z) - VERA-MH Concept Paper [0.0]
We introduce VERA-MH, an automated evaluation of the safety of AI chatbots used in mental health contexts. To fully automate the process, we used two ancillary AI agents. Simulated conversations are then passed to a judge-agent that scores them based on the rubric.
arXiv Detail & Related papers (2025-10-17T04:07:29Z) - HumAIne-Chatbot: Real-Time Personalized Conversational AI via Reinforcement Learning [0.4931504898146351]
HumAIne-chatbot is an AI-driven conversational agent that personalizes responses through a novel user profiling framework. During live interactions, an online reinforcement learning agent refines per-user models by combining implicit signals. Results show consistent improvements in user satisfaction, personalization accuracy, and task achievement when personalization features were enabled.
arXiv Detail & Related papers (2025-09-04T15:16:38Z) - CoCoNUTS: Concentrating on Content while Neglecting Uninformative Textual Styles for AI-Generated Peer Review Detection [60.52240468810558]
We introduce CoCoNUTS, a content-oriented benchmark built upon a fine-grained dataset of AI-generated peer reviews. We also develop CoCoDet, an AI review detector via a multi-task learning framework, to achieve more accurate and robust detection of AI involvement in review content.
arXiv Detail & Related papers (2025-08-28T06:03:11Z) - Mentalic Net: Development of RAG-based Conversational AI and Evaluation Framework for Mental Health Support [0.0]
Mentalic Net Conversational AI has a BERT Score of 0.898, with other evaluation metrics falling within satisfactory ranges. We advocate for a human-in-the-loop approach and a long-term, responsible strategy in developing such transformative technologies.
arXiv Detail & Related papers (2025-08-27T03:44:56Z) - From Interaction to Collaboration: How Hybrid Intelligence Enhances Chatbot Feedback [0.0]
This study explores the impact of two distinct narratives and feedback collection mechanisms on user engagement and feedback behavior. Initial findings indicate that while small-scale survey measures showed no significant differences in users' willingness to leave feedback, use the system, or trust the system, participants exposed to the HI narrative provided statistically significantly more detailed feedback.
arXiv Detail & Related papers (2025-03-08T07:36:36Z) - Mind the Gap! Static and Interactive Evaluations of Large Audio Models [55.87220295533817]
Large Audio Models (LAMs) are designed to power voice-native experiences. This study introduces an interactive approach to evaluate LAMs and collect 7,500 LAM interactions from 484 participants.
arXiv Detail & Related papers (2025-02-21T20:29:02Z) - Ask Patients with Patience: Enabling LLMs for Human-Centric Medical Dialogue with Grounded Reasoning [25.068780967617485]
Large language models (LLMs) offer a potential solution but struggle in real-world clinical interactions. We propose Ask Patients with Patience (APP), a multi-turn LLM-based medical assistant designed for grounded reasoning, transparent diagnoses, and human-centric interaction. APP enhances communication by eliciting user symptoms through empathetic dialogue, significantly improving accessibility and user engagement.
arXiv Detail & Related papers (2025-02-11T00:13:52Z) - On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective [67.98821225810204]
We evaluate the robustness of ChatGPT from the adversarial and out-of-distribution perspective.
Results show consistent advantages on most adversarial and OOD classification and translation tasks.
ChatGPT shows astounding performance in understanding dialogue-related texts.
arXiv Detail & Related papers (2023-02-22T11:01:20Z) - Evaluating Human-Language Model Interaction [79.33022878034627]
We develop a new framework, Human-AI Language-based Interaction Evaluation (HALIE), that defines the components of interactive systems.
We design five tasks to cover different forms of interaction: social dialogue, question answering, crossword puzzles, summarization, and metaphor generation.
We find that better non-interactive performance does not always translate to better human-LM interaction.
arXiv Detail & Related papers (2022-12-19T18:59:45Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation of content preservation and social language level, using both human judgment and automatic linguistic measures, shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.