"Kya family planning after marriage hoti hai?": Integrating Cultural Sensitivity in an LLM Chatbot for Reproductive Health
- URL: http://arxiv.org/abs/2502.15939v1
- Date: Fri, 21 Feb 2025 21:08:50 GMT
- Title: "Kya family planning after marriage hoti hai?": Integrating Cultural Sensitivity in an LLM Chatbot for Reproductive Health
- Authors: Roshini Deva, Dhruv Ramani, Tanvi Divate, Suhani Jalota, Azra Ismail
- Abstract summary: We study the development of a culturally-appropriate chatbot for reproductive health with underserved women in urban India. Our findings reveal strengths and limitations of the system in capturing local context, and complexities around what constitutes "culture".
- Score: 7.827965302875148
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Access to sexual and reproductive health information remains a challenge in many communities globally, due to cultural taboos and limited availability of healthcare providers. Public health organizations are increasingly turning to Large Language Models (LLMs) to improve access to timely and personalized information. However, recent HCI scholarship indicates that significant challenges remain in incorporating context awareness and mitigating bias in LLMs. In this paper, we study the development of a culturally-appropriate LLM-based chatbot for reproductive health with underserved women in urban India. Through user interactions, focus groups, and interviews with multiple stakeholders, we examine the chatbot's response to sensitive and highly contextual queries on reproductive health. Our findings reveal strengths and limitations of the system in capturing local context, and complexities around what constitutes "culture". Finally, we discuss how local context might be better integrated, and present a framework to inform the design of culturally-sensitive chatbots for community health.
Related papers
- Cultural Learning-Based Culture Adaptation of Language Models [70.1063219524999]
Adapting large language models (LLMs) to diverse cultural values is a challenging task.
We present CLCA, a novel framework for enhancing LLM alignment with cultural values based on cultural learning.
arXiv Detail & Related papers (2025-04-03T18:16:26Z) - Exploring Socio-Cultural Challenges and Opportunities in Designing Mental Health Chatbots for Adolescents in India [5.511657284487823]
Mental health challenges among Indian adolescents are shaped by unique cultural and systemic barriers.
This study explores how adolescents perceive mental health challenges and interact with digital tools.
arXiv Detail & Related papers (2025-03-11T15:52:05Z) - "Who Has the Time?": Understanding Receptivity to Health Chatbots among Underserved Women in India [9.808660035903777]
We conducted interviews and focus group discussions with underserved women in urban India.
Our findings uncover gaps in digital access and realities, and perceived conflict with various responsibilities that women are burdened with.
arXiv Detail & Related papers (2025-02-21T22:27:38Z) - Leveraging Retrieval-Augmented Generation for Culturally Inclusive Hakka Chatbots: Design Insights and User Perceptions [4.388667614435888]
This study introduces a groundbreaking approach to promoting and safeguarding the rich heritage of Taiwanese Hakka culture.
By integrating external databases with generative AI models, Retrieval-Augmented Generation (RAG) grounds chatbot responses in curated Hakka cultural knowledge.
This is particularly significant in an age where digital platforms often dilute cultural identities.
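To make the pattern concrete, below is a minimal sketch of a retrieval-augmented generation loop, assuming a sentence-transformers embedder and a small in-memory corpus; the passages, model name, and prompt template are illustrative placeholders rather than the Hakka chatbot's actual components.

```python
# Minimal RAG loop: embed a small corpus of curated cultural passages, retrieve
# the closest ones for a user query, and assemble a grounded prompt for an LLM.
# Corpus contents and model names are placeholders, not the cited system.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "The Hakka Tung Blossom Festival is celebrated in April and May.",
    "Traditional Hakka lei cha (thunder tea) combines tea leaves, herbs, and grains.",
]
corpus_vecs = embedder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_vecs @ q                  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    """Assemble a grounded prompt: retrieved context first, then the question."""
    context = "\n".join(f"- {p}" for p in retrieve(query))
    return (
        "Answer using only the context below. If the context is insufficient, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What is lei cha?"))
# The assembled prompt would then be sent to any chat-completion endpoint.
```

Grounding answers in an external, community-maintained corpus is what allows this kind of system to stay culturally specific without retraining the underlying model.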
arXiv Detail & Related papers (2024-10-21T01:36:08Z) - SS-GEN: A Social Story Generation Framework with Large Language Models [87.11067593512716]
Children with Autism Spectrum Disorder (ASD) often misunderstand social situations and struggle to participate in daily routines.
Social Stories are traditionally crafted by psychology experts under strict constraints to address these challenges.
We propose SS-GEN, a framework to generate Social Stories in real time with broad coverage.
arXiv Detail & Related papers (2024-06-22T00:14:48Z) - Leveraging Large Language Models for Patient Engagement: The Power of Conversational AI in Digital Health [1.8772687384996551]
Large language models (LLMs) have opened up new opportunities for transforming patient engagement in healthcare through conversational AI.
We showcase the power of LLMs in handling unstructured conversational data through four case studies.
arXiv Detail & Related papers (2024-06-19T16:02:04Z) - CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models [59.22460740026037]
"CIVICS: Culturally-Informed & Values-Inclusive Corpus for Societal impacts" dataset is designed to evaluate the social and cultural variation of Large Language Models (LLMs)
We create a hand-crafted, multilingual dataset of value-laden prompts which address specific socially sensitive topics, including LGBTQI rights, social welfare, immigration, disability rights, and surrogacy.
arXiv Detail & Related papers (2024-05-22T20:19:10Z) - Understanding the Capabilities and Limitations of Large Language Models for Cultural Commonsense [98.09670425244462]
Large language models (LLMs) have demonstrated substantial commonsense understanding.
This paper examines the capabilities and limitations of several state-of-the-art LLMs in the context of cultural commonsense tasks.
arXiv Detail & Related papers (2024-05-07T20:28:34Z) - SUKHSANDESH: An Avatar Therapeutic Question Answering Platform for Sexual Education in Rural India [16.8154824364057]
In countries like India, where adolescents form the largest demographic group, they face significant vulnerabilities concerning sexual health.
Our proposal aims to provide a safe and trustworthy platform for sexual education to the vulnerable rural Indian population.
By utilizing information retrieval techniques and large language models, SUKHSANDESH will deliver effective responses to user queries.
arXiv Detail & Related papers (2024-05-03T05:19:09Z) - CulturalTeaming: AI-Assisted Interactive Red-Teaming for Challenging LLMs' (Lack of) Multicultural Knowledge [69.82940934994333]
We introduce CulturalTeaming, an interactive red-teaming system that leverages human-AI collaboration to build challenging evaluation datasets.
Our study reveals that CulturalTeaming's various modes of AI assistance support annotators in creating cultural questions.
CULTURALBENCH-V0.1 is a compact yet high-quality evaluation dataset built from users' red-teaming attempts.
arXiv Detail & Related papers (2024-04-10T00:25:09Z) - Understanding the Impact of Long-Term Memory on Self-Disclosure with
Large Language Model-Driven Chatbots for Public Health Intervention [15.430380965922325]
Large language models (LLMs) offer the potential to support public health monitoring by facilitating health disclosure through open-ended conversations.
Augmenting LLMs with long-term memory (LTM) presents an opportunity to improve engagement and self-disclosure.
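A minimal sketch of what augmenting a chatbot with long-term memory can look like: salient facts from earlier sessions are persisted and injected into later prompts so the model can follow up on prior disclosures. The storage format, file name, and selection rule are assumptions for illustration, not the design evaluated in the paper.

```python
# Sketch of a long-term-memory (LTM) layer for a health chatbot: facts from
# earlier sessions are stored on disk and prepended to later prompts.
import json
from pathlib import Path

MEMORY_FILE = Path("user_memory.json")  # hypothetical per-user store

def load_memories() -> list[str]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(fact: str) -> None:
    memories = load_memories()
    if fact not in memories:             # avoid duplicate entries
        memories.append(fact)
        MEMORY_FILE.write_text(json.dumps(memories, indent=2))

def build_messages(user_turn: str) -> list[dict]:
    """Prepend remembered facts so the assistant can reference past sessions."""
    memory_block = "\n".join(f"- {m}" for m in load_memories()) or "(none yet)"
    system = (
        "You are a supportive public-health assistant.\n"
        f"Known facts from earlier sessions:\n{memory_block}"
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": user_turn}]

# Example: a disclosure from a previous session is remembered and re-surfaced.
save_memory("User mentioned trouble sleeping during the last check-in.")
print(build_messages("I still feel tired most mornings."))
```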
arXiv Detail & Related papers (2024-02-17T18:05:53Z) - Massively Multi-Cultural Knowledge Acquisition & LM Benchmarking [48.21982147529661]
This paper introduces a novel approach for massively multicultural knowledge acquisition.
Our method strategically navigates from densely informative Wikipedia documents on cultural topics to an extensive network of linked pages.
Our work marks an important step towards deeper understanding and bridging the gaps of cultural disparities in AI.
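As an illustration of navigating from seed cultural articles to their linked pages, the sketch below performs a shallow breadth-first expansion over the public MediaWiki API; the seed titles, depth, and lack of relevance filtering are simplifying assumptions, not the paper's actual pipeline.

```python
# Breadth-first expansion from seed cultural articles to linked pages via the
# public MediaWiki API. Seeds and depth are illustrative placeholders.
import requests

API = "https://en.wikipedia.org/w/api.php"

def linked_titles(title: str, limit: int = 20) -> list[str]:
    """Return titles of main-namespace pages linked from the given article."""
    params = {
        "action": "query", "format": "json", "prop": "links",
        "titles": title, "plnamespace": 0, "pllimit": limit,
    }
    pages = requests.get(API, params=params, timeout=10).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return [link["title"] for link in page.get("links", [])]

def expand(seeds: list[str], depth: int = 1) -> set[str]:
    """Breadth-first collection of article titles reachable from the seeds."""
    seen, frontier = set(seeds), list(seeds)
    for _ in range(depth):
        frontier = [t for s in frontier for t in linked_titles(s) if t not in seen]
        seen.update(frontier)
    return seen

print(sorted(expand(["Hakka culture"], depth=1))[:10])
```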
arXiv Detail & Related papers (2024-02-14T18:16:54Z) - Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content and the contextual information of a post to assess the severity of the user's health state.
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)