SUKHSANDESH: An Avatar Therapeutic Question Answering Platform for Sexual Education in Rural India
- URL: http://arxiv.org/abs/2405.01858v1
- Date: Fri, 3 May 2024 05:19:09 GMT
- Title: SUKHSANDESH: An Avatar Therapeutic Question Answering Platform for Sexual Education in Rural India
- Authors: Salam Michael Singh, Shubhmoy Kumar Garg, Amitesh Misra, Aaditeshwar Seth, Tanmoy Chakraborty
- Abstract summary: In countries like India, where adolescents form the largest demographic group, they face significant vulnerabilities concerning sexual health.
Our proposal aims to provide a safe and trustworthy platform for sexual education to the vulnerable rural Indian population.
By utilizing information retrieval techniques and large language models, SUKHSANDESH will deliver effective responses to user queries.
- Score: 16.8154824364057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sexual education aims to foster a healthy lifestyle in terms of emotional, mental and social well-being. In countries like India, where adolescents form the largest demographic group, they face significant vulnerabilities concerning sexual health. Unfortunately, sexual education is often stigmatized, creating barriers to providing essential counseling and information to this at-risk population. Consequently, issues such as early pregnancy, unsafe abortions, sexually transmitted infections, and sexual violence become prevalent. Our proposal aims to provide a safe and trustworthy platform for sexual education to the vulnerable rural Indian population, thereby fostering the healthy, all-round growth of the nation. To this end, we strive to design SUKHSANDESH, a multi-staged, AI-based question-answering platform for sexual education tailored to rural India, adhering to safety guardrails and offering regional language support. By utilizing information retrieval techniques and large language models, SUKHSANDESH will deliver effective responses to user queries. We also propose to anonymise the dataset to strengthen safety and to set AI guardrails against any harmful or unwanted response generation. Moreover, an innovative feature of our proposal is the integration of "avatar therapy" with SUKHSANDESH: AI-generated responses will be converted into real-time audio delivered by an animated avatar speaking regional Indian languages. This approach aims to foster empathy and connection, which is particularly beneficial for individuals with limited literacy skills. Partnering with Gram Vaani, an industry leader, we will deploy SUKHSANDESH to address sexual education needs in rural India.
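The abstract sketches a multi-staged pipeline: retrieve grounding passages, generate an answer with a large language model, filter it through safety guardrails, and hand the text to a regional-language avatar/text-to-speech stage. The following is a minimal Python sketch of that flow under stated assumptions; every name here (the toy word-overlap retriever, the stubbed generate_answer call, the placeholder BLOCKLIST guardrail) is a hypothetical stand-in, not the SUKHSANDESH implementation.

```python
# Illustrative sketch of the multi-staged flow described in the abstract:
# retrieve relevant passages, generate an answer, apply a safety guardrail,
# then pass the text on to an avatar/TTS stage. All components are stand-ins.

from dataclasses import dataclass

BLOCKLIST = {"placeholder-term-1", "placeholder-term-2"}  # stand-in guardrail lexicon


@dataclass
class Passage:
    text: str
    score: float


def retrieve(query: str, corpus: list, top_k: int = 3) -> list:
    """Toy lexical retriever: rank passages by word overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [
        Passage(text=p, score=len(query_terms & set(p.lower().split())))
        for p in corpus
    ]
    return sorted(scored, key=lambda p: p.score, reverse=True)[:top_k]


def generate_answer(query: str, passages: list) -> str:
    """Stand-in for an LLM call conditioned on the retrieved context."""
    context = " ".join(p.text for p in passages)
    return f"Based on verified material: {context[:200]}"


def passes_guardrail(answer: str) -> bool:
    """Simple lexical check; a deployed system would use a safety classifier."""
    return not any(term in answer.lower() for term in BLOCKLIST)


def answer_query(query: str, corpus: list) -> str:
    passages = retrieve(query, corpus)
    answer = generate_answer(query, passages)
    if not passes_guardrail(answer):
        return "I cannot answer that safely. Please consult a health counsellor."
    # An avatar/TTS stage would convert `answer` to regional-language speech here.
    return answer


if __name__ == "__main__":
    corpus = [
        "Regular health check-ups help detect sexually transmitted infections early.",
        "Counselling services are available through local health workers.",
    ]
    print(answer_query("How can infections be detected early?", corpus))
```

In a real deployment, the stubs would presumably be replaced by a dense retriever over a curated sexual-health corpus, a safety-tuned LLM, and a text-to-speech engine driving the animated avatar.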
Related papers
- Toward Safe Evolution of Artificial Intelligence (AI) based Conversational Agents to Support Adolescent Mental and Sexual Health Knowledge Discovery [0.22530496464901104]
We discuss the current landscape and opportunities for Conversational Agents (CAs) to support adolescents' mental and sexual health knowledge discovery.
We call for a discourse on how to set guardrails for the safe evolution of AI-based CAs for adolescents.
arXiv Detail & Related papers (2024-04-03T19:18:25Z)
- Revitalizing Sex Education for Chinese Children: A Formative Study [3.3525544202498656]
School-based sex education for Chinese children is currently insufficient and restrictive.
Involving parents in sex education posed several challenges, such as a lack of knowledge about sexuality and pedagogy.
Culture and politics were major hurdles to effective sex education.
arXiv Detail & Related papers (2024-01-25T10:26:48Z)
- The Uli Dataset: An Exercise in Experience Led Annotation of oGBV [3.1060730586569427]
We present a dataset on gendered abuse in three languages: Hindi, Tamil, and Indian English.
The dataset comprises tweets annotated along three questions pertaining to the experience of gendered abuse, by experts who identify as women or as members of the LGBTQIA community in South Asia.
arXiv Detail & Related papers (2023-11-15T16:30:44Z)
- Factuality Challenges in the Era of Large Language Models [113.3282633305118]
Large Language Models (LLMs) can generate false, erroneous, or misleading content.
LLMs can be exploited for malicious applications.
This poses a significant challenge to society in terms of the potential deception of users.
arXiv Detail & Related papers (2023-10-08T14:55:02Z)
- ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z)
- Moments in the Production of Space: Developing a Generic Adolescent Girls and Young Women Health Information Systems in Zimbabwe [0.0]
This study follows a project to develop a generic health information systems solution.
It provides a means to monitor and evaluate the successes of the AGYW initiative in reducing new infections.
arXiv Detail & Related papers (2021-08-22T18:22:17Z)
- Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z)
- They, Them, Theirs: Rewriting with Gender-Neutral English [56.14842450974887]
We perform a case study on the singular they, a common way to promote gender inclusion in English.
We show how a model can be trained to produce gender-neutral English with a 1% word error rate and no human-labeled data.
arXiv Detail & Related papers (2021-02-12T21:47:48Z)
- Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content and the contextual information to assess the severity of a user's health state.
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, when assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
- A Machine Learning Application for Raising WASH Awareness in the Times of COVID-19 Pandemic [6.076596440682804]
The COVID-19 pandemic has uncovered the potential of digital misinformation in shaping the health of nations.
We created WashKaro, a multi-pronged intervention for mitigating misinformation through conversational AI, machine translation and natural language processing.
WashKaro provides the right information matched against WHO guidelines through AI, and delivers it in the right format in local languages.
arXiv Detail & Related papers (2020-03-16T08:51:40Z)
- #MeToo on Campus: Studying College Sexual Assault at Scale Using Data Reported on Social Media [71.74529365205053]
We analyze the influence of the #MeToo trend on a pool of college followers.
The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories.
There exists a significant correlation between the prevalence of this trend and official reports in several major geographical regions.
arXiv Detail & Related papers (2020-01-16T18:05:46Z)