Decoding User Concerns in AI Health Chatbots: An Exploration of Security and Privacy in App Reviews
- URL: http://arxiv.org/abs/2502.00067v1
- Date: Fri, 31 Jan 2025 00:38:37 GMT
- Title: Decoding User Concerns in AI Health Chatbots: An Exploration of Security and Privacy in App Reviews
- Authors: Muhammad Hassan, Abdullah Ghani, Muhammad Fareed Zaffar, Masooda Bashir
- Abstract summary: This study evaluates the effectiveness of automated methods, specifically BART and Gemini GenAI, in identifying security and privacy related (SPR) concerns.
Our results indicate that while Gemini's performance in SPR classification is comparable to manual labeling, both automated methods have limitations.
- Abstract: AI-powered health chatbot applications are increasingly utilized for personalized healthcare services, yet they pose significant challenges related to user data security and privacy. This study evaluates the effectiveness of automated methods, specifically BART and Gemini GenAI, in identifying security and privacy related (SPR) concerns within these applications' user reviews, benchmarking their performance against manual qualitative analysis. Our results indicate that while Gemini's performance in SPR classification is comparable to manual labeling, both automated methods have limitations, including the misclassification of unrelated issues. Qualitative analysis revealed critical user concerns, such as data collection practices, data misuse, and insufficient transparency and consent mechanisms. This research enhances the understanding of the relationship between user trust, privacy, and emerging mobile AI health chatbot technologies, offering actionable insights for improving security and privacy practices in AI-driven health chatbots. Although exploratory, our findings highlight the necessity for rigorous audits and transparent communication strategies, providing valuable guidance for app developers and vendors in addressing user security and privacy concerns.
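The abstract describes classifying app reviews as security/privacy related (SPR) or not. The following is only an illustrative sketch of such a binary review filter, not the authors' BART or Gemini pipeline: it uses a hypothetical keyword list (the paper's actual SPR taxonomy is not given in the abstract) and simple word-boundary matching.

```python
# Illustrative sketch only: the paper benchmarks BART and Gemini against
# manual labeling; this keyword-matching baseline is NOT the authors' method,
# just a minimal example of flagging security/privacy-related (SPR) reviews.
import re

# Hypothetical keyword list; the real SPR categories are not specified here.
SPR_KEYWORDS = [
    "privacy", "data collection", "consent", "leak", "breach",
    "security", "tracking", "third party", "permission",
]

def is_spr_review(review: str) -> bool:
    """Flag a review as SPR if it mentions any keyword as a whole phrase."""
    text = review.lower()
    return any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in SPR_KEYWORDS)

reviews = [
    "The chatbot shares my health data with third party advertisers!",
    "Love the symptom checker, very accurate.",
]
flags = [is_spr_review(r) for r in reviews]
print(flags)  # [True, False]
```

A baseline like this also illustrates the misclassification risk the abstract raises: keyword matches (or model predictions) can flag reviews that mention "security" in an unrelated sense, which is why the authors benchmark against manual labeling.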
Related papers
- Towards Privacy-aware Mental Health AI Models: Advances, Challenges, and Opportunities [61.633126163190724]
Mental illness is a widespread and debilitating condition with substantial societal and personal costs.
Recent advances in Artificial Intelligence (AI) hold great potential for recognizing and addressing conditions such as depression, anxiety disorder, bipolar disorder, schizophrenia, and post-traumatic stress disorder.
Privacy concerns, including the risk of sensitive data leakage from datasets and trained models, remain a critical barrier to deploying these AI systems in real-world clinical settings.
arXiv Detail & Related papers (2025-02-01T15:10:02Z)
- Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives [0.0]
This study explores stakeholder perspectives on privacy in AI systems, focusing on educators, parents, and AI professionals.
Using qualitative analysis of survey responses from 227 participants, the research identifies key privacy risks, including data breaches, ethical misuse, and excessive data collection.
The findings provide actionable insights into balancing the benefits of AI with robust privacy protections.
arXiv Detail & Related papers (2025-01-23T02:06:25Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Implications of Artificial Intelligence on Health Data Privacy and Confidentiality [0.0]
The rapid integration of artificial intelligence in healthcare is revolutionizing medical diagnostics, personalized medicine, and operational efficiency.
However, significant challenges arise concerning patient data privacy, ethical considerations, and regulatory compliance.
This paper examines the dual impact of AI on healthcare, highlighting its transformative potential and the critical need for safeguarding sensitive health information.
arXiv Detail & Related papers (2025-01-03T05:17:23Z)
- Navigating AI to Unpack Youth Privacy Concerns: An In-Depth Exploration and Systematic Review [0.0]
This systematic literature review investigates perceptions, concerns, and expectations of young digital citizens regarding privacy in artificial intelligence (AI) systems.
Data extraction focused on privacy concerns, data-sharing practices, the balance between privacy and utility, trust factors in AI, and strategies to enhance user control over personal data.
Findings reveal significant privacy concerns among young users, including a perceived lack of control over personal information, potential misuse of data by AI, and fears of data breaches and unauthorized access.
arXiv Detail & Related papers (2024-12-20T22:00:06Z)
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI [1.325665193924634]
This paper takes a critical approach towards examining the intricacies of these issues within AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices might harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z)
- Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild [40.57348900292574]
Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users' AI literacy.
We run an extensive, fine-grained analysis on the personal disclosures made by real users to commercial GPT models.
arXiv Detail & Related papers (2024-07-16T07:05:31Z)
- Collection, usage and privacy of mobility data in the enterprise and public administrations [55.2480439325792]
Security measures such as anonymization are needed to protect individuals' privacy.
Within our study, we conducted expert interviews to gain insights into practices in the field.
We survey privacy-enhancing methods in use, which generally do not comply with state-of-the-art standards of differential privacy.
arXiv Detail & Related papers (2024-07-04T08:29:27Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Privacy-preserving medical image analysis [53.4844489668116]
We present PriMIA, a software framework designed for privacy-preserving machine learning (PPML) in medical imaging.
We show significantly better classification performance of a securely aggregated federated learning model compared to human experts on unseen datasets.
We empirically evaluate the framework's security against a gradient-based model inversion attack.
arXiv Detail & Related papers (2020-12-10T13:56:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.