Can I Trust This Chatbot? Assessing User Privacy in AI-Healthcare Chatbot Applications
- URL: http://arxiv.org/abs/2509.14581v1
- Date: Thu, 18 Sep 2025 03:29:43 GMT
- Title: Can I Trust This Chatbot? Assessing User Privacy in AI-Healthcare Chatbot Applications
- Authors: Ramazan Yener, Guan-Hung Chen, Ece Gumusel, Masooda Bashir
- Abstract summary: Our study evaluates the privacy practices of 12 widely downloaded AI healthcare chatbot apps available on the App Store and Google Play in the United States. Half of the examined apps did not present a privacy policy during sign-up, and only two provided an option to disable data sharing at that stage. The majority of apps' privacy policies failed to address data protection measures.
- Score: 2.7026776927145235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As Conversational Artificial Intelligence (AI) becomes more integrated into everyday life, AI-powered chatbot mobile applications are increasingly adopted across industries, particularly in the healthcare domain. These chatbots offer accessible and 24/7 support, yet their collection and processing of sensitive health data present critical privacy concerns. While prior research has examined chatbot security, privacy issues specific to AI healthcare chatbots have received limited attention. Our study evaluates the privacy practices of 12 widely downloaded AI healthcare chatbot apps available on the App Store and Google Play in the United States. We conducted a three-step assessment analyzing: (1) privacy settings during sign-up, (2) in-app privacy controls, and (3) the content of privacy policies. The analysis identified significant gaps in user data protection. Our findings reveal that half of the examined apps did not present a privacy policy during sign up, and only two provided an option to disable data sharing at that stage. The majority of apps' privacy policies failed to address data protection measures. Moreover, users had minimal control over their personal data. The study provides key insights for information science researchers, developers, and policymakers to improve privacy protections in AI healthcare chatbot apps.
Related papers
- On the Security and Privacy of AI-based Mobile Health Chatbots [0.24554686192257424]
This study empirically assesses 16 AI-based mHealth chatbots identified from the Google Play Store. The findings revealed security vulnerabilities, privacy issues, and non-compliance with Google Play policies, informing recommendations focused on improving data handling processes, disclosure, and user security.
arXiv Detail & Related papers (2025-11-15T22:49:07Z)
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI [1.325665193924634]
This paper takes a critical approach towards examining the intricacies of these issues within AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices may harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory [43.12744258781724]
We formulate the privacy issue as a reasoning problem rather than simple pattern matching. We develop the first comprehensive checklist that covers social identities, private attributes, and existing privacy regulations.
arXiv Detail & Related papers (2024-08-19T14:48:04Z)
- NAP^2: A Benchmark for Naturalness and Privacy-Preserving Text Rewriting by Learning from Human [56.46355425175232]
We suggest sanitizing sensitive text using two common strategies used by humans. We curate the first corpus, coined NAP2, through both crowdsourcing and the use of large language models. Compared to prior work on anonymization, the human-inspired approaches result in more natural rewrites.
arXiv Detail & Related papers (2024-06-06T05:07:44Z)
- User Privacy Harms and Risks in Conversational AI: A Proposed Framework [1.8416014644193066]
This study identifies 9 privacy harms and 9 privacy risks in text-based interactions.
The aim is to offer developers, policymakers, and researchers a tool for responsible and secure implementation of conversational AI.
arXiv Detail & Related papers (2024-02-15T05:21:58Z)
- ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z)
- Privacy Explanations - A Means to End-User Trust [64.7066037969487]
We investigated how explainability might help to tackle this problem.
We created privacy explanations that aim to clarify for end users why and for what purposes specific data is required.
Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems.
arXiv Detail & Related papers (2022-10-18T09:30:37Z)
- Analysis of Longitudinal Changes in Privacy Behavior of Android Applications [79.71330613821037]
In this paper, we examine the trends in how Android apps have changed over time with respect to privacy.
We examine the adoption of HTTPS, whether apps scan the device for other installed apps, the use of permissions for privacy-sensitive data, and the use of unique identifiers.
We find that privacy-related behavior has improved over time as apps continue to receive updates, and that third-party libraries used by apps are responsible for more of the privacy issues.
arXiv Detail & Related papers (2021-12-28T16:21:31Z)
- Measuring the Effectiveness of Privacy Policies for Voice Assistant Applications [12.150750035659383]
We conduct the first large-scale data analytics to systematically measure the effectiveness of privacy policies provided by voice-app developers.
We analyzed 64,720 Amazon Alexa skills and 2,201 Google Assistant actions.
Our findings reveal a worrisome reality of privacy policies in two mainstream voice-app stores.
arXiv Detail & Related papers (2020-07-29T03:17:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.