User Privacy Harms and Risks in Conversational AI: A Proposed Framework
- URL: http://arxiv.org/abs/2402.09716v1
- Date: Thu, 15 Feb 2024 05:21:58 GMT
- Title: User Privacy Harms and Risks in Conversational AI: A Proposed Framework
- Authors: Ece Gumusel, Kyrie Zhixuan Zhou, Madelyn Rose Sanfilippo
- Abstract summary: This study identifies 9 privacy harms and 9 privacy risks in text-based interactions.
The aim is to offer developers, policymakers, and researchers a tool for responsible and secure implementation of conversational AI.
- Score: 1.8416014644193066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study presents a unique framework that applies and extends
Solove's (2006) taxonomy to address privacy concerns in interactions with text-based
AI chatbots. As chatbot prevalence grows, concerns about user privacy have
heightened. While existing literature highlights design elements compromising
privacy, a comprehensive framework is lacking. Through semi-structured
interviews with 13 participants interacting with two AI chatbots, this study
identifies 9 privacy harms and 9 privacy risks in text-based interactions.
Using a grounded theory approach for interview and chatlog analysis, the
framework examines privacy implications at various interaction stages. The aim
is to offer developers, policymakers, and researchers a tool for responsible
and secure implementation of conversational AI, filling the existing gap in
addressing privacy issues associated with text-based AI chatbots.
Related papers
- Trust No Bot: Discovering Personal Disclosures in Human-LLM Conversations in the Wild [40.57348900292574]
Measuring personal disclosures made in human-chatbot interactions can provide a better understanding of users' AI literacy.
We run an extensive, fine-grained analysis on the personal disclosures made by real users to commercial GPT models.
arXiv Detail & Related papers (2024-07-16T07:05:31Z) - ProxyGPT: Enabling Anonymous Queries in AI Chatbots with (Un)Trustworthy Browser Proxies [12.552035175341894]
We present ProxyGPT, a privacy-enhancing system that enables anonymous queries on popular chatbot platforms.
The system is designed to support key security properties such as content integrity via TLS-backed data provenance, end-to-end encryption, and anonymous payment.
Our human evaluation shows that ProxyGPT offers users a greater sense of privacy compared to traditional AI chatbots.
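The summary names end-to-end encryption among ProxyGPT's security properties without detailing the protocol. Purely as a hedged illustration of that one property (not ProxyGPT's actual design), the sketch below encrypts a query so that an untrusted relay sees only ciphertext; it uses a pre-shared symmetric key for brevity, where a real system would negotiate keys.
```python
from cryptography.fernet import Fernet

# Toy end-to-end encryption of a chatbot query: any intermediary that
# relays the message observes only ciphertext. Illustrative only; this
# is not ProxyGPT's protocol, which also covers provenance and payment.
key = Fernet.generate_key()   # assume securely shared between endpoints
cipher = Fernet(key)

query = b"Is this symptom something to worry about?"
ciphertext = cipher.encrypt(query)        # what the relay would see
assert cipher.decrypt(ciphertext) == query
print(ciphertext[:24])
```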
arXiv Detail & Related papers (2024-07-11T18:08:04Z) - Embedding Privacy in Computational Social Science and Artificial Intelligence Research [2.048226951354646]
Preserving privacy has emerged as a critical factor in research.
The increasing use of advanced computational models stands to exacerbate privacy concerns.
This article contributes to the field by discussing the role of privacy and the issues that researchers working in CSS, AI, data science and related domains are likely to face.
arXiv Detail & Related papers (2024-04-17T16:07:53Z) - Dialogue with Robots: Proposals for Broadening Participation and Research in the SLIVAR Community [57.56212633174706]
The ability to interact with machines using natural human language is becoming not just commonplace, but expected.
In this paper, we chronicle the recent history of this growing field of spoken dialogue with robots.
We offer the community three proposals, the first focused on education, the second on benchmarks, and the third on the modeling of language when it comes to spoken interaction with robots.
arXiv Detail & Related papers (2024-04-01T15:03:27Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, such as GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - Comparing How a Chatbot References User Utterances from Previous Chatting Sessions: An Investigation of Users' Privacy Concerns and Perceptions [9.205084145168884]
We investigated the trade-off between remembering and referencing previous conversations and user engagement.
We found that referencing previous utterances does not enhance user engagement or infringe on privacy.
We discuss implications from our findings that can help designers choose an appropriate format to reference previous user utterances.
arXiv Detail & Related papers (2023-08-09T11:21:51Z) - Interactive Conversational Head Generation [68.76774230274076]
We introduce a new conversation head generation benchmark for synthesizing behaviors of a single interlocutor in a face-to-face conversation.
The capability to automatically synthesize interlocutors that can participate in long, multi-turn conversations is vital and offers benefits for various applications.
arXiv Detail & Related papers (2023-07-05T08:06:26Z) - ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
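The mechanics of the ambiguation framework are not given in this summary. As a hedged sketch of the general idea, assuming an approach that masks identifying spans client-side before text reaches a remote model, the toy code below replaces e-mail addresses and phone numbers with placeholders and restores them locally afterward; the patterns and function names are hypothetical, not the paper's.
```python
import re

# Toy client-side ambiguation: mask sensitive spans before sending text
# to a remote model, restore them in the reply locally. Illustrative
# only; not the paper's actual framework.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def ambiguate(text: str):
    """Replace sensitive spans with placeholders; return text + mapping."""
    mapping = {}
    for label, pattern in SENSITIVE_PATTERNS.items():
        for i, span in enumerate(pattern.findall(text)):
            placeholder = f"<{label}_{i}>"
            mapping[placeholder] = span
            text = text.replace(span, placeholder)
    return text, mapping

def restore(text: str, mapping: dict) -> str:
    """Re-insert the original spans into the model's reply."""
    for placeholder, span in mapping.items():
        text = text.replace(placeholder, span)
    return text

masked, mapping = ambiguate("Reach me at jane@example.com or 555-123-4567.")
print(masked)                    # Reach me at <EMAIL_0> or <PHONE_0>.
print(restore(masked, mapping))  # original text recovered
```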
arXiv Detail & Related papers (2023-05-19T02:09:52Z) - FedBot: Enhancing Privacy in Chatbots with Federated Learning [0.0]
Federated Learning (FL) aims to protect data privacy through distributed learning methods that keep data at its source.
The proof of concept (POC) combines Deep Bidirectional Transformer models with federated learning algorithms to protect customer data privacy during collaborative model training.
The system is specifically designed to improve its performance and accuracy over time by leveraging its ability to learn from previous interactions.
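FedBot's training pipeline is only named here, not specified. As a minimal sketch of the federated averaging idea such systems build on, assuming plain FedAvg with a toy linear model, each client computes an update on its local data and the server aggregates only weights, so raw conversations never leave the client; all names below are illustrative.
```python
import numpy as np

# Minimal FedAvg sketch with a linear least-squares model: clients share
# weight updates, never raw chat data. Illustrates the general federated
# learning idea, not FedBot's actual transformer pipeline.
def local_update(w, X, y, lr=0.1):
    """One gradient step on a client's private data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg(updates, sizes):
    """Server-side average of client weights, weighted by data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(4):                       # four clients with private data
    X = rng.normal(size=(20, 3))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=20)))

global_w = np.zeros(3)
for _ in range(100):                     # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fedavg(updates, [len(y) for _, y in clients])
print(global_w)                          # approaches true_w
```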
arXiv Detail & Related papers (2023-04-04T23:13:52Z) - CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI [48.67259855309959]
Most existing datasets for conversational AI ignore human personalities and emotions.
We propose CPED, a large-scale Chinese personalized and emotional dialogue dataset.
CPED contains more than 12K dialogues of 392 speakers from 40 TV shows.
arXiv Detail & Related papers (2022-05-29T17:45:12Z) - More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
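As context for readers unfamiliar with differential privacy, the sketch below shows its core mechanism in the simplest setting: adding Laplace noise calibrated to a query's sensitivity and the privacy budget epsilon. This illustrates DP in general, not the paper's specific uses of it for security, stability, or fairness.
```python
import numpy as np

# Laplace mechanism, the canonical differentially private primitive:
# noise scale = sensitivity / epsilon. Smaller epsilon -> stronger
# privacy guarantee, noisier answer.
def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return an epsilon-DP estimate of a numeric query result."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

ages = [34, 29, 41, 55, 38]
# A counting query changes by at most 1 if one person joins or leaves,
# so its sensitivity is 1.
noisy = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5)
print(f"true count: {len(ages)}, private count: {noisy:.2f}")
```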
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.