"Before, I Asked My Mom, Now I Ask ChatGPT": Visual Privacy Management with Generative AI for Blind and Low-Vision People
- URL: http://arxiv.org/abs/2507.00286v2
- Date: Sat, 19 Jul 2025 04:31:05 GMT
- Title: "Before, I Asked My Mom, Now I Ask ChatGPT": Visual Privacy Management with Generative AI for Blind and Low-Vision People
- Authors: Tanusree Sharma, Yu-Yun Tseng, Lotus Zhang, Ayae Ide, Kelly Avery Mack, Leah Findlater, Danna Gurari, Yang Wang
- Abstract summary: We investigate the current practices and future design preferences of blind and low vision individuals through an interview study. Our findings reveal a range of current practices with GenAI that balance privacy, efficiency, and emotional agency. We conclude with actionable design recommendations to support user-centered visual privacy through GenAI.
- Score: 22.414052622770132
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind and low vision (BLV) individuals use Generative AI (GenAI) tools to interpret and manage visual content in their daily lives. While such tools can enhance the accessibility of visual content and so enable greater user independence, they also introduce complex challenges around visual privacy. In this paper, we investigate the current practices and future design preferences of blind and low vision individuals through an interview study with 21 participants. Our findings reveal a range of current practices with GenAI that balance privacy, efficiency, and emotional agency, with users accounting for privacy risks across six key scenarios, such as self-presentation, indoor/outdoor spatial privacy, social sharing, and handling professional content. Our findings also reveal design preferences, including on-device processing, zero-retention guarantees, sensitive content redaction, privacy-aware appearance indicators, and multimodal tactile mirrored interaction methods. We conclude with actionable design recommendations to support user-centered visual privacy through GenAI, expanding the notion of privacy and responsible handling of others' data.
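To make two of the reported design preferences concrete (on-device processing and sensitive content redaction), here is a minimal sketch in Python; it is not the paper's implementation, only an illustration of blurring sensitive regions locally before a photo ever reaches a GenAI service. The `detect_sensitive_regions` detector, file names, and blur settings are hypothetical placeholders.

```python
# Minimal sketch (illustrative, not from the paper): blur detected sensitive
# regions on-device so that only the redacted image is uploaded for description.
from PIL import Image, ImageFilter

def detect_sensitive_regions(image):
    """Hypothetical on-device detector; a real one would return bounding boxes
    (left, top, right, bottom) for faces, screens, documents, medication labels, etc."""
    return [(40, 40, 200, 200)]  # placeholder box for illustration only

def redact(image, boxes, blur_radius=25):
    """Return a copy of the image with each sensitive region blurred."""
    out = image.copy()
    for box in boxes:
        region = out.crop(box)
        out.paste(region.filter(ImageFilter.GaussianBlur(blur_radius)), box)
    return out

if __name__ == "__main__":
    photo = Image.open("photo.jpg")                        # local image to be described
    safe = redact(photo, detect_sensitive_regions(photo))
    safe.save("photo_redacted.jpg")                        # only this file would leave the device
```

A zero-retention guarantee, by contrast, cannot be enforced from client code alone; it requires a contractual or verifiable commitment from the GenAI provider not to store uploaded images.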
Related papers
- Understanding Users' Security and Privacy Concerns and Attitudes Towards Conversational AI Platforms [3.789219860006095]
We conduct a large-scale analysis of over 2.5M user posts from the r/ChatGPT Reddit community to understand users' security and privacy concerns. We find that users are concerned about each stage of the data lifecycle (i.e., collection, usage, and retention). We provide recommendations for users, platforms, enterprises, and policymakers to enhance transparency, improve data controls, and increase user trust and adoption.
arXiv Detail & Related papers (2025-04-09T03:22:48Z)
- Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models [65.2761254581209]
Based on Multi-P$^2$A, we evaluate the privacy preservation capabilities of 21 open-source and 2 closed-source Large Vision-Language Models (LVLMs). Our results reveal that current LVLMs generally pose a high risk of facilitating privacy breaches.
arXiv Detail & Related papers (2024-12-27T07:33:39Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
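The MaskDP idea summarized above, protecting non-anonymized but privacy-sensitive background regions, can be illustrated in a deliberately simplified form. The sketch below merely adds Gaussian noise where a binary privacy mask is set; it is not the paper's mechanism, which is a calibrated differential-privacy scheme.

```python
# Toy illustration (not the paper's MaskDP): perturb only the pixels flagged
# by a binary privacy mask, leaving the rest of the frame untouched.
import numpy as np

def noise_masked_regions(frame, mask, sigma=25.0, rng=None):
    """frame: HxWxC uint8 image; mask: HxW boolean array marking sensitive pixels."""
    rng = rng or np.random.default_rng(0)
    noisy = frame.astype(np.float32)
    noise = rng.normal(0.0, sigma, size=frame.shape).astype(np.float32)
    noisy += noise * mask[..., None]      # noise is applied only inside the mask
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Example: protect the left half of a synthetic 64x64 RGB frame.
frame = np.full((64, 64, 3), 128, dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[:, :32] = True
protected = noise_masked_regions(frame, mask)
```

A formal differential-privacy guarantee would additionally require calibrating the noise to a sensitivity bound and tracking a privacy budget, which this toy example does not do.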
- Ethical Challenges in Computer Vision: Ensuring Privacy and Mitigating Bias in Publicly Available Datasets [0.0]
Computer vision has become a vital tool in many industries, including medical care, security systems, and trade. This paper aims to shed light on the ethical problems of creating and deploying computer vision technology.
arXiv Detail & Related papers (2024-08-31T00:59:29Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Understanding How to Inform Blind and Low-Vision Users about Data Privacy through Privacy Question Answering Assistants [23.94659412932831]
Blind and low-vision (BLV) users face heightened security and privacy risks, but their risk mitigation is often insufficient.
Our study sheds light on BLV users' expectations when it comes to usability, accessibility, trust and equity issues regarding digital data privacy.
arXiv Detail & Related papers (2023-10-12T19:51:31Z)
- Privacy Preservation in Artificial Intelligence and Extended Reality (AI-XR) Metaverses: A Survey [3.0151762748441624]
The metaverse envisions a virtual universe where individuals can interact, create, and participate in a wide range of activities.
Privacy in the metaverse is a critical concern as the concept evolves and immersive virtual experiences become more prevalent.
We explore various privacy challenges that future metaverses are expected to face, given their reliance on AI for tracking users.
arXiv Detail & Related papers (2023-09-19T11:56:12Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and Privacy Challenges [33.50215933003216]
This survey focuses on eye tracking in virtual reality (VR) and its privacy implications.
We first cover major works in eye tracking, VR, and privacy areas between the years 2012 and 2022.
We focus on eye-based authentication as well as computational methods to preserve the privacy of individuals and their eye-tracking data in VR.
arXiv Detail & Related papers (2023-05-23T14:02:38Z)
- Hiding Visual Information via Obfuscating Adversarial Perturbations [47.315523613407244]
We propose an adversarial visual information hiding method to protect the visual privacy of data.
Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data.
Experimental results on the recognition and classification tasks demonstrate that the proposed method can effectively hide visual information.
arXiv Detail & Related papers (2022-09-30T08:23:26Z)
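The recipe named in the entry above, generating obfuscating adversarial perturbations that suppress the visual information a model can recover, can be sketched generically as a feature-space attack. The backbone, budget, and step sizes below are illustrative assumptions rather than the authors' exact method.

```python
# Generic sketch: perturb an image within an L-infinity budget so that its deep
# features diverge from the original's (cosine similarity is driven down).
import torch
import torchvision.models as models

def obfuscate(image, eps=8 / 255, alpha=2 / 255, steps=40):
    """image: 1x3xHxW float tensor in [0, 1]; returns a perturbed copy."""
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    features = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
    for p in features.parameters():
        p.requires_grad_(False)

    with torch.no_grad():
        reference = features(image).flatten(1)             # features of the clean image

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        current = features(image + delta).flatten(1)
        loss = torch.nn.functional.cosine_similarity(current, reference).mean()
        loss.backward()                                     # we minimize similarity
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)                         # stay within the perturbation budget
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixel values valid
        delta.grad.zero_()
    return (image + delta).detach()
```

With a small budget such as eps = 8/255, the perturbed image stays visually close to the original while its extracted features no longer match, which is the general effect the entry describes.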
- Exploring and Improving the Accessibility of Data Privacy-related Information for People Who Are Blind or Low-vision [22.66113008033347]
We present a study of privacy attitudes and behaviors of people who are blind or low vision.
Our study involved in-depth interviews with 21 US participants.
One objective of the study is to better understand this user group's needs for more accessible privacy tools.
arXiv Detail & Related papers (2022-08-21T20:54:40Z)
- Privacy-preserving Graph Analytics: Secure Generation and Federated Learning [72.90158604032194]
We focus on the privacy-preserving analysis of graph data, which provides the crucial capacity to represent rich attributes and relationships.
We discuss two directions, namely privacy-preserving graph generation and federated graph learning, which can jointly enable the collaboration among multiple parties each possessing private graph data.
arXiv Detail & Related papers (2022-06-30T18:26:57Z)
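Of the two directions above, federated graph learning is the easier one to illustrate: each party trains on its private graph and shares only model parameters. The sketch below is a bare-bones federated-averaging round with a stand-in local update and assumed shapes; a real system would use an actual graph model and add secure aggregation.

```python
# Bare-bones FedAvg round: raw adjacency matrices never leave their owners;
# only (tiny, illustrative) parameter vectors are sent to the server.
import numpy as np

def local_update(weights, private_graph, lr=0.1):
    """Stand-in for local training on one party's private graph: nudge the
    parameters toward a statistic of the graph (its mean node degree)."""
    degrees = private_graph.sum(axis=1)
    return weights + lr * (degrees.mean() - weights)

def federated_round(global_weights, private_graphs):
    """Server step: average the locally updated parameters."""
    updates = [local_update(global_weights.copy(), g) for g in private_graphs]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
private_graphs = [(rng.random((20, 20)) < 0.2).astype(float) for _ in range(3)]
weights = np.zeros(4)                      # toy shared "model"
for _ in range(5):                         # a few communication rounds
    weights = federated_round(weights, private_graphs)
```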
- More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
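As a reminder of the core primitive the survey above builds on, here is the textbook Laplace mechanism for an epsilon-differentially-private count; it is a standard example, not code from the paper.

```python
# Textbook Laplace mechanism: a counting query has sensitivity 1, so adding
# Laplace(0, 1/epsilon) noise to the count satisfies epsilon-differential privacy.
import numpy as np

def dp_count(records, predicate, epsilon=0.5, rng=None):
    """Return a noisy count of the records matching `predicate`."""
    rng = rng or np.random.default_rng()
    true_count = sum(1 for r in records if predicate(r))
    sensitivity = 1.0   # adding or removing one record changes the count by at most 1
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

ages = [34, 29, 41, 52, 38, 27]
noisy_count = dp_count(ages, lambda a: a >= 40, epsilon=0.5)
```

Gaussian noise and composition rules extend the same idea to model training and the other AI uses the summary mentions.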
- I-ViSE: Interactive Video Surveillance as an Edge Service using Unsupervised Feature Queries [70.69741666849046]
This paper proposes an Interactive Video Surveillance as an Edge service (I-ViSE) based on unsupervised feature queries.
An I-ViSE prototype is built following the edge-fog computing paradigm, and the experimental results verify that the I-ViSE scheme meets the design goal of scene recognition in less than two seconds.
arXiv Detail & Related papers (2020-03-09T14:26:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.