Beyond Blanket Masking: Examining Granularity for Privacy Protection in Images Captured by Blind and Low Vision Users
- URL: http://arxiv.org/abs/2508.09245v1
- Date: Tue, 12 Aug 2025 17:56:36 GMT
- Title: Beyond Blanket Masking: Examining Granularity for Privacy Protection in Images Captured by Blind and Low Vision Users
- Authors: Jeffri Murrugarra-LLerena, Haoran Niu, K. Suzanne Barber, Hal Daumé III, Yang Trista Cao, Paola Cascante-Bonilla
- Abstract summary: We propose FiG-Priv, a fine-grained privacy protection framework that selectively masks only high-risk private information. Our approach integrates fine-grained segmentation with a data-driven risk scoring mechanism. We evaluate our framework using the BIV-Priv-Seg dataset and show that FiG-Priv preserves +26% of image content.
- Score: 23.61740342584077
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As visual assistant systems powered by visual language models (VLMs) become more prevalent, concerns over user privacy have grown, particularly for blind and low vision users who may unknowingly capture personal private information in their images. Existing privacy protection methods rely on coarse-grained segmentation, which uniformly masks entire private objects, often at the cost of usability. In this work, we propose FiG-Priv, a fine-grained privacy protection framework that selectively masks only high-risk private information while preserving low-risk information. Our approach integrates fine-grained segmentation with a data-driven risk scoring mechanism. We evaluate our framework using the BIV-Priv-Seg dataset and show that FiG-Priv preserves +26% of image content, enhancing the ability of VLMs to provide useful responses by 11% and identify the image content by 45%, while ensuring privacy protection. Project Page: https://artcs1.github.io/VLMPrivacy/
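The core idea of the abstract above can be sketched as follows: given per-region segmentation masks and risk scores, hide only the regions whose score exceeds a threshold, rather than blanket-masking every private object. This is a minimal illustration, not the FiG-Priv implementation; the function name, the threshold value, and the use of simple black-out masking are all assumptions for the sketch.

```python
import numpy as np

# Hypothetical threshold; FiG-Priv derives risk from a data-driven
# scoring mechanism rather than a fixed constant.
RISK_THRESHOLD = 0.7

def selective_mask(image, regions):
    """Mask only high-risk regions, keeping low-risk content visible.

    image:   H x W x 3 uint8 array
    regions: list of (mask, risk) pairs, where mask is a boolean H x W
             array and risk is a score in [0, 1].
    """
    out = image.copy()
    for mask, risk in regions:
        if risk >= RISK_THRESHOLD:  # only high-risk content is hidden
            out[mask] = 0           # black out the region
    return out

# Toy example: a white 4x4 image with one high-risk and one low-risk region.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
high = np.zeros((4, 4), dtype=bool); high[:2, :2] = True
low = np.zeros((4, 4), dtype=bool); low[2:, 2:] = True
result = selective_mask(img, [(high, 0.9), (low, 0.2)])
# result[0, 0] is masked (black); result[3, 3] is preserved (white)
```

A coarse-grained baseline would mask both regions; keeping the low-risk one is what preserves the extra image content the abstract reports.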
Related papers
- Who Can See Through You? Adversarial Shielding Against VLM-Based Attribute Inference Attacks [13.326888254423901]
VLM-based attribute inference attacks have emerged as a serious privacy concern, enabling adversaries to infer private attributes from images shared on social media. We propose a novel protection method that jointly optimizes privacy suppression and utility preservation under a visual consistency constraint. Our method effectively reduces PAR below 25%, keeps NPAR above 88%, and generalizes well to unseen and paraphrased privacy questions.
arXiv Detail & Related papers (2025-12-20T08:08:50Z)
- Privacy Blur: Quantifying Privacy and Utility for Image Data Release [48.64095568151945]
We show that practical implementations of Gaussian blurring are reversible enough to break privacy. We take a closer look at the privacy-utility tradeoffs offered by three other obfuscation algorithms. Pixelization and noise addition offer both privacy and utility for a number of computer vision tasks.
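Pixelization, one of the obfuscation algorithms the abstract above favors over Gaussian blurring, can be sketched in a few lines: average each non-overlapping block, which irreversibly discards within-block detail. This is a generic sketch of the technique, not the paper's code; the function name and default block size are assumptions.

```python
import numpy as np

def pixelate(image, block=8):
    """Pixelize an image by averaging non-overlapping blocks.

    Unlike practical Gaussian blurring, which can be inverted well
    enough to break privacy, block averaging destroys all detail
    finer than the block size.
    """
    h, w = image.shape[:2]
    out = image.astype(float).copy()
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Replace the block with its mean value (per channel if 3D).
            out[y:y+block, x:x+block] = out[y:y+block, x:x+block].mean(axis=(0, 1))
    return out.astype(image.dtype)

# Usage: an 8x8 gradient collapses to four constant 4x4 blocks.
img = np.arange(64, dtype=np.uint8).reshape(8, 8)
pix = pixelate(img, block=4)
```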
arXiv Detail & Related papers (2025-12-18T02:01:17Z)
- Privacy-Preserving in Connected and Autonomous Vehicles Through Vision to Text Transformation [0.9831489366502302]
This paper introduces a novel privacy-preserving framework that leverages feedback-based reinforcement learning (RL) and vision-language models (VLMs). The main idea is to convert images into semantically equivalent textual descriptions, ensuring that scene-relevant information is retained while visual privacy is preserved. Evaluation results demonstrate significant improvements in both privacy protection and textual quality.
arXiv Detail & Related papers (2025-06-18T20:02:24Z)
- Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models [65.2761254581209]
Based on Multi-P$^2$A, we evaluate the privacy preservation capabilities of 21 open-source and 2 closed-source Large Vision-Language Models (LVLMs). Our results reveal that current LVLMs generally pose a high risk of facilitating privacy breaches.
arXiv Detail & Related papers (2024-12-27T07:33:39Z)
- Image Privacy Protection: A Survey [32.020322218775526]
Images serve as a crucial medium for communication, presenting information in a visually engaging format that facilitates rapid comprehension of key points. If not managed properly, this information may be vulnerable to exploitation for personal gain, potentially infringing on privacy rights and other legal entitlements. Existing reviews tend to categorize either by specific scenarios, or by specific privacy objectives.
arXiv Detail & Related papers (2024-12-05T08:09:25Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- BIV-Priv-Seg: Locating Private Content in Images Taken by People With Visual Impairments [25.365045519494874]
BIV-Priv-Seg is the first dataset originating from people with visual impairments that shows private content. It contains 1,028 images with segmentation annotations for 16 private object categories. We evaluate modern models' performance for locating private content in the dataset.
arXiv Detail & Related papers (2024-07-25T17:57:48Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- LDP-Feat: Image Features with Local Differential Privacy [10.306943706927006]
We propose two novel inversion attacks to show that it is possible to recover the original image features from embeddings.
We propose the first method to privatize image features via local differential privacy, which, unlike prior approaches, provides a guaranteed bound for privacy leakage.
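Privatizing features via local differential privacy, as described above, can be illustrated with the standard Laplace mechanism: clip each feature coordinate to a bounded range, then add calibrated noise. This is a generic local-DP sketch under assumed parameters, not the LDP-Feat method, which provides its own guaranteed leakage bound.

```python
import numpy as np

def privatize_features(features, epsilon=1.0, sensitivity=1.0):
    """Per-coordinate epsilon-LDP on a feature vector via Laplace noise.

    Each coordinate is clipped to [-sensitivity, sensitivity], so any
    single input can shift a coordinate by at most 2 * sensitivity,
    which fixes the Laplace scale at 2 * sensitivity / epsilon.
    """
    clipped = np.clip(features, -sensitivity, sensitivity)
    scale = 2.0 * sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=clipped.shape)
    return clipped + noise

# Usage: the privatized vector keeps the feature dimensionality but no
# longer reveals exact values (note 2.0 is first clipped to 1.0).
np.random.seed(0)  # seeded only to make the sketch reproducible
feat = np.array([0.2, 2.0, -3.0])
priv = privatize_features(feat, epsilon=1.0, sensitivity=1.0)
```

Smaller epsilon means larger noise and stronger privacy; this tradeoff is what a guaranteed leakage bound quantifies.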
arXiv Detail & Related papers (2023-08-22T06:28:55Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.