The Phantom Menace: Unmasking Privacy Leakages in Vision-Language Models
- URL: http://arxiv.org/abs/2408.01228v2
- Date: Mon, 19 Aug 2024 13:35:05 GMT
- Title: The Phantom Menace: Unmasking Privacy Leakages in Vision-Language Models
- Authors: Simone Caldarella, Massimiliano Mancini, Elisa Ricci, Rahaf Aljundi
- Abstract summary: Vision-Language Models (VLMs) combine visual and textual understanding, rendering them well-suited for diverse tasks.
These capabilities are built upon training on large amounts of uncurated data crawled from the web.
In this paper, we assess whether these vulnerabilities exist, focusing on identity leakage.
- Score: 31.166994121531232
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Vision-Language Models (VLMs) combine visual and textual understanding, rendering them well-suited for diverse tasks like generating image captions and answering visual questions across various domains. However, these capabilities are built upon training on large amounts of uncurated data crawled from the web. The latter may include sensitive information that VLMs could memorize and leak, raising significant privacy concerns. In this paper, we assess whether these vulnerabilities exist, focusing on identity leakage. Our study leads to three key findings: (i) VLMs leak identity information, even when the vision-language alignment and the fine-tuning use anonymized data; (ii) context has little influence on identity leakage; (iii) simple, widely used anonymization techniques, like blurring, are not sufficient to address the problem. These findings underscore the urgent need for robust privacy protection strategies when deploying VLMs. Ethical awareness and responsible development practices are essential to mitigate these risks.
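To make the threat model concrete, the following is a minimal sketch of an identity-leakage probe in the spirit of the abstract, assuming a hypothetical `query_vlm(image, prompt)` wrapper around whichever VLM is being audited; the Gaussian blur stands in for the simple anonymization techniques the paper reports to be insufficient. It is illustrative only, not the paper's exact protocol.
```python
# Minimal identity-leakage probe (illustrative sketch, not the paper's exact protocol).
# query_vlm(image, prompt) is a hypothetical wrapper around the VLM under test.
from PIL import Image, ImageFilter

def query_vlm(image: Image.Image, prompt: str) -> str:
    """Hypothetical client for the VLM being audited; replace with a real one."""
    raise NotImplementedError

def leaks_identity(image_path: str, true_name: str) -> dict:
    prompt = "Who is the person in this image? Answer with a name."
    original = Image.open(image_path).convert("RGB")
    blurred = original.filter(ImageFilter.GaussianBlur(radius=12))  # naive anonymization

    results = {}
    for label, img in (("original", original), ("blurred", blurred)):
        answer = query_vlm(img, prompt)
        # Count it as leakage if the ground-truth identity appears in the free-form answer.
        results[label] = true_name.lower() in answer.lower()
    return results

# Example usage:
# print(leaks_identity("subject_042.jpg", "Jane Doe"))
# {"original": True, "blurred": True} would indicate blurring alone does not prevent leakage.
```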
Related papers
- VIP: Visual Information Protection through Adversarial Attacks on Vision-Language Models [15.158545794377169]
We frame the preservation of privacy in Vision-Language Models as an adversarial attack problem.
We propose a novel attack strategy that selectively conceals information within designated Regions of Interest (ROIs) in an image.
Experimental results across three state-of-the-art VLMs demonstrate up to a 98% reduction in detecting targeted ROIs.
arXiv Detail & Related papers (2025-07-11T19:34:01Z)
- SoK: Semantic Privacy in Large Language Models [24.99241770349404]
This paper introduces a lifecycle-centric framework to analyze semantic privacy risks across the input processing, pretraining, fine-tuning, and alignment stages of Large Language Models (LLMs).
We categorize key attack vectors and assess how current defenses, such as differential privacy, embedding encryption, edge computing, and unlearning, address these threats (a minimal sketch of the differential-privacy idea appears after this list).
We conclude by outlining open challenges, including quantifying semantic leakage, protecting multimodal inputs, balancing de-identification with generation quality, and ensuring transparency in privacy enforcement.
arXiv Detail & Related papers (2025-06-30T08:08:15Z)
- AgentStealth: Reinforcing Large Language Model for Anonymizing User-generated Text [8.758843436588297]
AgentStealth is a self-reinforcing language model for text anonymization.
We show that our method outperforms baselines in both anonymization effectiveness and utility.
Our lightweight design supports direct deployment on edge devices, avoiding cloud reliance and communication-based privacy risks.
arXiv Detail & Related papers (2025-06-26T02:48:16Z)
- Transferable Adversarial Attacks on Black-Box Vision-Language Models [63.22532779621001]
Adversarial attacks can transfer from open-source to proprietary black-box models in text-only and vision-only contexts.
We show that attackers can craft perturbations to induce specific attacker-chosen interpretations of visual information.
We discover that universal perturbations, modifications applicable to a wide set of images, can consistently induce these misinterpretations.
arXiv Detail & Related papers (2025-05-02T06:51:11Z)
- Unlearning Sensitive Information in Multimodal LLMs: Benchmark and Attack-Defense Evaluation [88.78166077081912]
We introduce a multimodal unlearning benchmark, UnLOK-VQA, and an attack-and-defense framework to evaluate methods for deleting specific multimodal knowledge from MLLMs.
Our results show that multimodal attacks outperform text-only or image-only ones, and that the most effective defense removes answer information from internal model states.
arXiv Detail & Related papers (2025-05-01T01:54:00Z)
- When Data Manipulation Meets Attack Goals: An In-depth Survey of Attacks for VLMs [15.74045364570382]
We present an in-depth survey of the attack strategies tailored for Vision-Language Models (VLMs).
We categorize these attacks based on their underlying objectives.
We outline corresponding defense mechanisms that have been proposed to mitigate these vulnerabilities.
arXiv Detail & Related papers (2025-02-10T12:20:08Z)
- Secure Visual Data Processing via Federated Learning [2.4374097382908477]
This paper addresses the need for privacy-preserving solutions in large-scale visual data processing.
We propose a new approach that combines object detection, federated learning and anonymization.
Our solution is evaluated against traditional centralized models, showing that while there is a slight trade-off in accuracy, the privacy benefits are substantial.
arXiv Detail & Related papers (2025-02-09T09:44:18Z)
- Benchmarking Vision Language Model Unlearning via Fictitious Facial Identity Dataset [94.13848736705575]
We introduce Facial Identity Unlearning Benchmark (FIUBench), a novel VLM unlearning benchmark designed to robustly evaluate the effectiveness of unlearning algorithms.
We apply a two-stage evaluation pipeline that is designed to precisely control the sources of information and their exposure levels.
Through the evaluation of four baseline VLM unlearning algorithms within FIUBench, we find that all methods remain limited in their unlearning performance.
arXiv Detail & Related papers (2024-11-05T23:26:10Z)
- VLMGuard: Defending VLMs against Malicious Prompts via Unlabeled Data [29.806775884883685]
VLMGuard is a novel learning framework that leverages the unlabeled user prompts in the wild for malicious prompt detection.
We present an automated maliciousness estimation score for distinguishing between benign and malicious samples.
Our framework does not require extra human annotations, offering strong flexibility and practicality for real-world applications.
arXiv Detail & Related papers (2024-10-01T00:37:29Z)
- Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions [12.451936012379319]
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains.
Their reliance on massive internet-sourced datasets for training brings notable privacy issues.
Certain application-specific scenarios may require fine-tuning these models on private data.
arXiv Detail & Related papers (2024-08-10T05:41:19Z)
- Robust Utility-Preserving Text Anonymization Based on Large Language Models [80.5266278002083]
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
Existing techniques face the emerging challenge of re-identification attacks enabled by Large Language Models.
This paper proposes a framework composed of three LLM-based components -- a privacy evaluator, a utility evaluator, and an optimization component.
arXiv Detail & Related papers (2024-07-16T14:28:56Z)
- Exploring the Privacy Protection Capabilities of Chinese Large Language Models [19.12726985060863]
We have devised a three-tiered progressive framework for evaluating privacy in language systems.
Our primary objective is to comprehensively evaluate the sensitivity of large language models to private information.
Our observations indicate that existing Chinese large language models universally show privacy protection shortcomings.
arXiv Detail & Related papers (2024-03-27T02:31:54Z)
- HFORD: High-Fidelity and Occlusion-Robust De-identification for Face Privacy Protection [60.63915939982923]
Face de-identification is a practical way to solve the identity protection problem.
The existing facial de-identification methods have revealed several problems.
We present a High-Fidelity and Occlusion-Robust De-identification (HFORD) method to deal with these issues.
arXiv Detail & Related papers (2023-11-15T08:59:02Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models reveal private information in contexts that humans would not, doing so 39% and 57% of the time for the two models evaluated.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Context-Aware Differential Privacy for Language Modeling [41.54238543400462]
This paper introduces the Context-Aware Differentially Private Language Model (CADP-LM).
CADP-LM relies on the notion of context to define and audit potentially sensitive information.
A unique characteristic of CADP-LM is its ability to target the protection of sensitive sentences and contexts only.
arXiv Detail & Related papers (2023-01-28T20:06:16Z)
- Privacy in Deep Learning: A Survey [16.278779275923448]
The ever-growing advances of deep learning in many areas have led to the adoption of Deep Neural Networks (DNNs) in production systems.
The availability of large datasets and high computational power are the main contributors to these advances.
This poses serious privacy concerns as this data can be misused or leaked through various vulnerabilities.
arXiv Detail & Related papers (2020-04-25T23:47:25Z)
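Several of the papers listed above mention differential privacy among the available defenses. As a minimal, generic illustration of that idea only (not an implementation from any listed paper), the sketch below applies the standard Gaussian mechanism to a bounded-sensitivity counting query; the parameter values are placeholders.
```python
# Generic Gaussian mechanism sketch for (epsilon, delta)-differential privacy.
# Illustrative only; the query and parameters are placeholders, not taken from any listed paper.
import math
import numpy as np

def gaussian_mechanism(value: float, sensitivity: float, epsilon: float, delta: float) -> float:
    """Release `value` (a query with L2 sensitivity `sensitivity`) with Gaussian noise
    calibrated via the standard bound sigma >= sqrt(2*ln(1.25/delta)) * sensitivity / epsilon
    (valid for epsilon <= 1)."""
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    return value + np.random.normal(loc=0.0, scale=sigma)

# Example: privately release the count of records matching some sensitive predicate.
true_count = 42  # a counting query has sensitivity 1
noisy_count = gaussian_mechanism(true_count, sensitivity=1.0, epsilon=0.5, delta=1e-5)
print(round(noisy_count))
```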
This list is automatically generated from the titles and abstracts of the papers on this site.