Beyond Pixels: Semantic-aware Typographic Attack for Geo-Privacy Protection
- URL: http://arxiv.org/abs/2511.12575v1
- Date: Sun, 16 Nov 2025 12:27:59 GMT
- Title: Beyond Pixels: Semantic-aware Typographic Attack for Geo-Privacy Protection
- Authors: Jiayi Zhu, Yihao Huang, Yue Cao, Xiaojun Jia, Qing Guo, Felix Juefei-Xu, Geguang Pu, Bin Wang
- Abstract summary: Large Visual Language Models (LVLMs) can infer a social media user's geolocation directly from shared images, leading to unintended privacy leakage. Adversarial image perturbations offer a potential direction for geo-privacy protection but require relatively strong distortions to be effective against LVLMs. We identify typographic attacks as a promising direction for protecting geo-privacy by adding a text extension outside the visual content.
- Score: 43.65944873827891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Visual Language Models (LVLMs) now pose a serious yet overlooked privacy threat, as they can infer a social media user's geolocation directly from shared images, leading to unintended privacy leakage. While adversarial image perturbations offer a potential direction for geo-privacy protection, they require relatively strong distortions to be effective against LVLMs, which noticeably degrade visual quality and diminish an image's value for sharing. To overcome this limitation, we identify typographic attacks as a promising direction for protecting geo-privacy by adding a text extension outside the visual content. We further investigate which textual semantics are effective in disrupting geolocation inference and design a two-stage, semantics-aware typographic attack that generates deceptive text to protect user privacy. Extensive experiments across three datasets demonstrate that our approach significantly reduces the geolocation prediction accuracy of five state-of-the-art commercial LVLMs, establishing a practical, visual-quality-preserving protection strategy against emerging geo-privacy threats.
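The paper's core move, appending deceptive text outside the visual content rather than perturbing pixels, is easy to picture. Below is a minimal sketch assuming Pillow; the function name, strip layout, and example caption are illustrative stand-ins, not the authors' two-stage method:

```python
# Minimal "text extension" sketch: render a deceptive caption on a strip
# appended below the photo, leaving the original pixels untouched.
from PIL import Image, ImageDraw, ImageFont

def add_text_extension(image_path: str, deceptive_text: str,
                       strip_height: int = 60) -> Image.Image:
    """Append a text strip below the image instead of perturbing pixels."""
    img = Image.open(image_path).convert("RGB")
    canvas = Image.new("RGB", (img.width, img.height + strip_height), "white")
    canvas.paste(img, (0, 0))
    draw = ImageDraw.Draw(canvas)
    font = ImageFont.load_default()  # swap in a larger TTF font in practice
    draw.text((10, img.height + 10), deceptive_text, fill="black", font=font)
    return canvas

# Hypothetical usage: a misleading location cue contradicting the scene.
add_text_extension("beach_photo.jpg", "Throwback to Lake Geneva!").save(
    "beach_photo_protected.jpg")
```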
Related papers
- Do Vision-Language Models Respect Contextual Integrity in Location Disclosure? [35.91273000038155]
Vision-language models (VLMs) have demonstrated strong performance in image geolocation. This poses a significant privacy risk, as they can be exploited to infer sensitive locations from casually shared photos. We introduce VLM-GEOPRIVACY, a benchmark that challenges VLMs to interpret latent social norms and contextual cues in real-world images.
arXiv Detail & Related papers (2026-02-04T20:24:14Z)
- Personalized 3D Spatiotemporal Trajectory Privacy Protection with Differential and Distortion Geo-Perturbation [64.60694805725727]
This paper proposes a personalized 3D spatiotemporal trajectory privacy protection mechanism named 3DSTPM. We analyze the characteristics of attackers that exploit correlations between locations in a trajectory and present the attack model. Results demonstrate that the proposed 3DSTPM effectively reduces loss while meeting the user's personalized privacy protection needs.
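For orientation, location-DP mechanisms of this family build on geo-perturbation primitives such as the planar Laplace of geo-indistinguishability. The sketch below shows only that generic primitive; it is not the paper's 3DSTPM mechanism, and the metre-to-degree conversion is a rough assumption:

```python
# Generic planar-Laplace geo-perturbation (geo-indistinguishability),
# NOT the paper's 3DSTPM mechanism: sample a uniform direction and a
# radius r ~ Gamma(2, 1/eps), then offset the true coordinates.
import math
import random

def planar_laplace(lat: float, lon: float, epsilon: float):
    theta = random.uniform(0.0, 2.0 * math.pi)   # uniform random direction
    r = random.gammavariate(2, 1.0 / epsilon)    # planar-Laplace radius (m)
    # Crude metres-to-degrees conversion; assumption, ignores latitude.
    return (lat + r * math.sin(theta) / 111_320,
            lon + r * math.cos(theta) / 111_320)

print(planar_laplace(48.8584, 2.2945, epsilon=0.01))  # ~200 m average noise
```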
arXiv Detail & Related papers (2025-11-27T07:41:14Z)
- The Double-edged Sword of LLM-based Data Reconstruction: Understanding and Mitigating Contextual Vulnerability in Word-level Differential Privacy Text Sanitization [53.51921540246166]
We show that Large Language Models (LLMs) can exploit the contextual vulnerability of DP-sanitized texts. Experiments uncover a double-edged sword effect of LLM reconstructions on privacy and utility. We propose recommendations for using data reconstruction as a post-processing step.
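As background, word-level DP sanitization is often implemented as an embedding-space metric-DP mechanism: perturb each word's vector, then snap to the nearest vocabulary word. The toy sketch below uses random placeholder embeddings and a six-word vocabulary; it illustrates the kind of sanitizer being attacked, not the paper's reconstruction method:

```python
# Toy word-level metric-DP sanitizer: add calibrated noise to a word's
# embedding, then return the nearest vocabulary word. Embeddings here
# are random stand-ins for pretrained vectors.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["paris", "london", "tokyo", "city", "museum", "beach"]
EMB = {w: rng.standard_normal(8) for w in VOCAB}  # placeholder vectors

def sanitize_word(word: str, epsilon: float) -> str:
    v = EMB[word]
    direction = rng.standard_normal(v.shape)
    direction /= np.linalg.norm(direction)                  # uniform direction
    radius = rng.gamma(shape=v.size, scale=1.0 / epsilon)   # noise norm
    noisy = v + radius * direction
    return min(VOCAB, key=lambda w: float(np.linalg.norm(EMB[w] - noisy)))

print([sanitize_word(w, epsilon=5.0) for w in ["paris", "museum"]])
```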
arXiv Detail & Related papers (2025-08-26T12:22:45Z)
- GeoShield: Safeguarding Geolocation Privacy from Vision-Language Models via Adversarial Perturbations [48.78781663571235]
Vision-Language Models (VLMs) can infer users' locations from publicly shared images, posing a substantial risk to geoprivacy. We propose GeoShield, a novel adversarial framework designed for robust geoprivacy protection in real-world scenarios.
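Adversarial frameworks of this kind typically start from a projected-gradient (PGD) loop that maximizes a geolocation model's loss inside a small L-infinity ball. The sketch below is that generic loop, with `model` and `geo_label` as placeholders rather than GeoShield's actual components:

```python
# Generic L-infinity PGD loop (not GeoShield itself): ascend the
# geolocation loss, then project back into the eps-ball around the image.
import torch
import torch.nn.functional as F

def pgd_perturb(model, image, geo_label, eps=8 / 255, alpha=2 / 255, steps=10):
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), geo_label)
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # gradient ascent
            adv = image + (adv - image).clamp(-eps, eps)  # project to ball
            adv = adv.clamp(0.0, 1.0)                     # valid pixel range
    return adv.detach()
```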
arXiv Detail & Related papers (2025-08-05T08:37:06Z)
- SoK: Semantic Privacy in Large Language Models [24.99241770349404]
This paper introduces a lifecycle-centric framework to analyze semantic privacy risks across the input processing, pretraining, fine-tuning, and alignment stages of Large Language Models (LLMs). We categorize key attack vectors and assess how current defenses, such as differential privacy, embedding encryption, edge computing, and unlearning, address these threats. We conclude by outlining open challenges, including quantifying semantic leakage, protecting multimodal inputs, balancing de-identification with generation quality, and ensuring transparency in privacy enforcement.
arXiv Detail & Related papers (2025-06-30T08:08:15Z)
- Privacy-Preserving in Connected and Autonomous Vehicles Through Vision to Text Transformation [0.9831489366502302]
This paper introduces a novel privacy-preserving framework that leverages feedback-based reinforcement learning (RL) and vision-language models (VLMs). The main idea is to convert images into semantically equivalent textual descriptions, ensuring that scene-relevant information is retained while visual privacy is preserved. Evaluation results demonstrate significant improvements in both privacy protection and textual quality.
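The basic transformation, sharing a caption instead of pixels, can be approximated with an off-the-shelf captioner. The sketch below uses the Hugging Face image-to-text pipeline with a public BLIP checkpoint as a stand-in; the paper's RL feedback loop is not reproduced:

```python
# Vision-to-text surrogate: replace the image with a textual description.
# A public captioning model stands in for the paper's RL-tuned pipeline.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")

def to_text_surrogate(image_path: str) -> str:
    return captioner(image_path)[0]["generated_text"]

print(to_text_surrogate("dashcam_frame.jpg"))  # hypothetical input frame
```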
arXiv Detail & Related papers (2025-06-18T20:02:24Z)
- Doxing via the Lens: Revealing Location-related Privacy Leakage on Multi-modal Large Reasoning Models [37.18986847375693]
Adversaries can infer sensitive geolocation information from user-generated images. DoxBench is a curated dataset of 500 real-world images reflecting diverse privacy scenarios. Our findings highlight the urgent need to reassess inference-time privacy risks in MLRMs.
arXiv Detail & Related papers (2025-04-27T22:26:45Z)
- PersGuard: Preventing Malicious Personalization via Backdoor Attacks on Pre-trained Text-to-Image Diffusion Models [51.458089902581456]
We introduce PersGuard, a novel backdoor-based approach that prevents malicious personalization of specific images. Our method significantly outperforms existing techniques, offering a more robust solution for privacy and copyright protection.
arXiv Detail & Related papers (2025-02-22T09:47:55Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
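A rough picture of what such positive/negative instruction pairs might look like; the field names and messages below are invented for illustration, not PrivacyMind's actual data format:

```python
# Hypothetical instruction-tuning pairs: the positive output demonstrates
# the desired redaction, the negative output demonstrates the leak to avoid.
examples = [
    {
        "instruction": "Summarize without revealing personal identifiers.",
        "input": "John Smith at 42 Elm St called about his refund.",
        "output": "A customer called about a refund.",
        "label": "positive",
    },
    {
        "instruction": "Summarize without revealing personal identifiers.",
        "input": "John Smith at 42 Elm St called about his refund.",
        "output": "John Smith of 42 Elm St wants a refund.",  # leaks PII
        "label": "negative",
    },
]
```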
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
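Uncertainty gains like the one reported here are commonly measured as the Shannon entropy of an attribute classifier's prediction before versus after obfuscation. A minimal sketch with made-up probabilities (not the paper's numbers):

```python
# Entropy-based uncertainty gain in bits; the distributions are invented.
import numpy as np

def entropy_bits(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p_clean = np.array([0.90, 0.05, 0.05])       # confident attribute guess
p_obfuscated = np.array([0.40, 0.35, 0.25])  # flatter after obfuscation

print(entropy_bits(p_obfuscated) - entropy_bits(p_clean))  # gain in bits
```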
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.