From Defense to Advocacy: Empowering Users to Leverage the Blind Spot of AI Inference
- URL: http://arxiv.org/abs/2601.11817v1
- Date: Fri, 16 Jan 2026 22:42:27 GMT
- Title: From Defense to Advocacy: Empowering Users to Leverage the Blind Spot of AI Inference
- Authors: Yumou Wei, John Carney, John Stamper, Nancy Belmont
- Abstract summary: Most privacy regulations function as a passive defensive shield that users must wield themselves. As organizations increasingly use AI to make inferences, the rapid expansion of Blind Self emerges as a critical challenge. Building on the theory of Contextual Integrity, we propose a paradigm shift from defensive privacy management to proactive privacy advocacy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most privacy regulations function as a passive defensive shield that users must wield themselves. Users are incessantly asked to "opt-in" or "opt-out" of data collection, forced to make defensive decisions whose consequences are increasingly difficult to predict. Viewed through the Johari Window, a psychological framework of self-awareness based on what is known and unknown to self and others, current policies require users to manage the Open Self and shield the Hidden Self through notice and consent. However, as organizations increasingly use AI to make inferences, the rapid expansion of Blind Self, attributes known to algorithms but unknown to the user, emerges as a critical challenge. We illustrate how current regulations fall short because they focus on data collection rather than inference and leave this blind spot unguarded. Building on the theory of Contextual Integrity, we propose a paradigm shift from defensive privacy management to proactive privacy advocacy. We argue for the necessity of personal advocacy agents capable of operationalizing social norms to harness the power of AI inference. By illuminating the hidden inferences that users can strategically leverage or suppress, these agents not only restrain the growth of Blind Self but also mine it for value. By transforming the Unknown Self into a personal asset for users, we can foster a flow of personal information that is equitable, transparent, and individually beneficial in the age of AI.
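The advocacy agents argued for above can be thought of as Contextual Integrity checkers that run on the user's behalf. The sketch below is a minimal illustration of that idea, assuming Nissenbaum's five flow parameters and a hand-written norm table; the contexts, attributes, and norms are hypothetical examples, not an implementation from the paper.

```python
# A minimal sketch of a privacy-advocacy check grounded in Contextual
# Integrity: an information flow is described by five parameters, and an
# advocacy agent compares proposed flows -- including AI-inferred attributes
# the user never disclosed (the "Blind Self") -- against context-specific
# norms. All norms and attributes here are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str                  # who transmits the information
    recipient: str               # who receives it
    subject: str                 # whom the information is about
    info_type: str               # e.g. "health_status", "income_bracket"
    transmission_principle: str  # e.g. "with_consent", "inferred"

# Hypothetical context norms: which info types may flow to which
# recipients, and under which transmission principles.
NORMS = {
    ("physician", "health_status"): {"with_consent"},
    ("advertiser", "income_bracket"): set(),  # no principle legitimizes this flow
    ("job_platform", "skills_profile"): {"with_consent", "inferred"},
}

def advise(flow: Flow) -> str:
    """Return the agent's advice for a proposed flow of an inferred attribute."""
    allowed = NORMS.get((flow.recipient, flow.info_type), set())
    if flow.transmission_principle in allowed:
        # Norm-conformant inference: surface it so the user can leverage it.
        return "leverage"
    # Norm-violating inference: the agent should act to suppress it.
    return "suppress"

if __name__ == "__main__":
    covert = Flow("platform", "advertiser", "user", "income_bracket", "inferred")
    useful = Flow("platform", "job_platform", "user", "skills_profile", "inferred")
    print(advise(covert))  # suppress
    print(advise(useful))  # leverage
```

The key design choice is that a norm-conformant inference is surfaced as an asset to leverage, while a norm-violating one is flagged for suppression, mirroring the paper's framing of the Blind Self as something to both restrain and mine.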
Related papers
- Contextualized Privacy Defense for LLM Agents [84.30907378390512]
LLM agents increasingly act on users' personal information, yet existing privacy defenses remain limited in both design and adaptability. We propose Contextualized Defense Instructing (CDI), a new privacy defense paradigm. We show that our CDI consistently achieves a better balance between privacy preservation (94.2%) and helpfulness (80.6%) than baselines.
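The summary does not spell out CDI's mechanics, but one plausible reading of "defense instructing" is injecting context-specific privacy instructions into the agent's prompt before it acts. The sketch below illustrates that reading only; the context labels, instruction texts, and `build_agent_prompt` helper are all hypothetical.

```python
# A minimal sketch of contextualized privacy instructing: before an LLM
# agent acts, a context-specific privacy instruction is prepended to its
# prompt. The contexts and instruction texts are illustrative assumptions.
DEFENSE_INSTRUCTIONS = {
    "shopping": "Share the delivery address only with the checkout form; never in free-text fields.",
    "scheduling": "Reveal availability, but not the reason for existing appointments.",
}

def build_agent_prompt(context: str, task: str, user_profile: dict) -> str:
    defense = DEFENSE_INSTRUCTIONS.get(
        context, "Disclose only information strictly required by the task."
    )
    return (
        f"[PRIVACY INSTRUCTION] {defense}\n"
        f"[TASK] {task}\n"
        f"[AVAILABLE USER DATA] {sorted(user_profile)}"  # expose keys, not values
    )

print(build_agent_prompt("scheduling", "Book a dentist appointment.",
                         {"name": "...", "calendar": "...", "medical_history": "..."}))
```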
arXiv Detail & Related papers (2026-03-03T13:35:33Z) - Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose $(\varepsilon_i, \delta_i, \overline{\varepsilon})$-iDP, a privacy contract that uses $f$-divergences to provide users with a hard upper bound on their excess vulnerability.
arXiv Detail & Related papers (2026-01-19T10:26:12Z) - VoxGuard: Evaluating User and Attribute Privacy in Speech via Membership Inference Attacks [51.68795949691009]
We introduce VoxGuard, a framework grounded in differential privacy and membership inference. For attributes, we show that simple transparent attacks recover gender and accent with near-perfect accuracy even after anonymization. Our results demonstrate that the equal error rate (EER) substantially underestimates leakage, highlighting the need for low-FPR evaluation.
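The low-FPR point generalizes beyond VoxGuard: an attack with a near-chance EER can still identify a vulnerable subpopulation with high confidence. The sketch below demonstrates this with synthetic attack scores; it is not the paper's evaluation code.

```python
# Why EER can understate membership leakage: synthetic attack scores where
# most members are indistinguishable from non-members, but a subpopulation
# leaks strongly. EER looks near chance; TPR at 0.1% FPR does not.
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.001):
    # Threshold chosen so only target_fpr of non-members score above it.
    threshold = np.quantile(nonmember_scores, 1.0 - target_fpr)
    return float(np.mean(member_scores > threshold))

def eer(member_scores, nonmember_scores):
    # Scan all observed scores as thresholds; EER is where FPR ~= FNR.
    ts = np.sort(np.concatenate([member_scores, nonmember_scores]))
    fpr = 1.0 - np.searchsorted(np.sort(nonmember_scores), ts, side="right") / len(nonmember_scores)
    fnr = np.searchsorted(np.sort(member_scores), ts, side="right") / len(member_scores)
    i = int(np.argmin(np.abs(fpr - fnr)))
    return (fpr[i] + fnr[i]) / 2

rng = np.random.default_rng(0)
members = np.concatenate([
    rng.normal(0.0, 1.0, 1_800),  # most members are indistinguishable...
    rng.normal(6.0, 0.5, 200),    # ...but a subpopulation leaks strongly
])
nonmembers = rng.normal(0.0, 1.0, 2_000)

print(f"EER ~ {eer(members, nonmembers):.2f}")                     # near chance (~0.47)
print(f"TPR @ 0.1% FPR ~ {tpr_at_fpr(members, nonmembers):.2f}")   # ~0.10, 100x the FPR
```

An averaged metric like EER is dominated by the indistinguishable majority, while the low-FPR operating point exposes the members the attack identifies with near certainty.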
arXiv Detail & Related papers (2025-09-22T20:57:48Z) - Guarding Your Conversations: Privacy Gatekeepers for Secure Interactions with Cloud-Based AI Models [0.34998703934432673]
We propose the concept of an "LLM gatekeeper": a lightweight, locally run model that filters out sensitive information from user queries before they are sent to the potentially untrustworthy, though highly capable, cloud-based LLM. Through experiments with human subjects, we demonstrate that this dual-model approach introduces minimal overhead while significantly enhancing user privacy, without compromising the quality of LLM responses.
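The paper's gatekeeper is a locally run model; as a minimal sketch of the same data path, the example below uses regex redaction as a stand-in for that local model, and `call_cloud_llm` is a hypothetical stub rather than a real API.

```python
# A minimal sketch of the gatekeeper pattern: a local filter removes
# sensitive details from a query before it reaches a cloud LLM. A regex
# stand-in plays the role of the lightweight local model here.
import re

SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def gatekeep(query: str) -> str:
    """Replace sensitive spans with typed placeholders before upload."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        query = pattern.sub(f"[{label}]", query)
    return query

def answer(query: str) -> str:
    safe_query = gatekeep(query)       # runs locally
    return call_cloud_llm(safe_query)  # only redacted text leaves the device

def call_cloud_llm(prompt: str) -> str:  # hypothetical stub, not a real API
    return f"(cloud model sees) {prompt}"

print(answer("Reply to jane.doe@example.com about invoice 42; call me at +1 415 555 0100."))
```

The design point is that redaction runs before anything crosses the trust boundary; swapping the regex stand-in for a small local LLM changes the filter's quality, not the architecture.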
arXiv Detail & Related papers (2025-08-22T19:49:03Z) - Countering Privacy Nihilism [2.6212127510234797]
AI may be presumed capable of inferring "everything from everything." Discarding data categories as normative anchors in privacy and data protection is what we call privacy nihilism. We propose moving away from privacy frameworks that focus solely on data type while neglecting all other factors.
arXiv Detail & Related papers (2025-07-24T09:52:18Z) - Protecting Users From Themselves: Safeguarding Contextual Privacy in Interactions with Conversational Agents [33.26308626066122]
We characterize the notion of contextual privacy for user interactions with LLM-based Conversational Agents (LCAs). It aims to minimize privacy risks by ensuring that users (the senders) disclose only information that is both relevant and necessary for achieving their intended goals. We propose a locally deployable framework that operates between users and LCAs, identifying and reformulating out-of-context information in user prompts.
arXiv Detail & Related papers (2025-02-22T09:05:39Z) - Rescriber: Smaller-LLM-Powered User-Led Data Minimization for LLM-Based Chatbots [2.2447085410328103]
Rescriber is a browser extension that supports user-led data minimization in LLM-based conversational agents. Our studies showed that Rescriber helped users reduce unnecessary disclosure and addressed their privacy concerns. Our findings confirm the viability of smaller-LLM-powered, user-facing, on-device privacy controls.
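A detail that makes placeholder-style minimization usable in a chatbot is restoring the original values into the model's reply on-device. The sketch below assumes that detect-abstract-restore loop; the detection map is hard-coded here, whereas a Rescriber-style tool would obtain it from a smaller on-device LLM, and the restore step is an illustrative assumption rather than a documented Rescriber feature.

```python
# A minimal sketch of detect-abstract-restore data minimization: PII is
# abstracted before the prompt is sent, and re-substituted into the reply
# locally. The detection map is hand-written for illustration.
def minimize(prompt: str, detections: dict[str, str]) -> tuple[str, dict[str, str]]:
    mapping = {}
    for i, (span, kind) in enumerate(detections.items()):
        placeholder = f"[{kind}_{i}]"
        prompt = prompt.replace(span, placeholder)
        mapping[placeholder] = span
    return prompt, mapping

def restore(reply: str, mapping: dict[str, str]) -> str:
    for placeholder, span in mapping.items():
        reply = reply.replace(placeholder, span)
    return reply

prompt, mapping = minimize(
    "Write a cover letter for Ada Lovelace applying to Acme Corp.",
    {"Ada Lovelace": "NAME", "Acme Corp": "ORG"},
)
print(prompt)  # placeholders, not PII, go to the cloud model
print(restore("Dear [ORG_1], ... Sincerely, [NAME_0]", mapping))
```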
arXiv Detail & Related papers (2024-10-10T01:23:16Z) - Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online [5.365346373228897]
Malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations. We analyze the value of a new tool to address this challenge: "personhood credentials" (PHCs). PHCs empower users to demonstrate that they are real people, not AIs, to online services, without disclosing any personal information.
arXiv Detail & Related papers (2024-08-15T02:41:25Z) - Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z) - TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for Video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z) - Human intuition as a defense against attribute inference [4.916067949075847]
Attribute inference has become a major threat to privacy.
One way to tackle this threat is to strategically modify one's publicly available data in order to keep one's private information hidden from attribute inference.
We evaluate people's ability to perform this task, and compare it against algorithms designed for this purpose.
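The defensive task being evaluated can be made concrete as a small search problem: given an attacker's inference model, delete the few public items that most reduce its confidence. The sketch below uses a toy logistic scorer and hypothetical profile items; it illustrates the task, not the paper's method.

```python
# A minimal sketch of strategic self-modification against attribute
# inference: greedily delete public profile items until the attacker's
# confidence in a private attribute drops below a threshold.
import math

WEIGHTS = {  # hypothetical per-item evidence an attacker's model assigns
    "likes_knitting_page": 0.9,
    "follows_tech_news": 0.1,
    "member_retiree_forum": 1.2,
    "posts_about_hiking": 0.2,
}

def attacker_confidence(profile: set[str]) -> float:
    """Toy logistic score for the private attribute 'age over 60'."""
    z = sum(WEIGHTS[i] for i in profile) - 1.0
    return 1 / (1 + math.exp(-z))

def greedy_hide(profile: set[str], budget: int, threshold: float = 0.5) -> set[str]:
    """Delete up to `budget` items, each time the one that helps most."""
    profile = set(profile)
    for _ in range(budget):
        if attacker_confidence(profile) < threshold:
            break
        best = min(profile, key=lambda i: attacker_confidence(profile - {i}))
        profile.remove(best)
    return profile

public = set(WEIGHTS)
print(attacker_confidence(public))    # ~0.80: the attribute is inferable
print(greedy_hide(public, budget=2))  # items kept after the defense
```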
arXiv Detail & Related papers (2023-04-24T06:54:17Z) - Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z) - SPAct: Self-supervised Privacy Preservation for Action Recognition [73.79886509500409]
Existing approaches for mitigating privacy leakage in action recognition require privacy labels along with the action labels from the video dataset.
Recent developments in self-supervised learning (SSL) have unleashed the untapped potential of unlabeled data.
We present a novel training framework which removes privacy information from input video in a self-supervised manner without requiring privacy labels.
arXiv Detail & Related papers (2022-03-29T02:56:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.