Not My Agent, Not My Boundary? Elicitation of Personal Privacy Boundaries in AI-Delegated Information Sharing
- URL: http://arxiv.org/abs/2509.21712v1
- Date: Fri, 26 Sep 2025 00:20:30 GMT
- Title: Not My Agent, Not My Boundary? Elicitation of Personal Privacy Boundaries in AI-Delegated Information Sharing
- Authors: Bingcan Guo, Eryue Xu, Zhiping Zhang, Tianshi Li
- Abstract summary: We present an AI-powered elicitation approach that probes individuals' privacy boundaries through a discriminative task. Our findings highlight the importance of situating privacy preference elicitation within real-world data flows.
- Score: 4.689683234869851
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Aligning AI systems with human privacy preferences requires understanding individuals' nuanced disclosure behaviors beyond general norms. Yet eliciting such boundaries remains challenging due to the context-dependent nature of privacy decisions and the complex trade-offs involved. We present an AI-powered elicitation approach that probes individuals' privacy boundaries through a discriminative task. We conducted a between-subjects study that systematically varied communication roles and delegation conditions, resulting in 1,681 boundary specifications from 169 participants for 61 scenarios. We examined how these contextual factors and individual differences influence the boundary specification. Quantitative results show that communication roles influence individuals' acceptance of detailed and identifiable disclosure; AI delegation and individuals' need for privacy heighten sensitivity to disclosed identifiers; and AI delegation results in less consensus across individuals. Our findings highlight the importance of situating privacy preference elicitation within real-world data flows. We advocate using nuanced privacy boundaries as an alignment goal for future AI systems.
Related papers
- From Fragmentation to Integration: Exploring the Design Space of AI Agents for Human-as-the-Unit Privacy Management [3.23081177224515]
We investigate users' cross-context privacy challenges through 12 semi-structured interviews. Results reveal that people rely on ad hoc manual strategies while lacking comprehensive privacy controls. To explore solutions, we generated nine AI agent concepts and evaluated them via a speed-dating survey with 116 US participants.
arXiv Detail & Related papers (2026-02-04T20:12:37Z) - "We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe [56.1653658714305]
We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses. We find that there is little consensus among AI developers on the relative ranking of privacy risks. While AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption.
arXiv Detail & Related papers (2025-10-01T13:51:33Z) - Countering Privacy Nihilism [2.6212127510234797]
AI may be presumed capable of inferring "everything from everything." Discarding data categories as normative anchors in privacy and data protection is what we call privacy nihilism. We propose moving away from privacy frameworks that focus solely on data type, neglecting all other factors.
arXiv Detail & Related papers (2025-07-24T09:52:18Z) - Privacy Risks and Preservation Methods in Explainable Artificial Intelligence: A Scoping Review [1.4019930224097232]
We conduct a scoping review of existing literature to elicit details on the conflict between privacy and explainability. We extracted 57 articles from 1,943 studies published from January 2019 to December 2024. We categorize the privacy risks and preservation methods in XAI and propose the characteristics of privacy-preserving explanations.
arXiv Detail & Related papers (2025-05-05T17:53:28Z) - Identifying Privacy Personas [27.301741710016223]
Privacy personas capture the differences in user segments with respect to one's knowledge, behavioural patterns, level of self-efficacy, and perception of the importance of privacy protection.
While various privacy personas have been derived in the literature, they group together people who differ from each other in terms of important attributes.
We propose eight personas that we derive by combining qualitative and quantitative analysis of the responses to an interactive educational questionnaire.
arXiv Detail & Related papers (2024-10-17T20:49:46Z) - AI Delegates with a Dual Focus: Ensuring Privacy and Strategic Self-Disclosure [40.47039082007319]
Large language model (LLM)-based AI delegates are increasingly utilized to act on behalf of users. We propose a novel AI delegate system that enables privacy-conscious self-disclosure. Our user study demonstrates that the proposed AI delegate strategically protects privacy, pioneering its use in diverse and dynamic social interactions.
arXiv Detail & Related papers (2024-09-26T08:45:15Z) - PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories. We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds. State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
arXiv Detail & Related papers (2024-08-29T17:58:38Z) - Differentially Private Distributed Inference [2.4401219403555814]
Healthcare centers collaborating on clinical trials must balance knowledge sharing with safeguarding sensitive patient data. We address this challenge by using differential privacy (DP) to control information leakage. Agents update belief statistics via log-linear rules, and DP noise provides plausible deniability and rigorous performance guarantees.
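As background for the DP noise mentioned in this entry, the standard building block is the Laplace mechanism, which releases a statistic with noise scaled to its sensitivity over the privacy budget ε. The sketch below is a generic, minimal illustration of that mechanism only; the names `laplace_mechanism`, `sensitivity`, and `epsilon` are illustrative, and the paper's actual log-linear belief-update scheme is not reproduced here.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` under epsilon-DP by adding Laplace(0, sensitivity/epsilon) noise."""
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon  # larger budget epsilon -> less noise
    return value + rng.laplace(0.0, scale)

# Example: a count query has sensitivity 1 (one patient changes the count by at most 1)
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The same seeded generator makes the release reproducible for testing; in deployment the noise must of course be drawn fresh for each release.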
arXiv Detail & Related papers (2024-02-13T01:38:01Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z) - Decision Making with Differential Privacy under a Fairness Lens [65.16089054531395]
The U.S. Census Bureau releases data sets and statistics about groups of individuals that are used as input to a number of critical decision processes.
To conform to privacy and confidentiality requirements, these agencies are often required to release privacy-preserving versions of the data.
This paper studies the release of differentially private data sets and analyzes their impact on some critical resource allocation tasks under a fairness perspective.
arXiv Detail & Related papers (2021-05-16T21:04:19Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.