Privacy Reasoning in Ambiguous Contexts
- URL: http://arxiv.org/abs/2506.12241v1
- Date: Fri, 13 Jun 2025 21:42:22 GMT
- Title: Privacy Reasoning in Ambiguous Contexts
- Authors: Ren Yi, Octavian Suciu, Adria Gascon, Sarah Meiklejohn, Eugene Bagdasarian, Marco Gruteser
- Abstract summary: We study the ability of language models to reason about appropriate information disclosure. We identify context ambiguity as a crucial barrier for high performance in privacy assessments.
- Score: 4.766316946591749
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We study the ability of language models to reason about appropriate information disclosure - a central aspect of the evolving field of agentic privacy. Whereas previous works have focused on evaluating a model's ability to align with human decisions, we examine the role of ambiguity and missing context on model performance when making information-sharing decisions. We identify context ambiguity as a crucial barrier for high performance in privacy assessments. By designing Camber, a framework for context disambiguation, we show that model-generated decision rationales can reveal ambiguities and that systematically disambiguating context based on these rationales leads to significant accuracy improvements (up to 13.3% in precision and up to 22.3% in recall) as well as reductions in prompt sensitivity. Overall, our results indicate that approaches for context disambiguation are a promising way forward to enhance agentic privacy reasoning.
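As a minimal sketch of the disambiguation loop the abstract describes, one could have a model return a sharing decision plus a rationale, scan the rationale for ambiguity cues, ask for the most important missing detail, and re-query with the enriched context. This is an illustrative assumption, not Camber's actual interface: the `ask_model` helper, the prompts, and the cue list below are all hypothetical.

```python
# Hypothetical sketch of rationale-driven context disambiguation, loosely
# following the idea in the abstract above; not the paper's Camber API.

AMBIGUITY_CUES = ("unclear", "depends", "not specified", "ambiguous", "missing")


def ask_model(prompt: str) -> str:
    """Placeholder for any chat-completion call (OpenAI, local LLM, etc.)."""
    raise NotImplementedError("Plug in an LLM client here.")


def decide_with_rationale(scenario: str) -> tuple[str, str]:
    """Ask for a share/withhold decision plus a free-text rationale."""
    reply = ask_model(
        "Decide whether the information below should be shared.\n"
        f"Scenario: {scenario}\n"
        "Answer SHARE or WITHHOLD on the first line, then explain your reasoning."
    )
    decision, _, rationale = reply.partition("\n")
    return decision.strip(), rationale.strip()


def disambiguate(scenario: str, rationale: str) -> str:
    """If the rationale signals missing context, ask for the missing detail and
    append it to the scenario as an explicit assumption."""
    if not any(cue in rationale.lower() for cue in AMBIGUITY_CUES):
        return scenario
    clarification = ask_model(
        "The rationale below says some context is missing or ambiguous.\n"
        f"Rationale: {rationale}\n"
        "State the single most important missing detail as a short assumption."
    )
    return f"{scenario}\nAdditional context: {clarification.strip()}"


def privacy_decision(scenario: str, max_rounds: int = 2) -> str:
    """Iterate decide -> detect ambiguity -> enrich context, then decide once more."""
    for _ in range(max_rounds):
        decision, rationale = decide_with_rationale(scenario)
        enriched = disambiguate(scenario, rationale)
        if enriched == scenario:  # no ambiguity cues found; accept the decision
            return decision
        scenario = enriched
    return decide_with_rationale(scenario)[0]
```

In this toy version, ambiguity is detected by keyword matching on the rationale; the paper's framework presumably uses the rationales themselves more directly, so treat the cue list purely as a stand-in.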
Related papers
- Preference Learning for AI Alignment: a Causal Perspective [55.2480439325792]
We frame this problem in a causal paradigm, providing the rich toolbox of causality to identify persistent challenges. Inheriting from the literature of causal inference, we identify key assumptions necessary for reliable generalisation. We illustrate failure modes of naive reward models and demonstrate how causally-inspired approaches can improve model robustness.
arXiv Detail & Related papers (2025-06-06T10:45:42Z) - A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role within the broad scope of Deep Learning-based Computer Vision. Uncertainty quantification has been extensively studied within this context, enabling the expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision-making.
arXiv Detail & Related papers (2024-11-25T13:26:09Z) - DePrompt: Desensitization and Evaluation of Personal Identifiable Information in Large Language Model Prompts [11.883785681042593]
DePrompt is a desensitization protection and effectiveness evaluation framework for prompts.
We integrate contextual attributes to define privacy types, achieving high-precision PII entity identification.
Our framework is adaptable to prompts and can be extended to text usability-dependent scenarios.
arXiv Detail & Related papers (2024-08-16T02:38:25Z) - Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z) - Robust Stance Detection: Understanding Public Perceptions in Social Media [15.460495567765362]
Stance detection identifies precise positions relative to a well-defined topic.
Traditional stance detection models often lag in performance when applied to new domains and topics.
A solution we present in this paper combines counterfactual data augmentation with contrastive learning to enhance the robustness of stance detection.
arXiv Detail & Related papers (2023-09-26T18:19:51Z) - Ambiguity Meets Uncertainty: Investigating Uncertainty Estimation for Word Sense Disambiguation [5.55197751179213]
Existing supervised methods treat WSD as a classification task and have achieved remarkable performance.
This paper extensively studies uncertainty estimation (UE) on the benchmark designed for WSD.
We examine the capability of capturing data and model uncertainties by the model with the selected UE score on well-designed test scenarios and discover that the model reflects data uncertainty satisfactorily but underestimates model uncertainty.
arXiv Detail & Related papers (2023-05-22T15:18:15Z) - Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should account for several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - On the Fundamental Trade-offs in Learning Invariant Representations [7.868449549351487]
We identify and determine two fundamental trade-offs between utility and semantic dependence induced by the statistical dependencies between the data and its corresponding target and semantic attributes.
We numerically quantify the trade-offs on representative problems and compare to the solutions achieved by baseline representation learning algorithms.
arXiv Detail & Related papers (2021-09-08T01:26:46Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z) - Inverse Active Sensing: Modeling and Understanding Timely Decision-Making [111.07204912245841]
We develop a framework for the general setting of evidence-based decision-making under endogenous, context-dependent time pressure.
We demonstrate how it enables modeling intuitive notions of surprise, suspense, and optimality in decision strategies.
arXiv Detail & Related papers (2020-06-25T02:30:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.