When PETs misbehave: A Contextual Integrity analysis
- URL: http://arxiv.org/abs/2312.02509v1
- Date: Tue, 5 Dec 2023 05:27:43 GMT
- Title: When PETs misbehave: A Contextual Integrity analysis
- Authors: Ero Balsa and Yan Shvartzshnaider
- Abstract summary: We use the theory of Contextual Integrity to explain how privacy technologies may be misused to erode privacy.
We consider three PETs and scenarios: anonymous credentials for age verification, client-side scanning for illegal content detection, and homomorphic encryption for machine learning model training.
- Score: 0.7397067779113841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy enhancing technologies, or PETs, have been hailed as a promising
means to protect privacy without compromising on the functionality of digital
services. At the same time, and partly because they may encode a narrow
conceptualization of privacy as confidentiality that is popular among
policymakers, engineers and the public, PETs risk being co-opted to promote
privacy-invasive practices. In this paper, we resort to the theory of
Contextual Integrity to explain how privacy technologies may be misused to
erode privacy. To illustrate, we consider three PETs and scenarios: anonymous
credentials for age verification, client-side scanning for illegal content
detection, and homomorphic encryption for machine learning model training.
Using the theory of Contextual Integrity, we reason about the notion of privacy
that these PETs encode, and show that CI enables us to identify and reason
about the limitations of PETs and their misuse, which may ultimately lead to
privacy violations.
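Contextual Integrity models an information flow with five parameters: sender, recipient, subject, attribute, and transmission principle; a flow is appropriate when it conforms to the entrenched norms of its context. As a rough illustration of how such a norm check can be encoded (a minimal sketch with invented norms, not an artifact of the paper), consider the age-verification scenario:

```python
# A minimal sketch of a Contextual Integrity (CI) flow check using CI's
# five flow parameters. The norms below are invented for illustration;
# they are not artifacts of the paper.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    recipient: str
    subject: str
    attribute: str
    transmission_principle: str

# An entrenched norm for age verification: the website may learn only an
# "over 18" predicate about the user, and only with the user's consent.
NORMS = {
    Flow("user", "website", "user", "is_over_18", "with consent"),
}

def violates_ci(flow: Flow) -> bool:
    """A flow that matches no entrenched norm is a potential CI violation."""
    return flow not in NORMS

# An anonymous credential that reveals only the age predicate conforms:
ok = Flow("user", "website", "user", "is_over_18", "with consent")
# The same PET deployed to demand a full birthdate as a condition of
# access changes the attribute and the transmission principle:
bad = Flow("user", "website", "user", "date_of_birth", "required for access")

print(violates_ci(ok))   # False
print(violates_ci(bad))  # True: confidentiality alone does not save it
```

The second flow may keep everything confidential except what the user is pushed into revealing, which is exactly the kind of misuse a confidentiality-only reading of the PET would miss.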
Related papers
- Smoke Screens and Scapegoats: The Reality of General Data Protection Regulation Compliance -- Privacy and Ethics in the Case of Replika AI [1.325665193924634]
This paper takes a critical approach to examining the intricacies of privacy and ethics issues within AI companion services.
We analyze articles from public media about the company and its practices to gain insight into the trustworthiness of information provided in the policy.
The results reveal that, despite privacy notices, data collection practices may harvest personal data without users' full awareness.
arXiv Detail & Related papers (2024-11-07T07:36:19Z)
- PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action [54.11479432110771]
PrivacyLens is a novel framework designed to extend privacy-sensitive seeds into expressive vignettes and further into agent trajectories.
We instantiate PrivacyLens with a collection of privacy norms grounded in privacy literature and crowdsourced seeds.
State-of-the-art LMs, like GPT-4 and Llama-3-70B, leak sensitive information in 25.68% and 38.69% of cases, even when prompted with privacy-enhancing instructions.
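A leakage figure like the 25.68% and 38.69% above is, at bottom, the fraction of probed trajectories whose output exposes the sensitive seed. A hedged sketch of such an evaluation loop (the cases and the model stub are placeholders, not PrivacyLens's actual data or API):

```python
# Hypothetical leakage-rate evaluation in the spirit of PrivacyLens: count
# how often a model's output exposes the sensitive item a vignette was
# built around. The cases and the model stub are placeholders, not the
# framework's real data or API.
cases = [
    {"prompt": "Draft a reply to Bob summarizing the team meeting.",
     "secret": "Alice's surgery"},
    {"prompt": "Post a public update about the project.",
     "secret": "internal launch date 2025-03-01"},
]

def model(prompt: str) -> str:
    # Stand-in for a call to an LM agent (e.g., GPT-4 or Llama-3-70B).
    return "Meeting covered the roadmap; Alice's surgery is next week."

leaks = sum(case["secret"].lower() in model(case["prompt"]).lower()
            for case in cases)
print(f"leakage rate: {leaks / len(cases):.2%}")  # 50.00% for this stub
```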
arXiv Detail & Related papers (2024-08-29T17:58:38Z)
- Privacy Checklist: Privacy Violation Detection Grounding on Contextual Integrity Theory [43.12744258781724]
We formulate the privacy issue as a reasoning problem rather than simple pattern matching.
We develop the first comprehensive checklist that covers social identities, private attributes, and existing privacy regulations.
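The difference between pattern matching and reasoning can be made concrete: a regex flags a private attribute wherever it appears, while a checklist grounded in Contextual Integrity ties the verdict to who shares it with whom. A small illustration (the checklist entries are invented, not the paper's):

```python
import re

TEXT = "Patient SSN 123-45-6789 shared by the clinic with an ad broker."

# Pattern matching: flags the SSN string wherever it occurs, blind to
# who shares it with whom.
pattern_hit = bool(re.search(r"\d{3}-\d{2}-\d{4}", TEXT))

# Checklist-style reasoning (entries invented for illustration): the same
# attribute can be permitted for one (sender, recipient) pair and a
# violation for another.
CHECKLIST = {
    ("ssn", "clinic", "insurer"): "permitted for billing",
    ("ssn", "clinic", "ad broker"): "violation",
}
verdict = CHECKLIST.get(("ssn", "clinic", "ad broker"), "unlisted flow")

print(pattern_hit)  # True, but context-blind
print(verdict)      # "violation", grounded in sender and recipient
```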
arXiv Detail & Related papers (2024-08-19T14:48:04Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric, the Privacy Loss-Input Susceptibility (PLIS), which allows one to apportion a subject's privacy loss to their input attributes.
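For intuition: in DP-SGD each subject contributes a clipped per-example gradient, so the per-subject gradient norm is a natural handle on individual privacy loss. The following is a generic numerical sketch of that quantity for a least-squares model, not the paper's exact PLIS definition:

```python
# Per-subject gradient norms for a least-squares model: the quantity that
# per-example clipping bounds in DP-SGD. A generic illustration, not the
# paper's exact PLIS metric.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))   # 8 subjects, 3 input attributes each
y = rng.normal(size=8)
w = np.zeros(3)               # current model weights

residual = X @ w - y                       # shape (8,)
per_subject_grads = residual[:, None] * X  # grad of 0.5 * (x.w - y)^2 wrt w
norms = np.linalg.norm(per_subject_grads, axis=1)

# Subjects with larger norms dominate the clipped update and, intuitively,
# incur more privacy loss; attribute-wise magnitudes hint at which input
# attributes drive that loss.
print(norms)
print(np.abs(per_subject_grads).mean(axis=0))  # per-attribute contribution
```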
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Mitigating Sovereign Data Exchange Challenges: A Mapping to Apply Privacy- and Authenticity-Enhancing Technologies [67.34625604583208]
Authenticity-Enhancing Technologies (AETs) and Privacy-Enhancing Technologies (PETs) are considered means to engage in Sovereign Data Exchange (SDE).
PETs and AETs are technically complex, which impedes their adoption.
This study empirically constructs a challenge-oriented technology mapping.
arXiv Detail & Related papers (2022-06-20T08:16:42Z)
- The Evolving Path of "the Right to Be Left Alone" - When Privacy Meets Technology [0.0]
This paper proposes a novel vision of the privacy ecosystem, introducing privacy dimensions, the related user expectations, privacy violations, and the factors driving change.
We believe that promising approaches to tackle the privacy challenges move in two directions: (i) identification of effective privacy metrics; and (ii) adoption of formal tools to design privacy-compliant applications.
arXiv Detail & Related papers (2021-11-24T11:27:55Z)
- Equity and Privacy: More Than Just a Tradeoff [10.545898004301323]
Recent work has shown that privacy preserving data publishing can introduce different levels of utility across different population groups.
Will marginalized populations see disproportionately less utility from privacy technology?
If there is an inequity, how can we address it?
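One concrete way the inequity arises: with Laplace noise on counts, the absolute noise scale is fixed by epsilon and sensitivity, so relative error is far larger for small groups. A quick numerical sketch (group sizes are made up):

```python
# Same epsilon, same absolute Laplace noise, very different relative error:
# small (often marginalized) groups pay more utility for the same privacy.
import numpy as np

rng = np.random.default_rng(1)
epsilon, sensitivity = 0.5, 1.0
counts = {"majority group": 10_000, "small group": 50}  # made-up sizes

for name, true_count in counts.items():
    noise = rng.laplace(scale=sensitivity / epsilon, size=10_000)
    rel_err = np.mean(np.abs(noise)) / true_count
    print(f"{name}: mean relative error ~ {rel_err:.2%}")
# Expected: ~0.02% for the majority group vs ~4.00% for the small group.
```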
arXiv Detail & Related papers (2021-11-08T17:39:32Z)
- Usage Patterns of Privacy-Enhancing Technologies [6.09170287691728]
This paper contributes to privacy research by eliciting use and perception of use across 43 privacy methods.
Non-technology methods are among the most used methods in the US, the UK and Germany.
This research provides a broad understanding of use and perceptions across a collection of PETs, and can lead to future research for scaling use of PETs.
arXiv Detail & Related papers (2020-09-22T02:17:37Z)
- A vision for global privacy bridges: Technical and legal measures for international data markets [77.34726150561087]
Despite data protection laws and an acknowledged right to privacy, trading personal information has become a business equated with "trading oil".
An open conflict is arising between business demands for data and a desire for privacy.
We propose and test a vision of a personal information market with privacy.
arXiv Detail & Related papers (2020-05-13T13:55:50Z)