Technocracy, pseudoscience and performative compliance: the risks of
privacy risk assessments. Lessons from NIST's Privacy Risk Assessment
Methodology
- URL: http://arxiv.org/abs/2310.05936v1
- Date: Thu, 24 Aug 2023 01:32:35 GMT
- Title: Technocracy, pseudoscience and performative compliance: the risks of
privacy risk assessments. Lessons from NIST's Privacy Risk Assessment
Methodology
- Authors: Ero Balsa
- Abstract summary: Privacy risk assessments have been touted as an objective, principled way to encourage organizations to implement privacy-by-design.
Existing guidelines and methods remain vague, and there is little empirical evidence on privacy harms.
We highlight the limitations and pitfalls of what is essentially a utilitarian and technocratic approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Privacy risk assessments have been touted as an objective, principled way to
encourage organizations to implement privacy-by-design. They are central to a
new regulatory model of collaborative governance, as embodied by the GDPR.
However, existing guidelines and methods remain vague, and there is little
empirical evidence on privacy harms. In this paper we conduct a close analysis
of US NIST's Privacy Risk Assessment Methodology, highlighting multiple sites
of discretion that create countless opportunities for adversarial organizations
to engage in performative compliance. Our analysis shows that the premises on
which the success of privacy risk assessments depends do not hold, particularly
in regard to organizations' incentives and regulators' auditing capabilities. We
highlight the limitations and pitfalls of what is essentially a utilitarian and
technocratic approach, leading us to discuss alternatives and a realignment of
our policy and research objectives.
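To make concrete what this kind of assessment asks of organizations, here is a minimal, hypothetical sketch of the likelihood-times-impact scoring that worksheet-based methodologies such as NIST's PRAM rely on. The data actions, likelihood estimates, and impact scores below are illustrative assumptions, not values prescribed by NIST; every input is an analyst's discretionary estimate, which is precisely the kind of discretion the paper identifies.

```python
# Hypothetical sketch of a PRAM-style privacy risk worksheet.
# All data actions, likelihoods, and impacts are illustrative
# assumptions, not values prescribed by NIST's methodology.

from dataclasses import dataclass

@dataclass
class DataAction:
    name: str
    likelihood: float  # analyst-estimated probability of a problematic outcome (0-1)
    impact: float      # analyst-assigned severity on an arbitrary 1-10 scale

    @property
    def risk(self) -> float:
        # Likelihood x impact: the standard utilitarian risk formula.
        # Both inputs are discretionary estimates, which is where
        # performative compliance can creep in.
        return self.likelihood * self.impact

actions = [
    DataAction("retain raw location traces", likelihood=0.30, impact=8.0),
    DataAction("share hashed emails with ad partner", likelihood=0.15, impact=6.0),
    DataAction("log support-chat transcripts", likelihood=0.50, impact=3.0),
]

for a in sorted(actions, key=lambda a: a.risk, reverse=True):
    print(f"{a.name}: risk = {a.risk:.2f}")
```

Nothing in such a worksheet constrains the estimates: lowering a single likelihood from 0.30 to 0.10 reorders the risk ranking with no external evidence required, which illustrates how the sites of discretion the paper analyzes can be exploited.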
Related papers
- How Privacy-Savvy Are Large Language Models? A Case Study on Compliance and Privacy Technical Review [15.15468770348023]
We evaluate large language models' performance in privacy-related tasks such as privacy information extraction (PIE), legal and regulatory key point detection (KPD), and question answering (QA).
Through an empirical assessment, we investigate the capacity of several prominent LLMs, including BERT, GPT-3.5, GPT-4, and custom models, in executing privacy compliance checks and technical privacy reviews.
While LLMs show promise in automating privacy reviews and identifying regulatory discrepancies, significant gaps persist in their ability to fully comply with evolving legal standards.
arXiv Detail & Related papers (2024-09-04T01:51:37Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z)
- GoldCoin: Grounding Large Language Models in Privacy Laws via Contextual Integrity Theory [44.297102658873726]
Existing research studies privacy by exploring various privacy attacks, defenses, and evaluations within narrowly predefined patterns.
We introduce a novel framework, GoldCoin, designed to efficiently ground LLMs in privacy laws for the judicial assessment of privacy violations.
Our framework leverages the theory of contextual integrity as a bridge, creating numerous synthetic scenarios grounded in relevant privacy statutes.
arXiv Detail & Related papers (2024-06-17T02:27:32Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting such research or releasing their findings will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- On the Societal Impact of Open Foundation Models [93.67389739906561]
We focus on open foundation models, defined here as those with broadly available model weights.
We identify five distinctive properties of open foundation models that lead to both their benefits and risks.
arXiv Detail & Related papers (2024-02-27T16:49:53Z)
- ZETAR: Modeling and Computational Design of Strategic and Adaptive Compliance Policies [19.9521399287127]
We develop ZETAR, a zero-trust audit and recommendation framework, to provide a quantitative approach to model insiders' incentives.
We identify a policy separability principle and a set convexity property, which enable finite-step algorithms to efficiently learn the Completely Trustworthy (CT) policy set.
Our results show that ZETAR adapts well to insiders with different risk and compliance attitudes and significantly improves compliance.
arXiv Detail & Related papers (2022-04-05T15:46:31Z)
- Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients [54.98496284653234]
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
We solve this problem by introducing a regularizer based on the mutual information between the sensitive state and the actions.
We develop a model-based estimator for optimization of privacy-constrained policies.
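The objective here is roughly to maximize expected return minus a penalty lambda * I(S_sensitive; A). Below is a minimal sketch of that idea, assuming a discrete action space and approximating the mutual-information term with a variational classifier rather than the paper's model-based estimator; the module shapes, names, and the LAMBDA weight are illustrative assumptions.

```python
# Minimal sketch (assumed interfaces, not the paper's code): REINFORCE with
# the per-step return penalized by an estimate of how much the chosen action
# reveals about a sensitive state variable.

import torch
import torch.nn as nn
import torch.nn.functional as F

N_ACTIONS, N_SENSITIVE, LAMBDA = 2, 2, 0.1  # LAMBDA: assumed trade-off weight

policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, N_ACTIONS))
q_model = nn.Sequential(nn.Linear(N_ACTIONS, 32), nn.Tanh(), nn.Linear(32, N_SENSITIVE))
opt_pi = torch.optim.Adam(policy.parameters(), lr=1e-3)
opt_q = torch.optim.Adam(q_model.parameters(), lr=1e-3)

def update(states, sensitive, actions, returns):
    onehot = F.one_hot(actions, N_ACTIONS).float()  # actions: long tensor of action indices

    # 1) Train the variational model q(s_sensitive | a) to recover the
    #    sensitive state from actions (tightens the MI lower bound).
    q_loss = F.cross_entropy(q_model(onehot), sensitive)
    opt_q.zero_grad(); q_loss.backward(); opt_q.step()

    # 2) Penalize the return by log q(s|a): informative actions have larger
    #    (less negative) log q, so they get a lower effective reward,
    #    approximating a -LAMBDA * I(S_sensitive; A) term.
    with torch.no_grad():
        logq = F.log_softmax(q_model(onehot), dim=-1)
        leak = logq.gather(1, sensitive.unsqueeze(1)).squeeze(1)
    adjusted = returns - LAMBDA * leak

    # 3) REINFORCE step on the penalized return.
    logp = F.log_softmax(policy(states), dim=-1)
    logp_a = logp.gather(1, actions.unsqueeze(1)).squeeze(1)
    pi_loss = -(logp_a * adjusted).mean()
    opt_pi.zero_grad(); pi_loss.backward(); opt_pi.step()
```

The better the classifier recovers the sensitive state from actions, the larger the penalty subtracted from the return, so the policy is pushed toward actions that are uninformative about the sensitive state.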
arXiv Detail & Related papers (2020-12-30T03:22:35Z)
- PCAL: A Privacy-preserving Intelligent Credit Risk Modeling Framework Based on Adversarial Learning [111.19576084222345]
This paper proposes a framework for Privacy-preserving Credit risk modeling based on Adversarial Learning (PCAL).
PCAL aims to mask the private information inside the original dataset, while maintaining the important utility information for the target prediction task performance.
Results indicate that PCAL can learn an effective representation of user data that is free of private information, providing a solid foundation for privacy-preserving machine learning in credit risk analysis.
arXiv Detail & Related papers (2020-10-06T07:04:59Z)
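As a rough illustration of the adversarial masking idea in the PCAL entry above: an encoder is trained jointly against a utility head, which must still predict the target task, and an adversary, which tries to recover a private attribute from the learned representation. This is a minimal sketch under assumed dimensions, heads, and trade-off weight BETA, not the authors' architecture.

```python
# Minimal sketch of adversarial privacy-utility training in the spirit of
# PCAL (assumed architecture and loss weights; not the authors' code).

import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_Z, BETA = 16, 8, 1.0  # BETA: assumed privacy-utility trade-off weight

encoder = nn.Sequential(nn.Linear(D_IN, 32), nn.ReLU(), nn.Linear(32, D_Z))
utility_head = nn.Linear(D_Z, 2)  # target task, e.g. default / no default
adversary = nn.Linear(D_Z, 2)     # tries to recover a private attribute from z

# opt_main must cover only the encoder and utility head, never the adversary.
opt_main = torch.optim.Adam(list(encoder.parameters()) + list(utility_head.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)

def step(x, y_task, y_private):
    z = encoder(x)

    # Adversary update: learn to recover the private attribute from z.
    adv_loss = F.cross_entropy(adversary(z.detach()), y_private)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Encoder + utility update: keep task accuracy while maximizing the
    # adversary's loss, i.e. masking private information in z.
    task_loss = F.cross_entropy(utility_head(z), y_task)
    mask_loss = -F.cross_entropy(adversary(z), y_private)
    main_loss = task_loss + BETA * mask_loss
    opt_main.zero_grad(); main_loss.backward(); opt_main.step()
```

Raising BETA masks the private attribute more aggressively at the cost of task accuracy; the trade-off has to be tuned per dataset.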