Achieving Transparency Report Privacy in Linear Time
- URL: http://arxiv.org/abs/2104.00137v1
- Date: Wed, 31 Mar 2021 22:05:10 GMT
- Title: Achieving Transparency Report Privacy in Linear Time
- Authors: Chien-Lun Chen, Leana Golubchik, Ranjan Pal
- Abstract summary: We first investigate and demonstrate potential privacy hazards brought on by the deployment of transparency and fairness measures in released ATRs.
We then propose a linear-time optimal-privacy scheme, built upon standard linear fractional programming (LFP) theory, for announcing ATRs.
We quantify the privacy-utility trade-offs induced by our scheme, and analyze the impact of privacy perturbation on fairness measures in ATRs.
- Score: 1.9981375888949475
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: An accountable algorithmic transparency report (ATR) should ideally investigate (a) the transparency of the underlying algorithm and (b) the fairness of the algorithmic decisions, while at the same time preserving data subjects' privacy. However, a provably formal study of the impact on data subjects' privacy caused by the utility of releasing an ATR (one that investigates transparency and fairness) is yet to be addressed in the literature. The far-reaching benefit of such a study lies in the methodical characterization of privacy-utility trade-offs for the public release of ATRs, and in their consequential application-specific impact on the dimensions of society, politics, and economics. In this paper, we first investigate and demonstrate potential privacy hazards brought on by the deployment of transparency and fairness measures in released ATRs. To preserve data subjects' privacy, we then propose a linear-time optimal-privacy scheme, built upon standard linear fractional programming (LFP) theory, for announcing ATRs, subject to constraints controlling the tolerance of privacy perturbation on the utility of transparency schemes. Subsequently, we quantify the privacy-utility trade-offs induced by our scheme and analyze the impact of privacy perturbation on fairness measures in ATRs. To the best of our knowledge, this is the first analytical work that simultaneously addresses trade-offs among the triad of privacy, utility, and fairness, applicable to algorithmic transparency reports.
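The abstract does not spell out the LFP formulation, but the standard machinery it builds on is easy to illustrate: the Charnes-Cooper transformation below reduces a generic linear fractional program to an ordinary LP. This is only a sketch of the textbook reduction; the paper's linear-time algorithm exploits additional structure beyond it, and all matrices and vectors here are illustrative placeholders, not the paper's actual ATR model.

```python
# Hedged sketch: Charnes-Cooper transformation of a linear fractional
# program (LFP) into a linear program (LP). Problem data are placeholders.
#
#   maximize (c @ x + alpha) / (d @ x + beta)   s.t. A @ x <= b, x >= 0,
#   assuming d @ x + beta > 0 on the feasible set.
#
# Substituting y = t * x with t = 1 / (d @ x + beta) yields the LP:
#   maximize c @ y + alpha * t
#   s.t. A @ y - b * t <= 0,  d @ y + beta * t == 1,  y >= 0, t >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0]); alpha = 0.0   # numerator coefficients
d = np.array([0.5, 1.0]); beta = 1.0    # denominator (positive on feasible set)
A = np.array([[1.0, 1.0]]); b = np.array([4.0])

n = len(c)
# Decision vector z = [y_1..y_n, t]; linprog minimizes, so negate the objective.
obj = -np.append(c, alpha)
A_ub = np.hstack([A, -b.reshape(-1, 1)])     # A y - b t <= 0
b_ub = np.zeros(A.shape[0])
A_eq = np.append(d, beta).reshape(1, -1)     # d y + beta t == 1
b_eq = np.array([1.0])

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
y, t = res.x[:n], res.x[n]
x_opt = y / t                                # recover the LFP solution
print("optimal x:", x_opt,
      "objective:", (c @ x_opt + alpha) / (d @ x_opt + beta))
```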
Related papers
- Data Obfuscation through Latent Space Projection (LSP) for Privacy-Preserving AI Governance: Case Studies in Medical Diagnosis and Finance Fraud Detection [0.0]
This paper introduces Data Obfuscation through Latent Space Projection (LSP), a novel technique aimed at enhancing AI governance and ensuring Responsible AI compliance.
LSP uses machine learning to project sensitive data into a latent space, effectively obfuscating it while preserving essential features for model training and inference.
We validate LSP's effectiveness through experiments on benchmark datasets and two real-world case studies: healthcare cancer diagnosis and financial fraud analysis.
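The LSP projection itself is learned; as a rough, hypothetical stand-in, the sketch below releases linear (PCA) latent codes in place of raw records, which conveys the general idea of obfuscating individual features while keeping dominant structure.

```python
# Rough stand-in for latent-space obfuscation (the actual LSP method learns
# its projection; PCA here is only an illustrative linear analogue).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))      # stand-in for sensitive records

# Project onto the top-k principal directions and release only the codes.
k = 3
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                   # latent codes: released
print("released shape:", Z.shape)   # (200, 3); raw features are withheld
```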
arXiv Detail & Related papers (2024-10-22T22:31:03Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
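For orientation, the canonical example in the $f$-DP framework, the Gaussian mechanism's trade-off curve, has the closed form below; this is a standard $f$-DP fact, not this paper's convergence analysis.

```python
# Standard f-DP reference point (not this paper's bound): a Gaussian
# mechanism with sensitivity-to-noise ratio mu is mu-GDP, with trade-off
# curve G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu) between the adversary's
# type-I error alpha and the best achievable type-II error.
import numpy as np
from scipy.stats import norm

def gaussian_tradeoff(alpha, mu):
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

alphas = np.linspace(0.01, 0.99, 5)
print([round(gaussian_tradeoff(a, mu=1.0), 3) for a in alphas])
```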
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Synthetic Data: Revisiting the Privacy-Utility Trade-off [4.832355454351479]
An article stated that synthetic data does not provide a better trade-off between privacy and utility than traditional anonymization techniques.
The article also claims to have identified a breach in the differential privacy guarantees provided by PATEGAN and PrivBayes.
We analyzed the implementation of the privacy game described in the article and found that it operated in a highly specialized and constrained environment.
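The article's exact game is not reproduced here; the skeleton below shows only the common membership-inference template such privacy games instantiate, with a placeholder generator and adversary.

```python
# Generic skeleton of a membership-inference privacy game (the article's
# specific game setup is not reproduced; this is only the common template).
import numpy as np

rng = np.random.default_rng(1)

def generate_synthetic(train):
    # Placeholder generator: a real study would fit e.g. PATEGAN/PrivBayes.
    return train + rng.normal(scale=1.0, size=train.shape)

def adversary_guess(synthetic, target):
    # Distance-based guess: "member" if the target looks close to the release.
    return float(np.min(np.linalg.norm(synthetic - target, axis=1)) < 2.0)

wins, trials = 0, 1000
for _ in range(trials):
    pool = rng.normal(size=(64, 5))
    target = pool[0]
    member = rng.integers(0, 2)             # challenger's secret bit
    train = pool if member else pool[1:]    # include target iff member
    guess = adversary_guess(generate_synthetic(train), target)
    wins += (guess == member)
print("adversary advantage:", wins / trials - 0.5)
```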
arXiv Detail & Related papers (2024-07-09T14:48:43Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
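A minimal sketch of the two named ingredients, stochastic ternary compression and a coordinate-wise majority vote, follows; the real TernaryVote mechanism calibrates its randomness to obtain the stated DP and Byzantine guarantees, which this sketch omits.

```python
# Minimal sketch: stochastic ternary gradient compression plus a
# coordinate-wise majority vote at the server (DP calibration omitted).
import numpy as np

rng = np.random.default_rng(2)

def ternarize(g):
    # Unbiased stochastic rounding of each coordinate to {-1, 0, +1}.
    s = np.max(np.abs(g)) or 1.0            # guard against an all-zero vector
    p = np.abs(g) / s                       # P(nonzero) proportional to |g_i|
    return np.sign(g) * (rng.random(g.shape) < p)

grads = [rng.normal(size=8) for _ in range(9)]   # 9 clients, 8 coordinates
votes = np.sum([ternarize(g) for g in grads], axis=0)
update = np.sign(votes)                     # majority vote per coordinate
print(update)
```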
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
- Privacy for Fairness: Information Obfuscation for Fair Representation Learning with Local Differential Privacy [26.307780067808565]
This study introduces a theoretical framework that enables a comprehensive examination of the interplay between privacy and fairness.
We develop and analyze an information bottleneck (IB) based information obfuscation method with local differential privacy (LDP) for fair representation learning.
In contrast to many empirical studies on fairness in ML, we show that the incorporation of LDP randomizers during the encoding process can enhance the fairness of the learned representation.
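The simplest example of such an LDP randomizer is binary randomized response, sketched below; the paper's IB-based obfuscation is more elaborate than this.

```python
# Minimal epsilon-LDP randomizer (binary randomized response), the kind of
# local randomization injected during encoding; illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def randomized_response(bit, eps):
    # Report truthfully with probability e^eps / (e^eps + 1): eps-LDP.
    p_truth = np.exp(eps) / (np.exp(eps) + 1.0)
    return bit if rng.random() < p_truth else 1 - bit

sensitive = rng.integers(0, 2, size=10000)
reports = np.array([randomized_response(b, eps=1.0) for b in sensitive])
print("flip rate:", np.mean(reports != sensitive))   # ~ 1/(e + 1) ~ 0.27
```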
arXiv Detail & Related papers (2024-02-16T06:35:10Z)
- A Summary of Privacy-Preserving Data Publishing in the Local Setting [0.6749750044497732]
Statistical Disclosure Control aims to minimize the risk of exposing confidential information by de-identifying it.
We outline the current privacy-preserving techniques employed in microdata de-identification, delve into privacy measures tailored for various disclosure scenarios, and assess metrics for information loss and predictive performance.
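One classical technique in this de-identification toolbox is generalization of quasi-identifiers; the sketch below coarsens ages into decade bins and runs a k-anonymity-style group-size audit on toy records.

```python
# Illustrative SDC step: generalize a quasi-identifier (age -> decade bin)
# and audit whether every released group reaches size k.
from collections import Counter

records = [(23, "F"), (27, "F"), (25, "M"), (61, "M"), (64, "M"), (66, "M")]
k = 2

generalized = [(age // 10 * 10, sex) for age, sex in records]
groups = Counter(generalized)
print(dict(groups))
# A failing group (here (20, "M") of size 1) would need further
# generalization or suppression before release.
print("k-anonymous:", all(n >= k for n in groups.values()))
```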
arXiv Detail & Related papers (2023-12-19T04:23:23Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that preserve privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
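As a hedged sketch of parameter distortion, the snippet below adds noise to each parameter tensor before upload; the paper's contribution is choosing the distortion optimally per parameter, client, and round, which the fixed scale here does not capture.

```python
# Hedged sketch of "protection via distorting model parameters": a client
# perturbs its parameters before upload (fixed noise scale is a placeholder).
import numpy as np

rng = np.random.default_rng(4)

def distort(params, scale=0.1):
    return {name: w + rng.normal(scale=scale, size=w.shape)
            for name, w in params.items()}

client_params = {"layer1": rng.normal(size=(4, 4)), "bias": rng.normal(size=4)}
upload = distort(client_params)             # the server only ever sees this
print({name: w.shape for name, w in upload.items()})
```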
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
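The EVR control flow can be sketched as below, with a stub estimator and verifier standing in for the paper's actual constructions.

```python
# Skeleton of the estimate-verify-release (EVR) loop summarized above;
# the estimator and verifier are stubs, not the paper's constructions.
import numpy as np

rng = np.random.default_rng(5)

def estimate_epsilon(mechanism_scale):
    # Stub: a real EVR instantiation estimates the privacy parameter,
    # e.g. via Monte Carlo privacy accounting. (Laplace: eps = 1/scale
    # for sensitivity-1 queries.)
    return 1.0 / mechanism_scale

def verify(eps_estimate, budget):
    # Stub verifier: accept only if the estimate meets the stated budget.
    return eps_estimate <= budget

def evr_release(query_value, scale, budget):
    eps_hat = estimate_epsilon(scale)               # 1. estimate
    if not verify(eps_hat, budget):                 # 2. verify
        raise RuntimeError("guarantee not verified; refuse to release")
    return query_value + rng.laplace(scale=scale)   # 3. release

print(evr_release(query_value=42.0, scale=2.0, budget=1.0))
```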
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
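For the simplest discrete-valued mechanism, binary randomized response, the tight trade-off curve of an eps-DP mechanism is known in closed form (a standard $f$-DP fact, shown below); the paper derives tight curves of this kind for a wider family.

```python
# Known tight f-DP curve attained by an eps-DP mechanism such as binary
# randomized response (standard result, not this paper's new bounds):
#   f_eps(alpha) = max(0, 1 - e^eps * alpha, e^(-eps) * (1 - alpha))
import numpy as np

def rr_tradeoff(alpha, eps):
    return np.maximum.reduce([np.zeros_like(alpha),
                              1.0 - np.exp(eps) * alpha,
                              np.exp(-eps) * (1.0 - alpha)])

alpha = np.linspace(0.0, 1.0, 6)
print(np.round(rr_tradeoff(alpha, eps=1.0), 3))
```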
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Post-processing of Differentially Private Data: A Fairness Perspective [53.29035917495491]
This paper shows that post-processing causes disparate impacts on individuals or groups.
It analyzes two critical settings: the release of differentially private datasets and the use of such private datasets for downstream decisions.
It proposes a novel post-processing mechanism that is (approximately) optimal under different fairness metrics.
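A toy version of the disparate impact described above: clipping and rescaling noisy counts, a common post-processing step, distorts a small group far more than a large one.

```python
# Minimal illustration: post-processing noisy DP counts (clip at zero,
# rescale to an assumed-public total) hits the small group much harder.
import numpy as np

rng = np.random.default_rng(6)
true_counts = np.array([10_000.0, 50.0])          # large vs. small group
noisy = true_counts + rng.laplace(scale=100.0, size=2)   # DP-style noise
post = np.clip(noisy, 0.0, None)
post = post * true_counts.sum() / post.sum()      # rescale to the total
print("relative error per group:", np.abs(post - true_counts) / true_counts)
```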
arXiv Detail & Related papers (2022-01-24T02:45:03Z)
- On the Privacy-Utility Tradeoff in Peer-Review Data Analysis [34.0435377376779]
A major impediment to research on improving peer review is the unavailability of peer-review data.
We propose a framework for privacy-preserving release of certain conference peer-review data.
arXiv Detail & Related papers (2020-06-29T21:08:21Z)