When Fairness Meets Privacy: Fair Classification with Semi-Private
Sensitive Attributes
- URL: http://arxiv.org/abs/2207.08336v2
- Date: Mon, 29 May 2023 19:57:52 GMT
- Title: When Fairness Meets Privacy: Fair Classification with Semi-Private
Sensitive Attributes
- Authors: Canyu Chen, Yueqing Liang, Xiongxiao Xu, Shangyu Xie, Ashish Kundu,
Ali Payani, Yuan Hong, Kai Shu
- Abstract summary: We study a novel and practical problem of fair classification in a semi-private setting.
Most of the sensitive attributes are private and only a small number of clean ones are available.
We propose a novel framework FairSP that can achieve Fair prediction under the Semi-Private setting.
- Score: 18.221858247218726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models have demonstrated promising performance in many
areas. However, the concerns that they can be biased against specific
demographic groups hinder their adoption in high-stakes applications. Thus, it
is essential to ensure fairness in machine learning models. Most previous
efforts require direct access to sensitive attributes for mitigating bias.
Nonetheless, it is often infeasible to obtain users' sensitive attributes at
scale, given their concerns about privacy in the data collection
process. Privacy mechanisms such as local differential privacy (LDP) are widely
enforced on sensitive information in the data collection stage due to legal
compliance and people's increasing awareness of privacy. Therefore, a critical
problem is how to make fair predictions under privacy. We study a novel and
practical problem of fair classification in a semi-private setting, where most
of the sensitive attributes are private and only a small number of clean ones
are available. To this end, we propose a novel framework FairSP that can
achieve Fair prediction under the Semi-Private setting. First, FairSP learns to
correct the noise-protected sensitive attributes by exploiting the limited
clean sensitive attributes. Then, it jointly models the corrected and clean
data in an adversarial way for debiasing and prediction. Theoretical analysis
shows that the proposed model can ensure fairness under mild assumptions in the
semi-private setting. Extensive experimental results on real-world datasets
demonstrate the effectiveness of our method for making fair predictions under
privacy and maintaining high accuracy.
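As a rough illustration of the setting described above, the sketch below shows (i) sensitive attributes protected by a randomized-response LDP mechanism, (ii) a correction model fit on the small clean subset, and (iii) a predictor trained with a demographic-parity penalty that stands in for the paper's adversarial debiasing component. This is a toy example with synthetic data and hypothetical names, not the authors' FairSP implementation.

```python
# Illustrative sketch only; all names, data, and hyperparameters are hypothetical,
# and this is not the authors' FairSP implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def randomized_response(s, epsilon):
    """LDP via randomized response: keep a binary attribute with probability
    e^eps / (1 + e^eps), otherwise flip it."""
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    flip = rng.random(len(s)) > keep_prob
    return np.where(flip, 1 - s, s)

# Synthetic data: features X, label y, binary sensitive attribute s.
n, d = 5000, 10
X = rng.normal(size=(n, d))
s = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
y = (X[:, 1] + 0.8 * s + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Semi-private setting: most attributes pass through LDP; a small subset is clean.
s_noisy = randomized_response(s, epsilon=1.0)
clean_idx = rng.choice(n, size=200, replace=False)

# Stage 1: learn to correct the noise-protected attributes from the clean subset.
corrector = LogisticRegression(max_iter=1000)
corrector.fit(np.column_stack([X[clean_idx], s_noisy[clean_idx]]), s[clean_idx])
s_hat = corrector.predict(np.column_stack([X, s_noisy]))
s_hat[clean_idx] = s[clean_idx]  # keep the clean values where we have them

# Stage 2: fair prediction. A squared demographic-parity-gap penalty on the
# corrected attributes stands in for the paper's adversarial debiasing step.
def train_fair_logreg(X, y, s_hat, lam=2.0, lr=0.5, epochs=500):
    w = np.zeros(X.shape[1])
    g1, g0 = s_hat == 1, s_hat == 0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad_ce = X.T @ (p - y) / len(y)          # logistic-loss gradient
        gap = p[g1].mean() - p[g0].mean()          # demographic parity gap
        dp = p * (1 - p)                           # sigmoid derivative
        grad_gap = (dp[g1][:, None] * X[g1]).mean(axis=0) \
                 - (dp[g0][:, None] * X[g0]).mean(axis=0)
        w -= lr * (grad_ce + 2.0 * lam * gap * grad_gap)
    return w

w = train_fair_logreg(X, y, s_hat)
pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(int)
print("accuracy:", (pred == y).mean())
print("demographic parity gap:", abs(pred[s == 1].mean() - pred[s == 0].mean()))
```

In FairSP itself the debiasing step is adversarial (a discriminator tries to recover the sensitive attribute from the model); the penalty above only illustrates the shared goal of decorrelating predictions from the corrected sensitive attribute.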
Related papers
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvements in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z) - Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate the biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Fair Spatial Indexing: A paradigm for Group Spatial Fairness [6.640563753223598]
We propose techniques to mitigate location bias in machine learning.
We focus on spatial group fairness and we propose a spatial indexing algorithm that accounts for fairness.
arXiv Detail & Related papers (2023-02-05T05:15:11Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute
Inference Attacks [0.5801044612920815]
We propose Dikaios, a privacy auditing tool for fairness algorithms for model builders.
We show that our attribute inference attacks with adaptive prediction threshold significantly outperform prior attacks.
arXiv Detail & Related papers (2022-02-04T17:19:59Z) - Towards a Data Privacy-Predictive Performance Trade-off [2.580765958706854]
We evaluate the existence of a trade-off between data privacy and predictive performance in classification tasks.
Unlike previous literature, we confirm that the higher the level of privacy, the higher the impact on predictive performance.
arXiv Detail & Related papers (2022-01-13T21:48:51Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints; a minimal sketch of the Lagrangian-dual idea appears after this list.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
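As a rough, generic sketch of the Lagrangian-dual idea mentioned above (the differential-privacy machinery and the paper's actual architecture are omitted; data and names are hypothetical), a bound on the demographic parity gap can be enforced by pairing gradient descent on the model with dual ascent on a multiplier:

```python
# Generic illustration of Lagrangian duality for a fairness constraint
# (|demographic parity gap| <= tau). This is not the cited paper's code.
import numpy as np

rng = np.random.default_rng(1)
n, d = 4000, 8
X = rng.normal(size=(n, d))
s = (X[:, 0] > 0).astype(int)                          # sensitive attribute
y = (X[:, 1] + s + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

tau, lam, lr_w, lr_lam = 0.02, 0.0, 0.5, 0.05
w = np.zeros(d)
g1, g0 = s == 1, s == 0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    gap = p[g1].mean() - p[g0].mean()
    dp = p * (1 - p)
    grad_gap = (dp[g1][:, None] * X[g1]).mean(axis=0) \
             - (dp[g0][:, None] * X[g0]).mean(axis=0)
    # Primal step: descend on cross-entropy plus the weighted constraint term.
    grad = X.T @ (p - y) / n + lam * np.sign(gap) * grad_gap
    w -= lr_w * grad
    # Dual step: raise the multiplier while the constraint is violated.
    lam = max(0.0, lam + lr_lam * (abs(gap) - tau))

p = 1.0 / (1.0 + np.exp(-X @ w))
print("accuracy:", ((p > 0.5) == y).mean())
print("|DP gap|:", abs(p[g1].mean() - p[g0].mean()))
```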