On the Alignment of Group Fairness with Attribute Privacy
- URL: http://arxiv.org/abs/2211.10209v3
- Date: Tue, 5 Mar 2024 07:51:14 GMT
- Title: On the Alignment of Group Fairness with Attribute Privacy
- Authors: Jan Aalmoes and Vasisht Duddu and Antoine Boutet
- Abstract summary: Group fairness and privacy are fundamental aspects in designing trustworthy machine learning models.
We are the first to demonstrate the alignment of group fairness with the specific privacy notion of attribute privacy in a blackbox setting.
- Score: 1.6574413179773757
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Group fairness and privacy are fundamental aspects in designing trustworthy machine learning models. Previous research has highlighted conflicts between group fairness and different privacy notions. We are the first to demonstrate the alignment of group fairness with the specific privacy notion of attribute privacy in a blackbox setting. Attribute privacy, quantified by the resistance to attribute inference attacks (AIAs), requires indistinguishability in the target model's output predictions. Group fairness guarantees this indistinguishability, thereby mitigating AIAs and achieving attribute privacy. To demonstrate this, we first introduce AdaptAIA, an enhancement of existing AIAs tailored to real-world datasets with class imbalances in sensitive attributes. Through theoretical and extensive empirical analyses, we demonstrate the efficacy of two standard group fairness algorithms (i.e., adversarial debiasing and exponentiated gradient descent) against AdaptAIA. Additionally, since group fairness results in attribute privacy, it acts as a defense against AIAs, for which defenses are currently lacking. Overall, we show that group fairness aligns with attribute privacy at no additional cost other than the already existing trade-off with model utility.
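To make the abstract's claim concrete, below is a minimal sketch of a black-box attribute inference attack that learns the sensitive attribute from the target model's output scores. It is not the authors' AdaptAIA implementation; the synthetic data, model choices, and names are illustrative assumptions. Balanced accuracy is reported because the sensitive attribute is imbalanced, which is the setting AdaptAIA is designed for.

```python
# A minimal sketch (not the authors' AdaptAIA implementation) of a black-box
# attribute inference attack: the adversary maps the target model's output
# scores to the sensitive attribute. All data and model choices below are
# illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Synthetic data: the feature x correlates with the sensitive attribute s,
# so an unconstrained target model's scores leak s through its predictions.
n = 5000
s = rng.binomial(1, 0.3, size=n)                 # imbalanced sensitive attribute
x = rng.normal(loc=s, scale=1.0).reshape(-1, 1)  # feature correlated with s
y = (x[:, 0] + rng.normal(0.0, 1.0, size=n) > 0.5).astype(int)

# Black-box target model: the attacker only observes its output scores.
target = LogisticRegression().fit(x, y)
scores = target.predict_proba(x)[:, [1]]

# Attribute inference attack: learn s from the target's scores on an
# auxiliary split, then evaluate on held-out records.
sc_tr, sc_te, s_tr, s_te = train_test_split(scores, s, test_size=0.5,
                                            random_state=0)
attack = LogisticRegression().fit(sc_tr, s_tr)
print("AIA balanced accuracy:", balanced_accuracy_score(s_te, attack.predict(sc_te)))

# Demographic parity gap of the target's hard predictions: a group-fair
# target drives this gap (and the per-group score distributions) together,
# leaving the attack above no signal beyond random guessing.
preds = target.predict(x)
gap = abs(preds[s == 1].mean() - preds[s == 0].mean())
print("demographic parity gap:", gap)
```

Under a perfectly group-fair target, the per-group output distributions coincide, so the attack's balanced accuracy collapses to 0.5; this is the alignment the abstract refers to.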
Related papers
- Do Fairness Interventions Come at the Cost of Privacy: Evaluations for Binary Classifiers [17.243744418309593]
We assess the privacy risks of fairness-enhanced binary classifiers via membership inference attacks (MIAs) and attribute inference attacks (AIAs).
We uncover a potential threat mechanism that exploits prediction discrepancies between fair and biased models, leading to stronger attack results for both MIAs and AIAs (a toy sketch of this mechanism follows below).
Our study exposes under-explored privacy threats in fairness studies, advocating for thorough evaluation of potential security vulnerabilities before model deployment.
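The following toy sketch illustrates the general discrepancy mechanism, not the paper's actual attack: records whose predictions shift most between the biased (unconstrained) model and its fairness-corrected counterpart are guessed to be training members. The synthetic scores, the assumed behaviour that members shift more, and the threshold are all illustrative assumptions.

```python
# Toy sketch of a discrepancy-based membership inference attack: records on
# which the fair and the biased model disagree most are guessed to be
# training members. Synthetic scores and the threshold are illustrative
# assumptions, not the paper's attack.
import numpy as np

def discrepancy_mia(fair_scores, biased_scores, threshold=0.1):
    """Guess 'member' (1) when the two models' scores diverge strongly."""
    return (np.abs(fair_scores - biased_scores) > threshold).astype(int)

rng = np.random.default_rng(0)
n = 1000
# Assumed toy behaviour: the fairness intervention shifts predictions on
# training members more than on unseen records.
biased = rng.uniform(0.0, 1.0, size=2 * n)
shift = np.concatenate([rng.uniform(0.1, 0.4, size=n),   # members
                        rng.uniform(0.0, 0.1, size=n)])  # non-members
fair = np.clip(biased + shift, 0.0, 1.0)

truth = np.concatenate([np.ones(n), np.zeros(n)])
guesses = discrepancy_mia(fair, biased)
print("attack accuracy:", (guesses == truth).mean())
```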
arXiv Detail & Related papers (2025-03-08T10:21:21Z)
- Group fairness without demographics using social networks [29.073125057536014]
Group fairness is a popular approach to prevent unfavorable treatment of individuals based on sensitive attributes such as race, gender, and disability.
We propose a "group-free" measure of fairness that does not rely on sensitive attributes and, instead, is based on homophily in social networks.
arXiv Detail & Related papers (2023-05-19T00:45:55Z)
- Analysing Fairness of Privacy-Utility Mobility Models [11.387235721659378]
This work defines a set of fairness metrics designed explicitly for human mobility.
We examine the fairness of two state-of-the-art privacy-preserving models that rely on GAN and representation learning to reduce the re-identification rate of users for data sharing.
Our results show that while both models guarantee group fairness in terms of demographic parity, they violate individual fairness criteria, indicating that users with highly similar trajectories receive disparate privacy gain.
arXiv Detail & Related papers (2023-04-10T11:09:18Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Can Querying for Bias Leak Protected Attributes? Achieving Privacy With Smooth Sensitivity [14.564919048514897]
Existing regulations prohibit model developers from accessing protected attributes.
We show that simply querying for fairness metrics can leak the protected attributes of individuals to the model developers.
We propose Attribute-Conceal, a novel technique that achieves differential privacy by calibrating noise to the smooth sensitivity of the bias query (a simplified sketch of a noise-perturbed bias query follows below).
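The sketch below releases a demographic parity gap with Laplace noise. For brevity it calibrates the noise to a crude global-sensitivity bound under the assumption of fixed, public per-group counts; Attribute-Conceal instead calibrates to the smooth sensitivity of the query. All names and parameters are illustrative.

```python
# Simplified sketch of a differentially private bias query: release the
# demographic parity gap with Laplace noise. Attribute-Conceal calibrates
# noise to the query's *smooth* sensitivity; this sketch uses a cruder
# global-sensitivity bound, assuming fixed, public per-group counts.
import numpy as np

def private_parity_gap(preds, groups, epsilon, rng):
    """Release |P(pred=1 | g=1) - P(pred=1 | g=0)| under epsilon-DP."""
    gap = abs(preds[groups == 1].mean() - preds[groups == 0].mean())
    # Changing one individual's prediction moves one group's rate by at most
    # 1/n_g, so under the assumptions above the gap moves by at most this:
    sensitivity = 1.0 / min((groups == 1).sum(), (groups == 0).sum())
    return gap + rng.laplace(scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
groups = rng.binomial(1, 0.4, size=2000)
preds = rng.binomial(1, np.where(groups == 1, 0.7, 0.5))  # biased predictions
print("noisy parity gap:", private_parity_gap(preds, groups, epsilon=1.0, rng=rng))
```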
arXiv Detail & Related papers (2022-11-03T20:44:48Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes [18.221858247218726]
We study a novel and practical problem of fair classification in a semi-private setting.
Most of the sensitive attributes are private, and only a small number of clean ones are available.
We propose FairSP, a novel framework that can achieve fair prediction under the semi-private setting.
arXiv Detail & Related papers (2022-07-18T01:10:25Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute Inference Attacks [0.5801044612920815]
We propose Dikaios, a privacy auditing tool for fairness algorithms, aimed at model builders.
We show that our attribute inference attacks with adaptive prediction threshold significantly outperform prior attacks.
arXiv Detail & Related papers (2022-02-04T17:19:59Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)