Learning Certified Individually Fair Representations
- URL: http://arxiv.org/abs/2002.10312v2
- Date: Sat, 28 Nov 2020 18:17:25 GMT
- Title: Learning Certified Individually Fair Representations
- Authors: Anian Ruoss, Mislav Balunović, Marc Fischer, and Martin Vechev
- Abstract summary: A desirable family of fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness.
We introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points.
- Score: 15.416929083117596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fair representation learning provides an effective way of enforcing fairness
constraints without compromising utility for downstream users. A desirable
family of such fairness constraints, each requiring similar treatment for
similar individuals, is known as individual fairness. In this work, we
introduce the first method that enables data consumers to obtain certificates
of individual fairness for existing and new data points. The key idea is to map
similar individuals to close latent representations and leverage this latent
proximity to certify individual fairness. That is, our method enables the data
producer to learn and certify a representation in which, for a given data
point, all similar individuals lie at $\ell_\infty$-distance at most
$\epsilon$, thus
allowing data consumers to certify individual fairness by proving
$\epsilon$-robustness of their classifier. Our experimental evaluation on five
real-world datasets and several fairness constraints demonstrates the
expressivity and scalability of our approach.
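As a concrete illustration of the consumer-side check described above: if the producer certifies that every individual similar to a point maps within $\ell_\infty$-distance $\epsilon$ of its latent representation $z$, then $\epsilon$-robustness of the classifier at $z$ implies individual fairness at that point. The sketch below (not the authors' code; `certify_linear_head` and the random data are hypothetical) performs this check exactly for a linear classification head, where $\ell_\infty$-robustness reduces to an $\ell_1$-norm margin condition.

```python
import numpy as np

def certify_linear_head(W, b, z, eps):
    """Exact eps-robustness check for a linear head: returns True if
    argmax(W z + b) is unchanged for every z' with ||z' - z||_inf <= eps."""
    logits = W @ z + b
    c = int(np.argmax(logits))
    for j in range(W.shape[0]):
        if j == c:
            continue
        # worst-case decrease of the margin over the l_inf ball is
        # eps * ||W[c] - W[j]||_1 (Hoelder's inequality, tight)
        if logits[c] - logits[j] <= eps * np.linalg.norm(W[c] - W[j], ord=1):
            return False
    return True

# hypothetical usage with a random head and latent point
rng = np.random.default_rng(0)
W, b, z = rng.normal(size=(3, 8)), rng.normal(size=3), rng.normal(size=8)
print(certify_linear_head(W, b, z, eps=0.05))
```

For deeper classification heads, the same role would be played by a neural-network robustness verifier rather than this closed-form check.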
Related papers
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators over reproducing kernel Hilbert spaces (RKHS).
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
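FairCOCCO itself is defined via normalized cross-covariance operators; as a rough, hypothetical illustration of the underlying idea, the sketch below computes the closely related HSIC statistic, a kernel measure of dependence between model outputs and a sensitive attribute (driving it toward zero encourages fairness).

```python
import numpy as np

def rbf_kernel(x, gamma=1.0):
    # pairwise squared distances -> Gaussian kernel matrix
    d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * d2)

def hsic(x, s, gamma=1.0):
    """Biased HSIC estimate of dependence between x and s (rows = samples).
    Not FairCOCCO itself; a related kernel dependence measure."""
    n = x.shape[0]
    K, L = rbf_kernel(x, gamma), rbf_kernel(s, gamma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
preds = rng.normal(size=(100, 1))                       # model outputs
sens = rng.integers(0, 2, size=(100, 1)).astype(float)  # sensitive attribute
print(hsic(preds, sens))  # near zero when preds and sens are independent
```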
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
For protecting user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
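FairVFL's actual objective is contrastive adversarial learning; the sketch below shows only a generic min-max variant of the same idea, training an encoder to preserve a task signal while defeating an adversary that tries to recover the private attribute from the unified representation. All module names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
enc = nn.Linear(16, 8)   # produces the unified representation
adv = nn.Linear(8, 2)    # adversary: tries to recover the private attribute
task = nn.Linear(8, 2)   # downstream prediction head
opt_main = torch.optim.Adam(list(enc.parameters()) + list(task.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

x = torch.randn(32, 16)
y = torch.randint(0, 2, (32,))   # task label
p = torch.randint(0, 2, (32,))   # private attribute

for _ in range(100):
    # 1) adversary learns to predict the private attribute from z
    opt_adv.zero_grad()
    ce(adv(enc(x).detach()), p).backward()
    opt_adv.step()
    # 2) encoder/task head learn the task while *maximizing* the
    #    adversary's loss, pushing private information out of z
    opt_main.zero_grad()
    z = enc(x)
    (ce(task(z), y) - ce(adv(z), p)).backward()
    opt_main.step()
```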
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- SoFaiR: Single Shot Fair Representation Learning [24.305894478899948]
SoFaiR is a single-shot fair representation learning method that generates, with one trained model, many points on the fairness-information plane.
We find on three datasets that SoFaiR achieves similar fairness-information trade-offs as its multi-shot counterparts.
arXiv Detail & Related papers (2022-04-26T19:31:30Z)
- Achieving Fairness at No Utility Cost via Data Reweighing with Influence [27.31236521189165]
We propose a data reweighing approach that only adjusts the weights of samples in the training phase.
We granularly model the influence of each training sample with respect to a fairness-related quantity and predictive utility.
Our approach can empirically resolve the trade-off and obtain cost-free fairness for equal opportunity.
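The paper models per-sample influence carefully; the sketch below is only a crude first-order proxy, scoring each training sample by how its gradient aligns with reducing a group fairness gap and reweighing accordingly. All quantities here are illustrative.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (rng.random(200) < 0.5).astype(float)   # task labels
s = rng.integers(0, 2, size=200)            # sensitive attribute
w = 0.1 * rng.normal(size=5)                # current model parameters

# per-sample gradients of the logistic loss: (sigma(x.w) - y) * x
G = (sigmoid(X @ w) - y)[:, None] * X

# gradient of a fairness objective: squared gap in mean predicted score
p = sigmoid(X @ w)
gap = p[s == 1].mean() - p[s == 0].mean()
dp = p * (1 - p)
g_fair = 2 * gap * ((dp[s == 1, None] * X[s == 1]).mean(axis=0)
                    - (dp[s == 0, None] * X[s == 0]).mean(axis=0))

# alignment score: positive means a descent step on this sample also
# shrinks the fairness gap, so the sample is upweighted
scores = G @ g_fair
weights = np.clip(1.0 + scores / (np.abs(scores).max() + 1e-12), 0.0, 2.0)
print(weights[:5])
```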
arXiv Detail & Related papers (2022-02-01T22:12:17Z)
- Latent Space Smoothing for Individually Fair Representations [12.739528232133495]
We introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to capture the set of similar individuals in the generative latent space.
We employ randomized smoothing to provably map similar individuals close together, in turn ensuring that local robustness verification of the downstream application results in end-to-end fairness certification.
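As a toy illustration of the certification primitive (Cohen et al.-style randomized smoothing) rather than LASSI itself, the sketch below estimates the smoothed class of a base classifier and a corresponding certified $\ell_2$ radius; LASSI applies this machinery in a generative latent space.

```python
import numpy as np
from scipy.stats import norm

def certify(f, x, sigma=0.25, n=10_000, rng=None):
    """Monte Carlo smoothed prediction and certified l2 radius.
    A sound certificate would lower-bound p_hat (e.g. Clopper-Pearson);
    the point estimate is used here for brevity."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(scale=sigma, size=(n,) + x.shape)
    classes = np.array([f(x + d) for d in noise])
    top = int(np.bincount(classes).argmax())
    p_hat = min((classes == top).mean(), 1 - 1e-9)  # avoid infinite radius
    radius = sigma * norm.ppf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius

f = lambda v: int(v.sum() > 0)   # hypothetical base classifier
print(certify(f, np.array([0.3, 0.2])))
```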
arXiv Detail & Related papers (2021-11-26T18:22:42Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces results in improved fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- Fair Densities via Boosting the Sufficient Statistics of Exponential Families [72.34223801798422]
We introduce a boosting algorithm to pre-process data for fairness.
Our approach shifts towards better data fitting while still ensuring a minimal fairness guarantee.
Empirical results are presented to demonstrate the quality of the results on real-world data.
arXiv Detail & Related papers (2020-12-01T00:49:17Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
- Learning Smooth and Fair Representations [24.305894478899948]
This paper explores the ability to preemptively remove the correlations between features and sensitive attributes by mapping features to a fair representation space.
Empirically, we find that smoothing the representation distribution provides generalization guarantees of fairness certificates.
We also observe that smoothing the representation distribution does not degrade downstream-task accuracy compared to state-of-the-art methods in fair representation learning.
arXiv Detail & Related papers (2020-06-15T21:51:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.