Latent Space Smoothing for Individually Fair Representations
- URL: http://arxiv.org/abs/2111.13650v1
- Date: Fri, 26 Nov 2021 18:22:42 GMT
- Title: Latent Space Smoothing for Individually Fair Representations
- Authors: Momchil Peychev, Anian Ruoss, Mislav Balunović, Maximilian Baader, Martin Vechev
- Abstract summary: We introduce LASSI, the first representation learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to capture the set of similar individuals in the generative latent space.
We employ randomized smoothing to provably map similar individuals close together, in turn ensuring that local robustness verification of the downstream application results in end-to-end fairness certification.
- Score: 12.739528232133495
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fair representation learning encodes user data to ensure fairness and
utility, regardless of the downstream application. However, learning
individually fair representations, i.e., guaranteeing that similar individuals
are treated similarly, remains challenging in high-dimensional settings such as
computer vision. In this work, we introduce LASSI, the first representation
learning method for certifying individual fairness of high-dimensional data.
Our key insight is to leverage recent advances in generative modeling to
capture the set of similar individuals in the generative latent space. This
allows learning individually fair representations where similar individuals are
mapped close together, by using adversarial training to minimize the distance
between their representations. Finally, we employ randomized smoothing to
provably map similar individuals close together, in turn ensuring that local
robustness verification of the downstream application results in end-to-end
fairness certification. Our experimental evaluation on challenging real-world
image data demonstrates that our method increases certified individual fairness
by up to 60%, without significantly affecting task utility.
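To make the pipeline above concrete, here is a minimal sketch, assuming a PyTorch-style setup, of the adversarial component of such a training step: similar individuals are modeled as latent codes shifted along a sensitive-attribute direction, and the encoder is trained to minimize the representation distance to the worst-case similar individual found by projected gradient ascent. All names (generator, encoder, attr_direction, epsilon) are illustrative assumptions, not the authors' actual interface, and the certification step (randomized smoothing of the encoder) is only noted in a comment.

```python
# Minimal sketch of a LASSI-style adversarial fairness step (hypothetical
# names and shapes; not the authors' actual implementation). At certification
# time, LASSI additionally applies randomized smoothing to the encoder to
# obtain provable guarantees; that step is omitted here.
import torch

def fairness_loss(encoder, generator, z, attr_direction,
                  epsilon=1.0, adv_steps=5, adv_lr=0.5):
    """Worst-case representation distance over the similarity set
    {generator(z + t * attr_direction) : |t| <= epsilon}.

    z              -- batch of generative latent codes, shape (B, D)
    attr_direction -- unit latent direction for a sensitive attribute, (D,)
    """
    x = generator(z)                      # original individuals
    rep = encoder(x)
    rep_ref = rep.detach()                # fixed anchor for the inner search

    # Projected gradient ascent over a scalar shift t per sample,
    # starting from a random point so the gradient is non-zero.
    t = (torch.rand(z.size(0), 1, device=z.device) * 2 - 1) * epsilon
    t.requires_grad_(True)
    for _ in range(adv_steps):
        x_sim = generator(z + t * attr_direction)
        dist = (encoder(x_sim) - rep_ref).pow(2).sum(dim=1).mean()
        grad, = torch.autograd.grad(dist, t)
        t = (t + adv_lr * grad.sign()).clamp(-epsilon, epsilon)
        t = t.detach().requires_grad_(True)

    # Train the encoder to pull the worst-case similar individual close
    # (in practice the pretrained generator would typically stay frozen).
    x_adv = generator(z + t.detach() * attr_direction)
    return (encoder(x_adv) - rep).pow(2).sum(dim=1).mean()


if __name__ == "__main__":
    D = 16
    generator = torch.nn.Linear(D, 32)    # stand-in for a pretrained generator
    encoder = torch.nn.Linear(32, 8)      # stand-in for the representation encoder
    z = torch.randn(4, D)
    direction = torch.nn.functional.normalize(torch.randn(D), dim=0)
    loss = fairness_loss(encoder, generator, z, direction)
    loss.backward()                       # gradients flow into the encoder
    print(float(loss))
```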
Related papers
- Balancing the Scales: Enhancing Fairness in Facial Expression Recognition with Latent Alignment [5.784550537553534]
This work leverages representation learning based on latent spaces to mitigate bias in facial expression recognition systems.
It also enhances a deep learning model's fairness and overall accuracy.
arXiv Detail & Related papers (2024-10-25T10:03:10Z)
- Toward Fairer Face Recognition Datasets [69.04239222633795]
Face recognition and verification are computer vision tasks whose performance has progressed with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development.
We promote fairness by introducing a demographic attributes balancing mechanism in generated training datasets.
arXiv Detail & Related papers (2024-06-24T12:33:21Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
For protecting user privacy, we propose a contrastive adversarial learning method to remove private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- SoFaiR: Single Shot Fair Representation Learning [24.305894478899948]
SoFaiR is a single-shot fair representation learning method that generates many points on the fairness-information plane with a single trained model.
We find on three datasets that SoFaiR achieves similar fairness-information trade-offs as its multi-shot counterparts.
arXiv Detail & Related papers (2022-04-26T19:31:30Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations (a generic sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces result in an improvement of fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates of the low-dimensional local parameters for every update of the shared representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
- Learning Smooth and Fair Representations [24.305894478899948]
This paper explores the ability to preemptively remove the correlations between features and sensitive attributes by mapping features to a fair representation space.
Empirically, we find that smoothing the representation distribution provides generalization guarantees of fairness certificates.
We do not observe that smoothing the representation distribution degrades downstream task accuracy compared with state-of-the-art methods in fair representation learning.
arXiv Detail & Related papers (2020-06-15T21:51:50Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To keep training on this enlarged dataset tractable, we apply a dataset distillation strategy that compresses it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- Learning Certified Individually Fair Representations [15.416929083117596]
A desirable family of fairness constraints, each requiring similar treatment for similar individuals, is known as individual fairness.
We introduce the first method that enables data consumers to obtain certificates of individual fairness for existing and new data points.
arXiv Detail & Related papers (2020-02-24T15:41:34Z)
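As a companion to the Contrastive Learning for Fair Representations entry above, the following is a hedged sketch of the general mechanism it describes: a supervised contrastive objective that pulls together instances sharing a class label. This is the generic form of such a loss, not necessarily the paper's exact formulation.

```python
# Generic supervised contrastive loss: same-label instances are encouraged
# to have similar representations (an illustration of the idea, not the
# paper's exact objective).
import torch
import torch.nn.functional as F

def same_label_contrastive_loss(reps, labels, temperature=0.1):
    """reps: (B, D) representations; labels: (B,) integer class labels."""
    reps = F.normalize(reps, dim=1)
    sim = reps @ reps.t() / temperature          # pairwise cosine similarities
    eye = torch.eye(len(labels), dtype=torch.bool, device=reps.device)
    sim = sim.masked_fill(eye, float('-inf'))    # exclude self-pairs
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~eye   # same-label pairs
    # Mean log-likelihood of positives per anchor; anchors without any
    # positive in the batch are skipped.
    has_pos = pos.any(dim=1)
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return -per_anchor[has_pos].mean()


if __name__ == "__main__":
    reps = torch.randn(8, 16, requires_grad=True)
    labels = torch.randint(0, 3, (8,))
    loss = same_label_contrastive_loss(reps, labels)
    loss.backward()
    print(float(loss))
```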