Learning Unbiased Representations via Rényi Minimization
- URL: http://arxiv.org/abs/2009.03183v1
- Date: Mon, 7 Sep 2020 15:48:24 GMT
- Title: Learning Unbiased Representations via Rényi Minimization
- Authors: Vincent Grari, Oualid El Hajouji, Sylvain Lamprier, Marcin Detyniecki
- Abstract summary: We propose an adversarial algorithm to learn unbiased representations via the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation coefficient.
We empirically evaluate and compare our approach and demonstrate significant improvements over existing works in the field.
- Score: 13.61565693336172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, significant work has been done to include fairness
constraints in the training objective of machine learning algorithms. Many
state-of-the-art algorithms tackle this challenge by learning a fair
representation which captures all the information relevant to predicting the
output Y while containing no information about a sensitive attribute S. In
this paper, we propose an adversarial algorithm to learn unbiased
representations via the Hirschfeld-Gebelein-Rényi (HGR) maximal correlation
coefficient. We leverage recent work on estimating this coefficient by
learning deep neural network transformations, and use it in a min-max game to
penalize the intrinsic bias in a multi-dimensional latent representation.
Compared to other dependence measures, the HGR coefficient captures more
information about non-linear dependencies with the sensitive variable, making
the algorithm more effective at mitigating bias in the representation. We
empirically evaluate and compare our approach, demonstrating significant
improvements over existing work in the field.
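The HGR coefficient between the latent code Z and the sensitive attribute S is HGR(Z, S) = sup_{f, g} ρ(f(Z), g(S)), where ρ is the Pearson correlation and f, g range over square-integrable transformations. Below is a minimal PyTorch sketch of the resulting min-max scheme: an adversary tightens a neural estimate of HGR while the encoder and predictor minimize the task loss plus the penalty. The network sizes, inner-step count, and penalty weight are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small feed-forward net, reused for the encoder, the predictor,
    and the two adversarial transformations f and g."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)

def standardize(t, eps=1e-8):
    return (t - t.mean()) / (t.std() + eps)

def hgr_estimate(f_net, g_net, z, s):
    """Neural estimate of HGR(Z, S) = sup_{f,g} corr(f(Z), g(S)):
    Pearson correlation of the standardized outputs of f and g."""
    fz = standardize(f_net(z).squeeze(-1))
    gs = standardize(g_net(s).squeeze(-1))
    return (fz * gs).mean()

# Illustrative dimensions and penalty weight (assumptions, not the paper's values).
x_dim, z_dim, s_dim, lam = 20, 8, 1, 1.0
enc, pred = MLP(x_dim, z_dim), MLP(z_dim, 1)
f_net, g_net = MLP(z_dim, 1), MLP(s_dim, 1)
opt_main = torch.optim.Adam(list(enc.parameters()) + list(pred.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(list(f_net.parameters()) + list(g_net.parameters()), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def train_step(x, y, s, adv_steps=5):
    # Inner maximization: the adversary tightens the HGR estimate.
    for _ in range(adv_steps):
        opt_adv.zero_grad()
        (-hgr_estimate(f_net, g_net, enc(x).detach(), s)).backward()
        opt_adv.step()
    # Outer minimization: task loss plus the HGR penalty on the latent code.
    opt_main.zero_grad()
    z = enc(x)
    loss = bce(pred(z).squeeze(-1), y) + lam * hgr_estimate(f_net, g_net, z, s)
    loss.backward()
    opt_main.step()
    return loss.item()
```

Alternating the two steps pushes the correlation between any learned transformations of Z and S toward zero, which is stronger than removing linear correlation with S alone.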
Related papers
- Automatic debiasing of neural networks via moment-constrained learning [0.0]
Naively learning the regression function and taking a sample mean of the target functional results in biased estimators.
We propose moment-constrained learning as a new Riesz representer (RR) learning approach that addresses some shortcomings of automatic debiasing.
arXiv Detail & Related papers (2024-09-29T20:56:54Z)
- Performance Evaluation and Comparison of a New Regression Algorithm [4.125187280299247]
We compare the performance of a newly proposed regression algorithm against four conventional machine learning algorithms.
The reader is free to replicate our results since we have provided the source code in a GitHub repository.
arXiv Detail & Related papers (2023-06-15T13:01:16Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- RényiCL: Contrastive Representation Learning with Skew Rényi Divergence [78.15455360335925]
We present a new robust contrastive learning scheme, coined RényiCL, which can effectively manage harder augmentations.
Our method is built upon the variational lower bound of Rényi divergence.
We show that Rényi contrastive learning objectives perform innate hard negative sampling and easy positive sampling simultaneously.
arXiv Detail & Related papers (2022-08-12T13:37:05Z)
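For reference (a standard definition, not the skewed variant specific to RényiCL), the Rényi divergence of order $\alpha$ between distributions $P$ and $Q$ is
$$ R_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \mathbb{E}_{Q}\!\left[\left(\tfrac{dP}{dQ}\right)^{\alpha}\right], \qquad \alpha > 0, \ \alpha \neq 1, $$
and it recovers the Kullback-Leibler divergence as $\alpha \to 1$. Contrastive objectives in this family typically bound a divergence between the joint distribution of positive pairs and the product of the marginals; KL-based bounds of this kind yield InfoNCE-style losses, and RényiCL swaps the KL for a skewed Rényi divergence.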
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically, we show that this algorithm works well on a diverse set of tasks; in particular, we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Relational Surrogate Loss Learning [41.61184221367546]
This paper revisits surrogate loss learning, where a deep neural network is employed to approximate the evaluation metrics.
We show that directly maintaining the order relation between surrogate losses and metrics across models suffices.
Our method is much easier to optimize and enjoys significant efficiency and performance gains.
arXiv Detail & Related papers (2022-02-26T17:32:57Z)
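A hedged sketch of the idea in the entry above: rather than regressing metric values directly, train the surrogate so that its scores stay order-consistent with the metric across a batch of model outputs. Pearson correlation is used here as a simple differentiable stand-in for a ranking correlation; all names and shapes are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn as nn

class Surrogate(nn.Module):
    """Scores a batch of (prediction, target) feature vectors; trained so
    its scores correlate with a non-differentiable evaluation metric."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, feats):
        return self.net(feats).squeeze(-1)

def pearson(a, b, eps=1e-8):
    a = (a - a.mean()) / (a.std() + eps)
    b = (b - b.mean()) / (b.std() + eps)
    return (a * b).mean()

def surrogate_step(surrogate, opt, feats, metric_values):
    """feats: per-example features of (prediction, target) pairs;
    metric_values: the true metric, precomputed outside the graph."""
    opt.zero_grad()
    scores = surrogate(feats)
    # Maximize correlation between surrogate scores and metric values,
    # i.e. keep the surrogate order-consistent with the metric.
    loss = -pearson(scores, metric_values)
    loss.backward()
    opt.step()
    return loss.item()
```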
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
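As background for the entry above, the classical Laplace approximation to the log marginal likelihood (the quantity the paper makes differentiable through a Kronecker-factored approximation) is
$$ \log p(\mathcal{D}) \approx \log p(\mathcal{D} \mid \theta_*) + \log p(\theta_*) + \frac{d}{2} \log 2\pi - \frac{1}{2} \log \det H, $$
where $\theta_*$ is a MAP estimate, $d$ is the number of parameters, and $H = -\nabla_\theta^2 \log p(\mathcal{D}, \theta) \big|_{\theta = \theta_*}$ is the Hessian of the negative log joint. Differentiating this objective with respect to augmentation parameters gives the gradient-based selection rule the summary refers to.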
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) strategies are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
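The gradient reversal method (GRAD) mentioned in the last entry reduces to a few lines. The sketch below is the standard gradient reversal layer of Ganin & Lempitsky in PyTorch, given as a generic illustration rather than the exact setup studied in that paper.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda
    on the backward pass, so the feature extractor is trained to remove
    information that a downstream head (e.g. a protected-attribute
    classifier) could exploit."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: features -> grad_reverse -> attribute classifier. Minimizing the
# classifier's loss then pushes the encoder to make the protected
# attribute unpredictable from the features.
```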