Fair Interpretable Learning via Correction Vectors
- URL: http://arxiv.org/abs/2201.06343v1
- Date: Mon, 17 Jan 2022 10:59:33 GMT
- Title: Fair Interpretable Learning via Correction Vectors
- Authors: Mattia Cerrato, Marius Köppel, Alexander Segner and Stefan Kramer
- Abstract summary: We propose a new framework for fair representation learning centered around the learning of "correction vectors".
The corrections are simply added to the original features and can therefore be read as an explicit penalty or bonus on each feature.
We show experimentally that constraining a fair representation learning problem in this way does not degrade performance.
- Score: 68.29997072804537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural network architectures have been extensively employed in the fair
representation learning setting, where the objective is to learn a new
representation for a given vector which is independent of sensitive
information. Various "representation debiasing" techniques have been proposed
in the literature. However, as neural networks are inherently opaque, these
methods are hard to comprehend, which limits their usefulness. We propose a new
framework for fair representation learning which is centered around the
learning of "correction vectors", which have the same dimensionality as the
given data vectors. The corrections are then simply added to the original
features, and can therefore be analyzed as an explicit penalty or bonus on each
feature. We show experimentally that a fair representation learning problem
constrained in such a way does not degrade performance.
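As a rough illustration of the correction-vector idea, the sketch below shows how a corrected representation x + w(x) can be computed and inspected per feature. It assumes PyTorch, a small MLP as the correction network, and a plain linear task head; none of these choices are specified in the abstract, and the fairness constraint itself is not described here, so it is left out of the sketch.

```python
# Minimal sketch of the correction-vector idea described in the abstract.
# Assumptions (not from the paper): PyTorch, a small MLP as the correction
# network, a linear classifier head, and no specific fairness penalty.
import torch
import torch.nn as nn

class CorrectionVectorModel(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Produces a correction vector w(x) with the same dimensionality as x.
        self.correction = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_features),
        )
        # Task head operating on the corrected features x + w(x).
        self.classifier = nn.Linear(n_features, 1)

    def forward(self, x: torch.Tensor):
        w = self.correction(x)   # explicit per-feature penalty/bonus
        x_fair = x + w           # corrected representation
        return self.classifier(x_fair), w

# Usage: the returned w can be read directly as the per-feature correction.
model = CorrectionVectorModel(n_features=10)
logits, w = model(torch.randn(4, 10))
print(w[0])  # correction applied to the first sample's features
```

During training, a fairness term (for example, a penalty encouraging independence of x + w(x) from the sensitive attribute) would be added to the task loss; the abstract does not say which constraint the authors use, so that choice is left open here.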
Related papers
- Learning sparse features can lead to overfitting in neural networks [9.2104922520782]
We show that feature learning can perform worse than lazy training.
Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth.
arXiv Detail & Related papers (2022-06-24T14:26:33Z) - Fair Interpretable Representation Learning with Correction Vectors [60.0806628713968]
We propose a new framework for fair representation learning that is centered around the learning of "correction vectors".
We show experimentally that several fair representation learning models constrained in such a way do not exhibit losses in ranking or classification performance.
arXiv Detail & Related papers (2022-02-07T11:19:23Z) - Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and are transferable to a new task in a sample-efficient manner.
arXiv Detail & Related papers (2021-10-12T23:22:45Z) - Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z) - Projection-wise Disentangling for Fair and Interpretable Representation Learning: Application to 3D Facial Shape Analysis [4.716274324450199]
Confounding bias is a crucial problem when applying machine learning in practice, especially in clinical settings.
We consider the problem of learning representations that are independent of multiple biases.
We propose to mitigate the bias while keeping almost all information in the latent representations, which enables us to observe and interpret them as well.
arXiv Detail & Related papers (2021-06-25T16:09:56Z) - Learning to Ignore: Fair and Task Independent Representations [0.7106986689736827]
In this work we show that fairness and task independence can be seen within a common framework of learning invariant representations.
The representations should allow prediction of the target while at the same time being invariant to sensitive attributes that split the dataset into subgroups.
Our approach is based on the simple observation that it is impossible for any learning algorithm to differentiate samples if they have the same feature representation.
arXiv Detail & Related papers (2021-01-11T17:33:18Z) - Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z) - Predicting What You Already Know Helps: Provable Self-Supervised Learning [60.27658820909876]
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data.
We show a mechanism exploiting the statistical connections between certain reconstruction-based pretext tasks that guarantees learning a good representation.
We prove that the linear layer yields a small approximation error even for complex ground-truth function classes.
arXiv Detail & Related papers (2020-08-03T17:56:13Z)