Learning to Ignore: Fair and Task Independent Representations
- URL: http://arxiv.org/abs/2101.04047v2
- Date: Thu, 21 Jan 2021 09:01:04 GMT
- Title: Learning to Ignore: Fair and Task Independent Representations
- Authors: Linda H. Boedi and Helmut Grabner
- Abstract summary: In this work we show that fairness, interpretability, and domain shift can be addressed within a common framework of learning invariant representations.
The representations should allow prediction of the target while at the same time being invariant to sensitive attributes that split the dataset into subgroups.
Our approach is based on the simple observation that it is impossible for any learning algorithm to differentiate samples if they have the same feature representation.
- Score: 0.7106986689736827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training fair machine learning models, aiming for their interpretability,
and solving the problem of domain shift have gained a lot of interest in recent
years. There is a vast amount of work addressing these topics, mostly in
isolation. In this work we show that they can be seen as instances of a common
framework of learning invariant representations. The representations should
allow prediction of the target while at the same time being invariant to
sensitive attributes that split the dataset into subgroups. Our approach is
based on the simple observation that it is impossible for any learning algorithm
to differentiate samples if they have the same feature representation. This is
formulated as an additional loss (regularizer) enforcing a common feature
representation across subgroups. We apply it to learn fair models and to
interpret the influence of the sensitive attribute. Furthermore, it can be used
for domain adaptation, knowledge transfer, and learning effectively from very
few examples. In all applications it is essential not only to learn to predict
the target, but also to learn what to ignore.
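The mechanism described in the abstract, a task loss augmented by a regularizer that pushes the feature representations of the sensitive-attribute subgroups together, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example and not the authors' implementation; the encoder architecture, the mean-matching penalty, and the weight `lambda_reg` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps inputs to a feature representation (illustrative architecture)."""
    def __init__(self, in_dim: int, feat_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim)
        )

    def forward(self, x):
        return self.net(x)

def subgroup_alignment_loss(feats: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """Squared distance between the mean representations of the two subgroups
    defined by a binary sensitive attribute (one simple choice of regularizer).
    Assumes both subgroups are present in the batch."""
    mean_0 = feats[group == 0].mean(dim=0)
    mean_1 = feats[group == 1].mean(dim=0)
    return (mean_0 - mean_1).pow(2).sum()

def training_step(encoder, head, x, y, group, lambda_reg: float = 1.0):
    """Task loss plus the regularizer enforcing a common representation
    across subgroups: learn to predict the target, learn to ignore the attribute."""
    feats = encoder(x)
    task_loss = F.cross_entropy(head(feats), y)
    reg_loss = subgroup_alignment_loss(feats, group)
    return task_loss + lambda_reg * reg_loss

# Usage sketch: a linear classification head on top of the shared encoder.
encoder = Encoder(in_dim=10)
head = nn.Linear(32, 2)
x = torch.randn(16, 10)
y = torch.randint(0, 2, (16,))
group = torch.randint(0, 2, (16,))  # binary sensitive attribute
loss = training_step(encoder, head, x, y, group)
loss.backward()
```

In this reading, `lambda_reg` trades off target accuracy against invariance to the sensitive attribute; the same scheme could be reused with the subgroups defined by domains rather than protected attributes, which is how the abstract connects fairness to domain adaptation.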
Related papers
- IMO: Greedy Layer-Wise Sparse Representation Learning for Out-of-Distribution Text Classification with Pre-trained Models [56.10157988449818]
This study focuses on a specific problem of domain generalization, where a model is trained on one source domain and tested on multiple target domains that are unseen during training.
We propose IMO: Invariant features Masks for Out-of-Distribution text classification, to achieve OOD generalization by learning invariant features.
arXiv Detail & Related papers (2024-04-21T02:15:59Z) - Distributional Shift Adaptation using Domain-Specific Features [41.91388601229745]
In open-world scenarios, streaming big data can be Out-Of-Distribution (OOD).
We propose a simple yet effective approach that relies on correlations in general, regardless of whether the features are invariant or not.
Our approach uses the most confidently predicted samples identified by an OOD base model to train a new model that effectively adapts to the target domain.
arXiv Detail & Related papers (2022-11-09T04:16:21Z) - Fair Interpretable Learning via Correction Vectors [68.29997072804537]
We propose a new framework for fair representation learning centered around the learning of "correction vectors"
The corrections are then simply added to the original features and can therefore be analyzed as an explicit penalty or bonus applied to each feature.
We show experimentally that a fair representation learning problem constrained in such a way does not impact performance.
arXiv Detail & Related papers (2022-01-17T10:59:33Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Revisiting Contrastive Methods for Unsupervised Learning of Visual
Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across (i) object- versus scene-centric, (ii) uniform versus long-tailed, and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z) - What Should Not Be Contrastive in Contrastive Learning [110.14159883496859]
We introduce a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances.
Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces.
We use a multi-head network with a shared backbone that captures information across each augmentation and alone outperforms all baselines on downstream tasks.
arXiv Detail & Related papers (2020-08-13T03:02:32Z) - Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.