Null-sampling for Interpretable and Fair Representations
- URL: http://arxiv.org/abs/2008.05248v1
- Date: Wed, 12 Aug 2020 11:49:01 GMT
- Title: Null-sampling for Interpretable and Fair Representations
- Authors: Thomas Kehrenberg, Myles Bartlett, Oliver Thomas, Novi Quadrianto
- Abstract summary: We learn invariant representations, in the data domain, to achieve interpretability in algorithmic fairness.
By placing the representations into the data domain, the changes made by the model are easily examinable by human auditors.
- Score: 8.654168514863649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose to learn invariant representations, in the data domain, to achieve
interpretability in algorithmic fairness. Invariance implies a selectivity for
high-level, relevant correlations w.r.t. class label annotations, and a
robustness to irrelevant correlations with protected characteristics such as
race or gender. We introduce a non-trivial setup in which the training set
exhibits a strong bias such that class label annotations are irrelevant and
spurious correlations cannot be distinguished. To address this problem, we
introduce an adversarially trained model with a null-sampling procedure to
produce invariant representations in the data domain. To enable
disentanglement, a partially-labelled representative set is used. By placing
the representations into the data domain, the changes made by the model are
easily examinable by human auditors. We show the effectiveness of our method on
both image and tabular datasets: Coloured MNIST, the CelebA and the Adult
dataset.
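The null-sampling idea in the abstract can be sketched in a few lines. Below is a minimal, illustrative sketch: the linear encoder/decoder, dimensions, and partition sizes are assumptions for demonstration only (the paper uses deep, adversarially trained networks). The latent code is split into `[z_s | z_b]`, where `z_b` is meant to absorb the protected attribute; null-sampling zeroes `z_b` before decoding, producing an invariant reconstruction in the data domain that a human auditor can inspect directly.

```python
import numpy as np

rng = np.random.default_rng(0)
x_dim, s_dim, b_dim = 8, 4, 2
W_enc = rng.normal(size=(x_dim, s_dim + b_dim))   # untrained stand-in encoder
W_dec = rng.normal(size=(s_dim + b_dim, x_dim))   # untrained stand-in decoder

def encode(x):
    return x @ W_enc                              # latent code [z_s | z_b]

def null_sample(x):
    z = encode(x)
    z[:, s_dim:] = 0.0                            # zero out ("null") z_b
    return z @ W_dec                              # decode into the data domain

x = rng.normal(size=(5, x_dim))
x_invariant = null_sample(x)                      # same shape as x, auditable
```

Because the output lives in the data domain rather than in an abstract latent space, what the model removed can be checked visually, which is the interpretability argument the abstract makes.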
Related papers
- ALVIN: Active Learning Via INterpolation [44.410677121415695]
Active Learning Via INterpolation (ALVIN) conducts intra-class interpolations between examples from under-represented and well-represented groups.
ALVIN identifies informative examples exposing the model to regions of the representation space that counteract the influence of shortcuts.
Experimental results on six datasets encompassing sentiment analysis, natural language inference, and paraphrase detection demonstrate that ALVIN outperforms state-of-the-art active learning methods.
arXiv Detail & Related papers (2024-10-11T16:44:39Z)
- Efficient Information Extraction in Few-Shot Relation Classification through Contrastive Representation Learning [23.992247765851204]
We introduce a novel approach to enhance information extraction combining multiple sentence representations and contrastive learning.
Our method employs contrastive learning to extract complementary discriminative information from these individual representations.
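The contrastive objective behind such methods can be sketched as a standard InfoNCE loss over paired sentence representations; the function below is a generic illustration, not the authors' exact formulation:

```python
import numpy as np

def info_nce(z1, z2, tau=0.1):
    """InfoNCE loss: (z1[i], z2[i]) are positive pairs; other rows are negatives.

    z1, z2: (n, d) L2-normalised representations of the same n sentences
    tau: temperature controlling how sharply similar pairs are rewarded
    """
    logits = z1 @ z2.T / tau
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())              # -log p(positive)
```

Minimising this loss pulls matching representations together while pushing apart the in-batch negatives, which is how complementary discriminative information is extracted from the individual representations.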
arXiv Detail & Related papers (2024-03-25T08:36:06Z)
- Capturing Perspectives of Crowdsourced Annotators in Subjective Learning Tasks [9.110872603799839]
Supervised classification heavily depends on datasets annotated by humans.
In subjective tasks such as toxicity classification, these annotations often exhibit low agreement among raters.
In this work, we propose Annotator Aware Representations for Texts (AART) for subjective classification tasks.
arXiv Detail & Related papers (2023-11-16T10:18:32Z)
- Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information about the sensitive attribute.
We also use a bias-free model to learn debiased fair representations, applying adversarial learning to remove bias information from them.
arXiv Detail & Related papers (2022-04-01T15:57:47Z)
- Active Learning by Feature Mixing [52.16150629234465]
We propose a novel method for batch active learning called ALFA-Mix.
We identify unlabelled instances with sufficiently-distinct features by seeking inconsistencies in predictions.
We show that these inconsistencies help discover features in the unlabelled instances that the model is unable to recognise.
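The feature-mixing test can be sketched as follows; the linear probe, the per-class anchors, and the mixing ratio are illustrative assumptions, not the authors' implementation. Each unlabelled feature vector is interpolated with labelled anchor features, and instances whose predicted label flips under the mix are flagged as informative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 6, 3
W = rng.normal(size=(d, k))              # stand-in linear classifier
anchors = rng.normal(size=(k, d))        # one labelled anchor per class
unlabelled = rng.normal(size=(20, d))    # pool of unlabelled features
alpha = 0.2                              # mixing ratio

def predict(feats):
    return (feats @ W).argmax(axis=1)

base = predict(unlabelled)
inconsistent = np.zeros(len(unlabelled), dtype=bool)
for a in anchors:
    mixed = (1 - alpha) * unlabelled + alpha * a   # feature interpolation
    inconsistent |= predict(mixed) != base         # did the prediction flip?
# `inconsistent` marks instances worth querying for labels.
```

The intuition is that a small nudge toward a labelled anchor should not change the prediction for an instance whose features the model already understands; when it does, labelling that instance is likely to be informative.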
arXiv Detail & Related papers (2022-03-14T12:20:54Z)
- Information Symmetry Matters: A Modal-Alternating Propagation Network for Few-Shot Learning [118.45388912229494]
We propose a Modal-Alternating Propagation Network (MAP-Net) to supplement the absent semantic information of unlabeled samples.
We design a Relation Guidance (RG) strategy to guide the visual relation vectors via semantics so that the propagated information is more beneficial.
Our proposed method achieves promising performance and outperforms the state-of-the-art approaches.
arXiv Detail & Related papers (2021-09-03T03:43:53Z)
- Neighborhood Contrastive Learning for Novel Class Discovery [79.14767688903028]
We build a new framework, named Neighborhood Contrastive Learning, to learn discriminative representations that are important to clustering performance.
We experimentally demonstrate that these two ingredients significantly contribute to clustering performance and lead our model to outperform state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-06-20T17:34:55Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
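Credibility-weighted pseudo-label training can be sketched in one function; the weighting scheme below (using the model's own confidence in the assigned label as credibility) is an illustrative assumption, not the paper's estimator:

```python
import numpy as np

def weighted_pseudo_label_loss(probs, pseudo_labels):
    """Cross-entropy on pseudo-labels, down-weighted by label credibility.

    probs: (n, k) model class probabilities
    pseudo_labels: (n,) assigned (possibly noisy) labels
    """
    n = len(pseudo_labels)
    p_assigned = probs[np.arange(n), pseudo_labels]
    credibility = p_assigned                        # in [0, 1]; low => noisy
    nll = -np.log(np.clip(p_assigned, 1e-12, None))
    return float((credibility * nll).mean())        # noisy labels count less
```

Samples with low-credibility pseudo-labels contribute little to the loss, which is the mechanism by which noisy labels are prevented from dominating the optimization.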
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
- Does the dataset meet your expectations? Explaining sample representation in image data [0.0]
A neural network model is affected adversely by a lack of diversity in training data.
We present a method that identifies and explains such deficiencies.
We then apply the method to examine a dataset of geometric shapes.
arXiv Detail & Related papers (2020-12-06T18:16:28Z)
- Null It Out: Guarding Protected Attributes by Iterative Nullspace Projection [51.041763676948705]
Iterative Nullspace Projection (INLP) is a novel method for removing information from neural representations.
We show that our method is able to mitigate bias in word embeddings, as well as to increase fairness in a setting of multi-class classification.
arXiv Detail & Related papers (2020-04-16T14:02:50Z)
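The INLP loop can be sketched in a few lines of numpy; here a least-squares probe stands in for the trained linear classifiers the paper uses. Each iteration fits a linear predictor of the protected attribute `z` and projects the representations onto that predictor's nullspace, so the removed direction carries no more linear information:

```python
import numpy as np

def inlp(X, z, n_iters=3):
    """Iteratively project representations X onto the nullspace of linear
    probes trained to predict the protected attribute z."""
    X = X.copy().astype(float)
    for _ in range(n_iters):
        # least-squares probe for z (a stand-in for a trained classifier)
        w, *_ = np.linalg.lstsq(X, z.astype(float), rcond=None)
        w = w.reshape(1, -1)
        if np.linalg.norm(w) < 1e-8:
            break                                   # nothing left to remove
        # P = I - w^+ w projects out the probe's direction
        P = np.eye(X.shape[1]) - np.linalg.pinv(w) @ w
        X = X @ P
    return X
```

Each projection reduces the rank of `X` by at most one, so after a few iterations a fresh linear probe can no longer recover the protected attribute, which is the guarding property the title refers to.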
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.