Projection-wise Disentangling for Fair and Interpretable Representation
Learning: Application to 3D Facial Shape Analysis
- URL: http://arxiv.org/abs/2106.13734v2
- Date: Mon, 28 Jun 2021 19:24:07 GMT
- Title: Projection-wise Disentangling for Fair and Interpretable Representation
Learning: Application to 3D Facial Shape Analysis
- Authors: Xianjing Liu, Bo Li, Esther Bron, Wiro Niessen, Eppo Wolvius and
Gennady Roshchupkin
- Abstract summary: Confounding bias is a crucial problem when applying machine learning in practice, especially in clinical practice.
We consider the problem of learning representations independent of multiple biases.
We propose to mitigate bias while keeping almost all information in the latent representations, which enables us to observe and interpret them as well.
- Score: 4.716274324450199
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Confounding bias is a crucial problem when applying machine learning
in practice, especially in clinical practice. We consider the problem of
learning representations independent of multiple biases. In the literature,
this is mostly solved by purging the bias information from learned
representations. We, however, expect this strategy to harm the diversity of
information in the representation and thus limit its prospective usage (e.g.,
interpretation).
Therefore, we propose to mitigate the bias while keeping almost all information
in the latent representations, which enables us to observe and interpret them
as well. To achieve this, we project latent features onto a learned vector
direction, and enforce the independence between biases and projected features
rather than all learned features. To interpret the mapping between projected
features and input data, we propose projection-wise disentangling: a sampling
and reconstruction along the learned vector direction. The proposed method was
evaluated on the analysis of 3D facial shape and patient characteristics
(N=5011). Experiments showed that this conceptually simple method achieved
state-of-the-art fair prediction performance and interpretability, showing its
great potential for clinical applications.
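The abstract describes the mechanism only at a high level, so the following is a minimal PyTorch sketch of the two ingredients it names: projecting latent features onto a learned direction while penalizing dependence between that projection and each bias, and a traversal along the direction for projection-wise reconstruction. This is not the authors' code; all names are hypothetical, and a simple squared-correlation penalty stands in for whatever independence measure the paper actually uses.

```python
import torch
import torch.nn as nn

class ProjectionHead(nn.Module):
    """Projects latent features onto a single learned direction.
    Hypothetical sketch of the idea in the abstract, not the authors' code."""

    def __init__(self, latent_dim: int):
        super().__init__()
        self.direction = nn.Parameter(torch.randn(latent_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so only the direction matters, not its scale.
        w = self.direction / self.direction.norm()
        return z @ w  # one projected feature per sample, shape (B,)

def dependence_penalty(projected: torch.Tensor, bias: torch.Tensor) -> torch.Tensor:
    """Squared Pearson correlation between the projected feature and one bias
    variable -- a simple stand-in for the paper's independence constraint."""
    p = projected - projected.mean()
    b = bias - bias.mean()
    return ((p * b).sum() / (p.norm() * b.norm() + 1e-8)) ** 2

def traverse(decoder, z_ref: torch.Tensor, w: torch.Tensor,
             steps=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Projection-wise disentangling: vary only the projected coordinate of a
    reference latent code and reconstruct, to see what the direction encodes."""
    w = w / w.norm()
    base = z_ref - (z_ref @ w) * w  # remove the component along the direction
    return [decoder(base + t * w) for t in steps]

# Toy usage: enforce independence from several biases at once.
z = torch.randn(32, 64)                      # latent codes from some encoder
biases = [torch.randn(32), torch.randn(32)]  # e.g., age and sex, standardized
head = ProjectionHead(latent_dim=64)
fairness_loss = sum(dependence_penalty(head(z), b) for b in biases)

decoder = nn.Linear(64, 128)                 # placeholder decoder
samples = traverse(decoder, z[0], head.direction.detach())
```

In this reading, the encoder keeps its full latent code (preserving the diversity of information), and only the single projected coordinate is constrained to be independent of the biases; the traversal then visualizes what that coordinate encodes.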
Related papers
- Supervised Contrastive Learning for Affect Modelling [2.570570340104555]
We introduce three different supervised contrastive learning approaches for training representations that consider affect information.
Results demonstrate the representation capacity of contrastive learning and its efficiency in boosting the accuracy of affect models.
arXiv Detail & Related papers (2022-08-25T17:40:19Z)
- Fair Interpretable Representation Learning with Correction Vectors [60.0806628713968]
We propose a new framework for fair representation learning that is centered around the learning of "correction vectors".
We show experimentally that several fair representation learning models constrained in such a way do not exhibit losses in ranking or classification performance.
arXiv Detail & Related papers (2022-02-07T11:19:23Z)
- Fair Interpretable Learning via Correction Vectors [68.29997072804537]
We propose a new framework for fair representation learning centered around the learning of "correction vectors".
The corrections are then simply added to the original features, and can therefore be analyzed as an explicit penalty or bonus to each feature (a minimal sketch of this idea appears after this list).
We show experimentally that a fair representation learning problem constrained in such a way does not impact performance.
arXiv Detail & Related papers (2022-01-17T10:59:33Z)
- Desiderata for Representation Learning: A Causal Perspective [104.3711759578494]
We take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) and disentanglement (in unsupervised representation learning).
This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest and learn non-spurious and disentangled representations from single observational datasets.
arXiv Detail & Related papers (2021-09-08T17:33:54Z)
- Visual Recognition with Deep Learning from Biased Image Datasets [6.10183951877597]
We show how biasing models can be applied to remedy problems in the context of visual recognition.
Based on the (approximate) knowledge of the biasing mechanisms at work, our approach consists in reweighting the observations.
We propose to use a low dimensional image representation, shared across the image databases.
arXiv Detail & Related papers (2021-09-06T10:56:58Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Interpretable Representations in Explainable AI: From Theory to Practice [7.031336702345381]
Interpretable representations are the backbone of many explainers that target black-box predictive systems.
We study properties of interpretable representations that encode presence and absence of human-comprehensible concepts.
arXiv Detail & Related papers (2020-08-16T21:44:03Z)
- Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
- Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information via an entropy-based constraint.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
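The two correction-vector papers in the list above describe a mechanism concrete enough to sketch: a network predicts a per-feature correction that is added to the original features, so each correction can be read as an explicit penalty or bonus on that feature. Below is a minimal, hypothetical PyTorch sketch; the cited papers' actual architectures and fairness constraints are not reproduced here.

```python
import torch
import torch.nn as nn

class CorrectionVectorModel(nn.Module):
    """Hypothetical sketch of the correction-vector idea: learn additive,
    per-feature corrections that remain inspectable. Not the cited papers' code."""

    def __init__(self, num_features: int, num_classes: int):
        super().__init__()
        self.correction_net = nn.Sequential(
            nn.Linear(num_features, num_features),
            nn.Tanh(),
            nn.Linear(num_features, num_features),
        )
        self.classifier = nn.Linear(num_features, num_classes)

    def forward(self, x: torch.Tensor):
        correction = self.correction_net(x)  # one offset per input feature
        x_fair = x + correction              # corrections are summed onto the features
        return self.classifier(x_fair), correction

model = CorrectionVectorModel(num_features=10, num_classes=2)
logits, correction = model(torch.randn(4, 10))
# Inspect `correction` directly: positive entries boost a feature, negative
# entries penalize it, which is what makes the representation interpretable.
```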