Projective Latent Interventions for Understanding and Fine-tuning Classifiers
- URL: http://arxiv.org/abs/2006.12902v2
- Date: Tue, 25 Aug 2020 22:10:22 GMT
- Title: Projective Latent Interventions for Understanding and Fine-tuning Classifiers
- Authors: Andreas Hinterreiter and Marc Streit and Bernhard Kainz
- Abstract summary: We present Projective Latent Interventions (PLIs), a technique for retraining classifiers by back-propagating manual changes made to low-dimensional embeddings of the latent space.
PLIs allow domain experts to control the latent decision space in an intuitive way in order to better match their expectations.
We evaluate our technique on a real-world scenario in fetal ultrasound imaging.
- Score: 5.539383380453129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-dimensional latent representations learned by neural network classifiers
are notoriously hard to interpret. Especially in medical applications, model
developers and domain experts desire a better understanding of how these latent
representations relate to the resulting classification performance. We present
Projective Latent Interventions (PLIs), a technique for retraining classifiers
by back-propagating manual changes made to low-dimensional embeddings of the
latent space. The back-propagation is based on parametric approximations of
t-distributed stochastic neighbourhood embeddings. PLIs allow domain experts to
control the latent decision space in an intuitive way in order to better match
their expectations. For instance, the performance for specific pairs of classes
can be enhanced by manually separating the class clusters in the embedding. We
evaluate our technique on a real-world scenario in fetal ultrasound imaging.
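To make the mechanism concrete, the following is a minimal PyTorch sketch of one PLI fine-tuning step, assuming a classifier split into a latent encoder and a prediction head, with a small fixed network standing in for the pre-trained parametric t-SNE approximation. All names here (Classifier, embedder, pli_step, moved_targets, alpha) are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a Projective Latent Intervention (PLI) fine-tuning step:
# manual edits to the 2-D embedding are back-propagated into the classifier
# through a fixed parametric projection of the latent space.
import torch
import torch.nn as nn

class Classifier(nn.Module):
    """Toy classifier split into a latent encoder and a prediction head."""
    def __init__(self, in_dim=64, latent_dim=16, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.head = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), z

# Stand-in for a parametric t-SNE: a small network assumed to be pre-trained
# so that its 2-D output approximates a t-SNE layout of the latent space.
# It stays fixed here; gradients merely flow through it into the encoder.
embedder = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))

model = Classifier()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def pli_step(x, y, moved_targets, alpha=1.0):
    """One fine-tuning step back-propagating manual 2-D edits.

    moved_targets: (batch, 2) embedding positions after a domain expert has
    dragged points apart (e.g. to separate two confused class clusters).
    """
    logits, z = model(x)
    emb = embedder(z)                      # current 2-D positions
    loss = ce(logits, y) + alpha * ((emb - moved_targets) ** 2).mean()
    optim.zero_grad()
    loss.backward()                        # gradients flow through the projection
    optim.step()
    return loss.item()
```

In an interactive setting, moved_targets would come from the expert editing the 2-D scatter plot; the combined loss maintains classification accuracy while pulling the embedded latents toward the edited layout.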
Related papers
- Manifold Contrastive Learning with Variational Lie Group Operators [5.0741409008225755]
We propose a contrastive learning approach that directly models the latent manifold using Lie group operators parameterized by coefficients with a sparsity-promoting prior.
A variational distribution over these coefficients provides a generative model of the manifold, with samples that provide feature augmentations applicable both during contrastive training and in downstream tasks.
arXiv Detail & Related papers (2023-06-23T15:07:01Z)
- TAX: Tendency-and-Assignment Explainer for Semantic Segmentation with Multi-Annotators [31.36818611460614]
Tendency-and-Assignment Explainer (TAX) is designed to offer interpretability at the annotator and assignment levels.
We show that our TAX can be applied to state-of-the-art network architectures with comparable performance.
arXiv Detail & Related papers (2023-02-19T12:40:22Z)
- Learning disentangled representations for explainable chest X-ray classification using Dirichlet VAEs [68.73427163074015]
This study explores the use of the Dirichlet Variational Autoencoder (DirVAE) for learning disentangled latent representations of chest X-ray (CXR) images.
The predictive capacity of multi-modal latent representations learned by DirVAE models is investigated through implementation of an auxiliary multi-label classification task.
arXiv Detail & Related papers (2023-02-06T18:10:08Z)
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Improving Deep Learning Interpretability by Saliency Guided Training [36.782919916001624]
Saliency methods have been widely used to highlight important input features in model predictions.
Most existing methods use backpropagation on a modified gradient function to generate saliency maps.
We introduce a saliency-guided training procedure for neural networks that reduces the noisy gradients used in predictions; a minimal sketch of one such training step appears after this list.
arXiv Detail & Related papers (2021-11-29T06:05:23Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic and stochastic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing music performance assessment (MPA) systems.
We introduce a weighted contrastive loss suitable for regression tasks, applied to a convolutional neural network.
Our results show that contrastive-based methods are able to match and exceed state-of-the-art performance on MPA regression tasks.
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GANs) for sample acquisition.
In this paper, we propose a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies between them.
arXiv Detail & Related papers (2021-07-23T06:57:08Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained to synthesize images.
In this work, we examine the internal representations learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery that directly decomposes the pre-trained weights; a minimal numerical sketch appears after this list.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
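For the saliency-guided training entry above, the following is a minimal PyTorch sketch of one training step, assuming the procedure masks the least-salient input features and penalizes divergence between predictions on the clean and masked inputs. The toy model and the hyperparameters k and beta are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of one saliency-guided training step: compute input-gradient
# saliency, mask the k least-salient features, and train the model to keep
# its predictions stable under that masking.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

def saliency_guided_step(x, y, k=5, beta=1.0):
    # Saliency: gradient magnitude of the loss w.r.t. each input feature.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    saliency = grad.abs()

    # Zero out the k least-salient features of each sample.
    idx = saliency.topk(k, dim=1, largest=False).indices
    x_masked = x.detach().clone()
    x_masked.scatter_(1, idx, 0.0)

    # Classification loss on the clean input plus a KL penalty that keeps
    # predictions on the masked input close to those on the clean input.
    logits = model(x.detach())
    logits_masked = model(x_masked)
    total = F.cross_entropy(logits, y) + beta * F.kl_div(
        F.log_softmax(logits_masked, dim=1), F.softmax(logits, dim=1),
        reduction="batchmean")
    optim.zero_grad()
    total.backward()
    optim.step()
    return total.item()
```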
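For the closed-form factorization entry above, this NumPy sketch illustrates the underlying linear algebra, assuming the first generator layer is an affine map z -> Az + b, so that directions maximizing the output change are the top eigenvectors of A^T A. The weight matrix here is random purely for illustration, and all variable names are assumptions.

```python
# Sketch of closed-form latent semantic factorization: candidate semantic
# directions are the top eigenvectors of A^T A for a pre-trained weight A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 128))   # stand-in for a pre-trained weight

# Directions n maximizing ||A n||^2 subject to ||n|| = 1 are the
# eigenvectors of A^T A with the largest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
order = np.argsort(eigvals)[::-1]     # sort by decreasing eigenvalue
directions = eigvecs[:, order[:5]]    # top-5 candidate semantic directions

# Editing a latent code along a discovered direction:
z = rng.standard_normal(128)
z_edited = z + 3.0 * directions[:, 0]
```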
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.