ICAM: Interpretable Classification via Disentangled Representations and
Feature Attribution Mapping
- URL: http://arxiv.org/abs/2006.08287v2
- Date: Tue, 16 Jun 2020 11:40:31 GMT
- Title: ICAM: Interpretable Classification via Disentangled Representations and
Feature Attribution Mapping
- Authors: Cher Bass, Mariana da Silva, Carole Sudre, Petru-Daniel Tudosiu,
Stephen M. Smith, Emma C. Robinson
- Abstract summary: We present a novel framework for creating class-specific FA maps through image-to-image translation.
We validate our method on 2D and 3D brain image datasets of dementia, ageing, and (simulated) lesion detection.
Our approach is the first to use latent space sampling to support exploration of phenotype variation.
- Score: 3.262230127283453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature attribution (FA), or the assignment of class-relevance to different
locations in an image, is important for many classification problems but is
particularly crucial within the neuroscience domain, where accurate mechanistic
models of behaviours, or disease, require knowledge of all features
discriminative of a trait. At the same time, predicting class relevance from
brain images is challenging as phenotypes are typically heterogeneous, and
changes occur against a background of significant natural variation. Here, we
present a novel framework for creating class-specific FA maps through
image-to-image translation. We propose the use of a VAE-GAN to explicitly
disentangle class relevance from background features for improved
interpretability, which results in meaningful FA maps. We validate
our method on 2D and 3D brain image datasets of dementia (ADNI dataset), ageing
(UK Biobank), and (simulated) lesion detection. We show that FA maps generated
by our method outperform baseline FA methods when validated against ground
truth. More significantly, our approach is the first to use latent space
sampling to support exploration of phenotype variation. Our code will be
available online at https://github.com/CherBass/ICAM.
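The translation-then-difference idea from the abstract can be sketched in a few lines of NumPy. Everything below (`encode`, `decode`, the scalar class code) is a hypothetical toy stand-in for the paper's VAE-GAN, chosen only to make the mechanics concrete, not a reproduction of the actual architecture:

```python
import numpy as np

def encode(image):
    # Toy stand-in for a disentangling encoder: split the image into a
    # scalar "class-relevant" code and a residual "background" map.
    # (The real model learns both codes with convolutional networks.)
    class_code = image.mean()
    background = image - class_code
    return background, class_code

def decode(background, class_code):
    # Toy decoder: recombine background anatomy with a class code.
    return background + class_code

def feature_attribution_map(image, target_class_code):
    # Translate the image to the target class by swapping only the
    # class-relevant code, then take the voxelwise difference between
    # the translated image and the input; that difference is the FA map.
    background, _ = encode(image)
    translated = decode(background, target_class_code)
    return translated - image
```

Sampling several `target_class_code` values (e.g. from a Gaussian prior) and inspecting the resulting FA maps mirrors the latent-space sampling the paper uses to explore phenotype variation; in ICAM itself the encoder and decoder are 2D/3D convolutional networks trained adversarially, not the toy functions above.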
Related papers
- Generalizing to Unseen Domains in Diabetic Retinopathy with Disentangled Representations [32.7667209371645]
Existing models experience notable performance degradation on unseen domains due to domain shifts.
We propose a novel framework where representations of paired data from different domains are decoupled into semantic features and domain noise.
The resulting augmented representation comprises original retinal semantics and domain noise from other domains, aiming to generate enhanced representations aligned with real-world clinical needs.
arXiv Detail & Related papers (2024-06-10T15:43:56Z)
- Source-Free Domain Adaptation of Weakly-Supervised Object Localization Models for Histology [8.984366988153116]
Deep weakly supervised object localization (WSOL) models can be trained to classify histology images according to cancer grade.
A WSOL model initially trained on some labeled source image data can be adapted using unlabeled target data.
In this paper, we focus on source-free (unsupervised) domain adaptation (SFDA), a challenging problem where a pre-trained source model is adapted to a new target domain.
arXiv Detail & Related papers (2024-04-29T21:25:59Z)
- Enhancing AI Diagnostics: Autonomous Lesion Masking via Semi-Supervised Deep Learning [1.4053129774629076]
This study presents an unsupervised domain adaptation method aimed at autonomously generating image masks outlining regions of interest (ROIs) for differentiating breast lesions in breast ultrasound (US) imaging.
Our semi-supervised learning approach utilizes a primitive model trained on a small public breast US dataset with true annotations.
This model is then iteratively refined for the domain adaptation task, generating pseudo-masks for our private, unannotated breast US dataset.
arXiv Detail & Related papers (2024-04-18T18:25:00Z)
- PhenDiff: Revealing Subtle Phenotypes with Diffusion Models in Real Images [0.7329200485567825]
PhenDiff identifies shifts in cellular phenotypes by translating a real image from one condition to another.
We qualitatively and quantitatively validate this method on cases where the phenotypic changes are visible or invisible, such as in low concentrations of drug treatments.
arXiv Detail & Related papers (2023-12-13T17:06:33Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated downstream-task attributes.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Few-Shot Meta Learning for Recognizing Facial Phenotypes of Genetic Disorders [55.41644538483948]
Automated classification and similarity retrieval aid physicians in decision-making to diagnose possible genetic conditions as early as possible.
Previous work has addressed the problem as a classification problem and used deep learning methods.
In this study, we used a facial recognition model trained on a large corpus of healthy individuals as a pre-task and transferred it to facial phenotype recognition.
arXiv Detail & Related papers (2022-10-23T11:52:57Z)
- Domain Invariant Model with Graph Convolutional Network for Mammogram Classification [49.691629817104925]
We propose a novel framework, namely the Domain Invariant Model with Graph Convolutional Network (DIM-GCN).
We first propose a Bayesian network, which explicitly decomposes the latent variables into disease-related and disease-irrelevant parts that are provably disentangled from each other.
To better capture the macroscopic features, we leverage the observed clinical attributes as a reconstruction target via a Graph Convolutional Network (GCN).
arXiv Detail & Related papers (2022-04-21T08:23:44Z)
- Self-Supervised Vision Transformers Learn Visual Concepts in Histopathology [5.164102666113966]
We conduct a search for good representations in pathology by training a variety of self-supervised models with validation on a variety of weakly-supervised and patch-level tasks.
Our key finding is in discovering that Vision Transformers using DINO-based knowledge distillation are able to learn data-efficient and interpretable features in histology images.
arXiv Detail & Related papers (2022-03-01T16:14:41Z)
- A-FMI: Learning Attributions from Deep Networks via Feature Map Importance [58.708607977437794]
Gradient-based attribution methods can aid in the understanding of convolutional neural networks (CNNs).
The redundancy of attribution features and the gradient saturation problem are challenges that attribution methods still face.
We propose a new concept, feature map importance (FMI), to refine the contribution of each feature map, and a novel attribution method via FMI, to address the gradient saturation problem.
arXiv Detail & Related papers (2021-04-12T14:54:44Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.