GLOWin: A Flow-based Invertible Generative Framework for Learning
Disentangled Feature Representations in Medical Images
- URL: http://arxiv.org/abs/2103.10868v1
- Date: Fri, 19 Mar 2021 15:47:01 GMT
- Title: GLOWin: A Flow-based Invertible Generative Framework for Learning
Disentangled Feature Representations in Medical Images
- Authors: Aadhithya Sankar, Matthias Keicher, Rami Eisawy, Abhijeet Parida,
Franz Pfister, Seong Tae Kim, Nassir Navab
- Abstract summary: Flow-based generative models have been proposed to generate realistic images by directly modeling the data distribution with invertible functions.
We propose a new flow-based generative model framework, named GLOWin, that is end-to-end invertible and able to learn disentangled representations.
- Score: 40.58581577183134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentangled representations can be useful in many downstream tasks, help to
make deep learning models more interpretable, and allow control over the
features of synthetically generated images, which is useful for training other
models that require large amounts of labelled or unlabelled data. Recently,
flow-based generative models have been proposed to generate realistic images by
directly modeling the data distribution with invertible functions. In this
work, we propose a new flow-based generative model framework, named GLOWin,
that is end-to-end invertible and able to learn disentangled representations.
Feature disentanglement is achieved by factorizing the latent space into
components such that each component learns the representation for one
generative factor. Comprehensive experiments have been conducted to evaluate
the proposed method on a public brain tumor MR dataset. Quantitative and
qualitative results suggest that the proposed method is effective in
disentangling the features from complex medical images.
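The abstract names the two ingredients but not the implementation, so the following is a minimal PyTorch sketch of the idea only: an invertible affine coupling step with a tractable log-determinant (the standard Glow building block), a standard-normal prior giving the change-of-variables likelihood, and a latent vector split into per-factor blocks. The class name, factor block sizes, and hidden width are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One Glow-style affine coupling layer: invertible, with a cheap log-det."""

    def __init__(self, dim, hidden=128):
        super().__init__()
        # Predict a log-scale and a shift for the second half from the first half.
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                 # keep scales well-conditioned
        y2 = x2 * torch.exp(log_s) + t
        return torch.cat([x1, y2], dim=1), log_s.sum(dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=1)

flow = AffineCoupling(dim=8)                      # stand-in for a stack of layers
x = torch.randn(4, 8)                             # stand-in for flattened images

z, log_det = flow(x)

# Change of variables: log p(x) = log N(z; 0, I) + log|det dz/dx|.
prior = torch.distributions.Normal(0.0, 1.0)
log_px = prior.log_prob(z).sum(dim=1) + log_det

# Factorized latent: one block per generative factor (sizes are hypothetical).
z_factor_a, z_factor_b = torch.split(z, [4, 4], dim=1)

x_rec = flow.inverse(z)                           # exact end-to-end inversion
assert torch.allclose(x, x_rec, atol=1e-5)
```

In the paper's setting each latent block would be tied to one generative factor of the brain MR data; here the split merely shows the mechanics of factorizing an invertible model's latent space.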
Related papers
- Mask-guided cross-image attention for zero-shot in-silico histopathologic image generation with a diffusion model [0.10910416614141322]
Diffusion models are the state-of-the-art solution for generating in-silico images.
Appearance transfer diffusion models are designed for natural images.
In computational pathology, specifically in oncology, it is not straightforward to define which objects in an image should be classified as foreground and background.
We contribute to the applicability of appearance transfer models to diffusion-stained images by modifying the appearance transfer guidance to alternate between class-specific AdaIN feature statistics matchings.
arXiv Detail & Related papers (2024-07-16T12:36:26Z)
- Flow Factorized Representation Learning [109.51947536586677]
We introduce a generative model which specifies a distinct set of latent probability paths that define different input transformations.
We show that our model achieves higher likelihoods on standard representation learning benchmarks while simultaneously being closer to approximately equivariant models.
arXiv Detail & Related papers (2023-09-22T20:15:37Z)
- R-Cut: Enhancing Explainability in Vision Transformers with Relationship Weighted Out and Cut [14.382326829600283]
We introduce two modules: the "Relationship Weighted Out" and the "Cut" modules.
The "Cut" module performs fine-grained feature decomposition, taking into account factors such as position, texture, and color.
We validate our method with extensive qualitative and quantitative experiments on the ImageNet dataset.
arXiv Detail & Related papers (2023-07-18T08:03:51Z)
- Denoising Diffusion Probabilistic Models for Generation of Realistic Fully-Annotated Microscopy Image Data Sets [1.07539359851877]
In this study, we demonstrate that diffusion models can effectively generate fully-annotated microscopy image data sets.
The proposed pipeline helps to reduce the reliance on manual annotations when training deep learning-based segmentation approaches.
arXiv Detail & Related papers (2023-01-02T14:17:08Z)
- Learning stochastic object models from medical imaging measurements by use of advanced AmbientGANs [7.987904193401004]
Deep generative neural networks, such as generative adversarial networks (GANs), hold potential for such tasks.
In this work, a modified AmbientGAN training strategy is proposed that is suitable for modern progressive or multi-resolution training approaches.
arXiv Detail & Related papers (2021-06-27T21:46:23Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Flow-based Generative Models for Learning Manifold to Manifold Mappings [39.60406116984869]
We introduce three kinds of invertible layers for manifold-valued data, analogous to the standard layers used in flow-based generative models.
We show promising results, reliably and accurately reconstructing brain images represented as fields of orientation distribution functions.
arXiv Detail & Related papers (2020-12-18T02:19:18Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- High-Fidelity Synthesis with Disentangled Representation [60.19657080953252]
We propose an Information-Distillation Generative Adversarial Network (ID-GAN) for disentanglement learning and high-fidelity synthesis.
Our method learns a disentangled representation using VAE-based models and distills the learned representation, together with an additional nuisance variable, into a separate GAN-based generator for high-fidelity synthesis.
Despite its simplicity, the proposed method is highly effective, achieving image generation quality comparable to state-of-the-art methods while using the disentangled representation.
arXiv Detail & Related papers (2020-01-13T14:39:40Z)
- Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
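The FlowGMM entry above names the mechanism but not the code: the flow maps data to a latent space modeled as a class-conditional Gaussian mixture, where labeled points maximize the likelihood of their own component and unlabeled points maximize the marginal mixture. A minimal sketch of that latent term, assuming equally weighted, unit-variance components (one per class); all names and shapes are illustrative:

```python
import torch

def gmm_log_prob(z, means, label=None):
    """Log-density of latents under an equally weighted, unit-variance GMM.

    means: (K, D) tensor, one component mean per class.
    label: optional (N,) long tensor; if given, score only the true component.
    """
    d2 = torch.cdist(z, means) ** 2                  # (N, K) squared distances
    log_norm = 0.5 * z.shape[1] * torch.log(torch.tensor(2 * torch.pi))
    log_comp = -0.5 * d2 - log_norm                  # log N(z | mu_k, I)
    if label is not None:                            # labeled data
        return log_comp.gather(1, label[:, None]).squeeze(1)
    K = means.shape[0]                               # unlabeled: marginalize
    return torch.logsumexp(log_comp, dim=1) - torch.log(torch.tensor(float(K)))

means = 4.0 * torch.randn(3, 16)      # 3 classes, 16-dim latent (illustrative)
z = torch.randn(8, 16)                # latents produced by the flow
y = torch.randint(0, 3, (8,))         # labels for the "labeled" batch

# The full FlowGMM objective would also add the flow's log|det J| term.
loss = -(gmm_log_prob(z, means, y).mean() + gmm_log_prob(z, means).mean())
```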
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.