Controllable cardiac synthesis via disentangled anatomy arithmetic
- URL: http://arxiv.org/abs/2107.01748v1
- Date: Sun, 4 Jul 2021 23:13:33 GMT
- Title: Controllable cardiac synthesis via disentangled anatomy arithmetic
- Authors: Spyridon Thermos, Xiao Liu, Alison O'Neil, Sotirios A. Tsaftaris
- Abstract summary: We propose a framework termed "disentangled anatomy arithmetic", in which a generative model learns to combine anatomical factors of different input images and re-entangle them with the desired imaging modality.
Our model generates realistic images, pathology labels, and segmentation masks.
- Score: 15.351113774542839
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Acquiring annotated data at scale with rare diseases or conditions remains a
challenge. It would be extremely useful to have a method that controllably
synthesizes images that can correct such underrepresentation. Assuming a proper
latent representation, the idea of a "latent vector arithmetic" could offer the
means of achieving such synthesis. A proper representation must encode the
fidelity of the input data, preserve invariance and equivariance, and permit
arithmetic operations. Motivated by the ability to disentangle images into
spatial anatomy (tensor) factors and accompanying imaging (vector)
representations, we propose a framework termed "disentangled anatomy
arithmetic", in which a generative model learns to combine anatomical factors
of different input images such that when they are re-entangled with the desired
imaging modality (e.g. MRI), plausible new cardiac images are created with the
target characteristics. To encourage a realistic combination of anatomy factors
after the arithmetic step, we propose a localized noise injection network that
precedes the generator. Our model is used to generate realistic images,
pathology labels, and segmentation masks that are used to augment the existing
datasets and subsequently improve post-hoc classification and segmentation
tasks. Code is publicly available at https://github.com/vios-s/DAA-GAN.
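As a concrete illustration of the anatomy arithmetic described above, the minimal sketch below swaps selected spatial anatomy channels between two subjects and then hands the mixed factors to a generator together with an imaging (modality) vector. The tensor shapes, the channel-swap rule, and the placeholder decoder are illustrative assumptions, not the DAA-GAN implementation, which additionally uses a localized noise injection network before the generator (see the repository linked above).

```python
import numpy as np

def anatomy_arithmetic(s_target, s_donor, donor_channels):
    """Disentangled anatomy arithmetic (illustrative sketch).

    s_target, s_donor : (C, H, W) one-hot spatial anatomy factors of two
                        subjects; channel c encodes one structure
                        (e.g. myocardium, ventricle, pathology).
    donor_channels    : indices of the anatomy channels to transplant from
                        the donor into the target (the "arithmetic" step).
    """
    s_mix = s_target.copy()
    for c in donor_channels:
        # Replace the target's factor with the donor's factor for this channel.
        s_mix[c] = s_donor[c]
    # Keep factors mutually exclusive: pixels claimed by a transplanted channel
    # are cleared from the remaining channels (a simplifying assumption).
    claimed = s_mix[donor_channels].max(axis=0) > 0
    keep = np.ones(s_mix.shape[0], dtype=bool)
    keep[donor_channels] = False
    s_mix[keep] = s_mix[keep] * ~claimed
    return s_mix

def reentangle(decoder, s_mix, z_modality):
    """Re-entangle the mixed anatomy with an imaging (modality) vector z."""
    return decoder(s_mix, z_modality)  # e.g. a trained conditional generator

# Toy usage with random factors and a stand-in "decoder".
C, H, W = 8, 64, 64
s_a = (np.random.rand(C, H, W) > 0.9).astype(np.float32)
s_b = (np.random.rand(C, H, W) > 0.9).astype(np.float32)
z_a = np.random.randn(16).astype(np.float32)

s_mix = anatomy_arithmetic(s_a, s_b, donor_channels=[5])  # transplant channel 5
fake_decoder = lambda s, z: s.sum(axis=0) * z[0]          # placeholder only
x_new = reentangle(fake_decoder, s_mix, z_a)
```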
Related papers
- COIN: Counterfactual inpainting for weakly supervised semantic segmentation for medical images [3.5418498524791766]
This research develops a novel counterfactual inpainting approach (COIN).
COIN flips the predicted classification label from abnormal to normal by using a generative model.
The effectiveness of the method is demonstrated by segmenting synthetic targets and actual kidney tumors from CT images acquired from Tartu University Hospital in Estonia.
arXiv Detail & Related papers (2024-04-19T12:09:49Z)
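The COIN entry above rests on a counterfactual idea: a generative model turns an abnormal image into a plausible normal one, and the region that had to change indicates the lesion. The sketch below illustrates that difference-and-threshold reading of the idea; the generator call, the threshold, and the toy data are illustrative assumptions rather than the published COIN pipeline.

```python
import numpy as np

def counterfactual_segmentation(image, generate_normal, threshold=0.1):
    """Weakly supervised segmentation from a counterfactual (illustrative sketch).

    image           : (H, W) abnormal input with intensities in [0, 1]
    generate_normal : callable producing a counterfactual "normal" version of
                      the image (stand-in for a trained inpainting generator)
    threshold       : minimum absolute intensity change counted as lesion
    """
    counterfactual = generate_normal(image)
    # The lesion is assumed to lie wherever the generator changed the image in
    # order to flip its predicted label from abnormal to normal.
    difference = np.abs(image - counterfactual)
    mask = (difference > threshold).astype(np.uint8)
    return mask, counterfactual

# Toy usage with a synthetic "tumour" and an oracle generator that removes it.
H = W = 128
yy, xx = np.mgrid[0:H, 0:W]
healthy = 0.4 * np.ones((H, W))
tumour = ((yy - 64) ** 2 + (xx - 80) ** 2) < 15 ** 2
abnormal = healthy + 0.5 * tumour

mask, _ = counterfactual_segmentation(abnormal, generate_normal=lambda x: healthy)
print("segmented pixels:", int(mask.sum()), "true pixels:", int(tumour.sum()))
```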
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Anatomy-aware and acquisition-agnostic joint registration with SynthMorph [6.017634371712142]
Affine image registration is a cornerstone of medical image analysis.
Deep-learning (DL) methods learn a function that maps an image pair to an output transform.
Most affine methods are agnostic to the anatomy the user wishes to align, meaning the registration will be inaccurate if algorithms consider all structures in the image.
We address these shortcomings with SynthMorph, a fast, symmetric, diffeomorphic, and easy-to-use DL tool for joint affine-deformable registration of any brain image.
arXiv Detail & Related papers (2023-01-26T18:59:33Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantic-aware, which allows the synthesis of plausible images.
We show that our method is also applicable to text-to-image generation by leveraging image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
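The preceding entry builds on the observation that a classifier's input gradients can drive image synthesis. The sketch below shows the basic activation-maximization form of that idea, i.e. gradient ascent on the input to increase a chosen class logit; the untrained ResNet-18 stand-in, learning rate, and L2 regularizer are illustrative assumptions, and the paper's mask-based, semantic-aware gradient machinery is not reproduced here.

```python
import torch
import torchvision

# Activation maximization: synthesize an image by ascending the gradient of a
# class logit with respect to the input, treating the classifier as a generator.
model = torchvision.models.resnet18(weights=None).eval()  # untrained stand-in
target_class = 207                                        # arbitrary class index

image = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    logits = model(image)
    # Maximize the target logit; a mild L2 term keeps intensities bounded.
    loss = -logits[0, target_class] + 1e-4 * image.norm()
    loss.backward()
    optimizer.step()

synthesized = image.detach().clamp(0, 1)
```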
- A Shared Representation for Photorealistic Driving Simulators [83.5985178314263]
We propose to improve the quality of generated images by rethinking the discriminator architecture.
The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses.
We aim to learn a shared latent representation that encodes enough information to jointly perform semantic segmentation and content reconstruction, along with coarse-to-fine-grained adversarial reasoning.
arXiv Detail & Related papers (2021-12-09T18:59:21Z)
- Mutually improved endoscopic image synthesis and landmark detection in unpaired image-to-image translation [0.9322743017642274]
The CycleGAN framework allows for unsupervised image-to-image translation of unpaired data.
In a scenario of surgical training on a physical surgical simulator, this method can be used to transform endoscopic images of phantoms into images which more closely resemble the intra-operative appearance of the same surgical target structure.
We show that a task defined on sparse landmark labels improves consistency of synthesis by the generator network in both domains.
arXiv Detail & Related papers (2021-07-14T19:09:50Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
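The "Ensembling with Deep Generative Views" entry above investigates averaging a classifier's predictions over GAN-generated variants of a real image. The sketch below shows a simple soft-voting version of that setup; noise jitter stands in for the StyleGAN2 projection-and-resynthesis step, and the classifier, view count, and averaging rule are illustrative assumptions.

```python
import torch

def ensemble_over_views(classifier, image, make_views, n_views=8):
    """Average class probabilities over generative 'views' of one image.

    classifier : callable mapping a (N, C, H, W) batch to logits
    image      : (C, H, W) input image
    make_views : callable producing a perturbed variant of the image; in the
                 paper this role is played by StyleGAN2 re-synthesis after
                 latent projection (not implemented here)
    """
    views = torch.stack([image] + [make_views(image) for _ in range(n_views)])
    with torch.no_grad():
        probs = torch.softmax(classifier(views), dim=1)
    return probs.mean(dim=0)  # soft-voting ensemble over the views

# Toy usage with a linear "classifier" and noise jitter standing in for GAN views.
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(3, 32, 32)
avg_probs = ensemble_over_views(classifier, image, lambda x: x + 0.05 * torch.randn_like(x))
print(avg_probs.argmax().item())
```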
- Uncertainty Quantification using Variational Inference for Biomedical Image Segmentation [0.0]
We use an encoder-decoder architecture based on variational inference techniques for segmenting brain tumour images.
We evaluate our work on the publicly available BRATS dataset using Dice Similarity Coefficient (DSC) and Intersection Over Union (IOU) as the evaluation metrics.
arXiv Detail & Related papers (2020-08-12T20:08:04Z)
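The entry above evaluates with the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). Both have standard definitions for binary masks, implemented in the short snippet below; the epsilon smoothing term is an illustrative choice.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy usage on random masks.
pred = np.random.rand(128, 128) > 0.5
target = np.random.rand(128, 128) > 0.5
print(f"DSC: {dice_coefficient(pred, target):.3f}, IoU: {iou(pred, target):.3f}")
```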
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
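The slice-wise detection entry above scores each slice by a dissimilarity computed in a VAE's latent space. The sketch below uses the distance to the nearest latent code of a healthy reference set; the random-projection encoder stub, the Euclidean distance, and the percentile threshold are illustrative assumptions, not the paper's specific dissimilarity function.

```python
import numpy as np

def slicewise_anomaly_scores(slices, encode, reference_codes):
    """Score each 2D slice by its latent-space distance to healthy references.

    slices          : (S, H, W) stack of MRI slices
    encode          : callable mapping a slice to a latent vector (e.g. the mean
                      produced by a trained VAE encoder; a stub is used below)
    reference_codes : (R, D) latent codes of healthy slices
    """
    scores = []
    for s in slices:
        z = encode(s)
        # Dissimilarity: distance to the closest healthy latent code (one of
        # several reasonable choices; the paper's exact function may differ).
        dists = np.linalg.norm(reference_codes - z, axis=1)
        scores.append(dists.min())
    return np.array(scores)

# Toy usage with a random projection standing in for a trained encoder.
rng = np.random.default_rng(0)
D, H, W = 32, 64, 64
projection = rng.normal(size=(H * W, D))
encode = lambda s: s.reshape(-1) @ projection

healthy = rng.normal(size=(20, H, W))
test_stack = rng.normal(size=(10, H, W))
scores = slicewise_anomaly_scores(test_stack, encode, np.stack([encode(s) for s in healthy]))
suspicious = np.where(scores > np.percentile(scores, 90))[0]  # flag the top 10% of slices
```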
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation method in which the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach makes it possible to train image segmentation models without acquiring expensive annotations.
We test our proposed method on the Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)