Translational Lung Imaging Analysis Through Disentangled Representations
- URL: http://arxiv.org/abs/2203.01668v1
- Date: Thu, 3 Mar 2022 11:56:20 GMT
- Title: Translational Lung Imaging Analysis Through Disentangled Representations
- Authors: Pedro M. Gordaliza, Juan José Vaquero, Arrate Muñoz-Barrutia
- Abstract summary: We present a model capable of extracting disentangled information from images of different animal models and the mechanisms that generate the images.
It is optimized on images of lungs infected with tuberculosis and can, from an input slice, infer its position in the volume, the animal model it belongs to, and the damage present, and moreover generate a mask covering the whole lung.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of new treatments often requires clinical trials with
translational animal models using (pre)-clinical imaging to characterize
inter-species pathological processes. Deep Learning (DL) models are commonly
used to automate the retrieval of relevant information from the images. Nevertheless,
they typically suffer from low generalizability and explainability as a product of
their entangled design, resulting in a specific DL model per animal model.
Consequently, it is not possible to take advantage of the high capacity of DL
to discover statistical relationships from inter-species images.
To alleviate this problem, in this work, we present a model capable of
extracting disentangled information from images of different animal models and
the mechanisms that generate the images. Our method is located at the
intersection between deep generative models, disentanglement and causal
representation learning. It is optimized on images of lungs infected with
tuberculosis and is able to: a) from an input slice, infer its position in the
volume, the animal model it belongs to, and the damage present, and moreover
generate a mask covering the whole lung (with overlap measures similar to those
of the nnU-Net); b) generate realistic lung images by setting the above
variables; and c) generate counterfactual images, namely, healthy versions
of a damaged input slice.
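The counterfactual idea in c) can be sketched in a few lines. The toy decoder below is a purely hypothetical stand-in for the paper's deep generative model: it maps disentangled factors (slice position, animal model, damage) to a synthetic 1-D "slice", and a counterfactual is obtained by re-running generation with the damage variable intervened on.

```python
# Toy illustration of counterfactual generation with disentangled factors.
# decode() is a stand-in for a trained generative model, NOT the paper's
# actual architecture.

def decode(position, animal_model, damage, width=8):
    """Generate a toy intensity profile from disentangled factors."""
    base = [animal_model * 10 + position for _ in range(width)]
    # Damage adds a localized hyper-intensity (a crude lesion model).
    lesion_center = width // 2
    return [
        v + damage * max(0, 3 - abs(i - lesion_center))
        for i, v in enumerate(base)
    ]

def counterfactual(position, animal_model, damage):
    """Healthy version of a slice: intervene by setting damage to 0."""
    return decode(position, animal_model, damage=0)

diseased = decode(position=2, animal_model=1, damage=5)
healthy = counterfactual(position=2, animal_model=1, damage=5)
assert healthy == [12] * 8  # the intervention removes the lesion
```

Because the factors are disentangled, intervening on `damage` leaves position and animal model untouched, which is exactly what makes the healthy counterpart comparable to the diseased input.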
Related papers
- Deformation-Recovery Diffusion Model (DRDM): Instance Deformation for Image Manipulation and Synthesis [13.629617915974531]
Deformation-Recovery Diffusion Model (DRDM) is a diffusion-based generative model built on deformation diffusion and recovery.
DRDM is trained to learn to recover unreasonable deformation components, thereby restoring each randomly deformed image to a realistic distribution.
Experimental results in cardiac MRI and pulmonary CT show DRDM is capable of creating diverse, large (over 10% image size deformation scale) deformations.
arXiv Detail & Related papers (2024-07-10T01:26:48Z)
- DEEM: Diffusion Models Serve as the Eyes of Large Language Models for Image Perception [66.88792390480343]
We propose DEEM, a simple and effective approach that utilizes the generative feedback of diffusion models to align the semantic distributions of the image encoder.
DEEM exhibits enhanced robustness and a superior capacity to alleviate hallucinations while utilizing fewer trainable parameters, less pre-training data, and a smaller base model size.
arXiv Detail & Related papers (2024-05-24T05:46:04Z)
- Multi-Branch Generative Models for Multichannel Imaging with an Application to PET/CT Joint Reconstruction [42.95604565673447]
This paper presents a proof-of-concept for learned synergistic reconstruction of medical images using multi-branch generative models.
We demonstrate the efficacy of our approach on both Modified National Institute of Standards and Technology (MNIST) and positron emission tomography (PET)/computed tomography (CT) datasets.
Despite challenges such as patch decomposition and model limitations, our results underscore the potential of generative models for enhancing medical imaging reconstruction.
arXiv Detail & Related papers (2024-04-12T18:21:08Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
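The residual-adapter idea behind this framework fits in a few lines. The bottleneck below is a hypothetical sketch of inserting a small trainable branch into a frozen encoder, not the paper's actual architecture:

```python
# Sketch of a residual adapter: a small bottleneck whose output is added
# back onto the frozen feature, so a zero-initialized adapter starts out
# as the identity and adaptation begins from pre-trained behavior.

def linear(x, weight):
    """Plain matrix-vector product; weight is a list of rows."""
    return [sum(w * v for w, v in zip(row, x)) for row in weight]

def residual_adapter(features, down, up):
    """features + up(relu(down(features))) -- the residual branch."""
    hidden = [max(0.0, h) for h in linear(features, down)]  # ReLU
    return [f + r for f, r in zip(features, linear(hidden, up))]

features = [1.0, -2.0, 0.5]
down = [[0.1, 0.0, 0.2], [0.0, 0.3, 0.0]]       # 3 -> 2 bottleneck
up_zero = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]  # 2 -> 3, zero-initialized
assert residual_adapter(features, down, up_zero) == features  # identity
```

Stacking such adapters at several encoder levels is what allows features to be refined stepwise while the pre-trained backbone stays frozen.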
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- Evaluation of pseudo-healthy image reconstruction for anomaly detection with deep generative models: Application to brain FDG PET [3.5250480324981406]
We propose an evaluation procedure based on the simulation of realistic abnormal images to validate pseudo-healthy reconstruction methods.
We apply this framework to the reconstruction of 3D brain FDG PET using a convolutional variational autoencoder.
arXiv Detail & Related papers (2024-01-29T18:02:22Z)
- Exploring the Robustness of Human Parsers Towards Common Corruptions [99.89886010550836]
We construct three corruption robustness benchmarks, termed LIP-C, ATR-C, and Pascal-Person-Part-C, to assist us in evaluating the risk tolerance of human parsing models.
Inspired by the data augmentation strategy, we propose a novel heterogeneous augmentation-enhanced mechanism to bolster robustness under commonly corrupted conditions.
arXiv Detail & Related papers (2023-09-02T13:32:14Z)
- Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images [59.85702949046042]
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine the Denoising Diffusion Probabilistic Model (DDPM) and the Denoising Diffusion Implicit Model (DDIM) at each step of the sampling process.
We verify that when our method is applied to healthy samples, the input images are reconstructed without significant modifications.
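The final step, turning a pseudo-healthy reconstruction into a pixel-wise anomaly map, reduces to an absolute difference followed by a threshold. A minimal sketch (flat lists stand in for 2-D slices; the threshold value is an illustrative assumption):

```python
# Pixel-wise anomaly map: |input - pseudo-healthy reconstruction|.
# A healthy input should be reconstructed almost unchanged, so its
# anomaly map stays near zero everywhere.

def anomaly_map(image, pseudo_healthy):
    return [abs(x - r) for x, r in zip(image, pseudo_healthy)]

def detect(image, pseudo_healthy, threshold=0.1):
    """Flag pixels whose reconstruction error exceeds the threshold."""
    return [d > threshold for d in anomaly_map(image, pseudo_healthy)]

scan = [0.2, 0.9, 0.8, 0.1]   # two "lesion" pixels
recon = [0.2, 0.3, 0.2, 0.1]  # the model's pseudo-healthy version
assert detect(scan, recon) == [False, True, True, False]
```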
arXiv Detail & Related papers (2023-08-03T21:56:50Z)
- On Sensitivity and Robustness of Normalization Schemes to Input Distribution Shifts in Automatic MR Image Diagnosis [58.634791552376235]
Deep Learning (DL) models have achieved state-of-the-art performance in diagnosing multiple diseases using reconstructed images as input.
DL models are sensitive to varying artifacts, as these lead to changes in the input data distribution between the training and testing phases.
We propose to use other normalization techniques, such as Group Normalization and Layer Normalization, to inject robustness into model performance against varying image artifacts.
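The intuition behind these normalization choices can be shown with a toy sketch of layer normalization: because each sample is normalized with its own statistics rather than batch statistics, a global intensity shift of the input (a simplified artifact model, assumed here for illustration) leaves the output untouched.

```python
import math

def layer_norm(features, eps=1e-5):
    """Normalize one sample over its own features -- no batch statistics,
    so the result does not depend on the rest of the batch."""
    mean = sum(features) / len(features)
    var = sum((f - mean) ** 2 for f in features) / len(features)
    return [(f - mean) / math.sqrt(var + eps) for f in features]

x = [2.0, 4.0, 6.0, 8.0]
shifted = [v + 100.0 for v in x]  # global intensity shift artifact
assert layer_norm(x) == layer_norm(shifted)  # shift-invariant output
```

Group normalization behaves the same way per group of channels, which is why both are candidates for injecting robustness against artifact-induced distribution shifts.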
arXiv Detail & Related papers (2023-06-23T03:09:03Z)
- Fast Unsupervised Brain Anomaly Detection and Segmentation with Diffusion Models [1.6352599467675781]
We propose a method based on diffusion models to detect and segment anomalies in brain imaging.
Our diffusion models achieve competitive performance compared with autoregressive approaches across a series of experiments with 2D CT and MRI data.
arXiv Detail & Related papers (2022-06-07T17:30:43Z)
- Explainable multiple abnormality classification of chest CT volumes with AxialNet and HiResCAM [89.2175350956813]
We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images.
We propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of top slices for each abnormality.
We then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions.
arXiv Detail & Related papers (2021-11-24T01:14:33Z)
- RADIOGAN: Deep Convolutional Conditional Generative Adversarial Network To Generate PET Images [3.947298454012977]
We propose a deep convolutional conditional generative adversarial network to generate maximum intensity projection (MIP) positron emission tomography (PET) images.
The advantage of our proposed method is that a single model can generate different classes of lesions while being trained on a small sample size for each lesion class.
In addition, we show that a walk through a latent space can be used as a tool to evaluate the images generated.
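A latent-space walk of this kind amounts to decoding points along a line between two latent codes; smooth variation in the outputs suggests a coherent learned latent structure. A hedged sketch, where `toy_decoder` is a hypothetical stand-in for a trained generator:

```python
# "Walk" through latent space: linearly interpolate between two latent
# codes and decode each step, then inspect the outputs for smoothness.

def lerp(z_start, z_end, t):
    """Linear interpolation between two latent vectors at fraction t."""
    return [(1 - t) * a + t * b for a, b in zip(z_start, z_end)]

def toy_decoder(z):
    """Stand-in for a trained GAN generator (would return an image)."""
    return sum(z)

def latent_walk(z_start, z_end, steps=5):
    return [
        toy_decoder(lerp(z_start, z_end, i / (steps - 1)))
        for i in range(steps)
    ]

print(latent_walk([0.0, 0.0], [1.0, 1.0]))  # [0.0, 0.5, 1.0, 1.5, 2.0]
```

With a real generator, each decoded step is an image, and abrupt jumps or repeated outputs along the walk can reveal memorization or a poorly structured latent space.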
arXiv Detail & Related papers (2020-03-19T10:14:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.