Disentangling A Single MR Modality
- URL: http://arxiv.org/abs/2205.04982v1
- Date: Tue, 10 May 2022 15:40:12 GMT
- Title: Disentangling A Single MR Modality
- Authors: Lianrui Zuo, Yihao Liu, Yuan Xue, Shuo Han, Murat Bilgel, Susan M. Resnick, Jerry L. Prince, Aaron Carass
- Abstract summary: We present a novel framework that learns theoretically and practically superior disentanglement from single modality magnetic resonance images.
We propose a new information-based metric to quantitatively evaluate disentanglement.
- Score: 15.801648254480487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentangling anatomical and contrast information from medical images has
gained attention recently, demonstrating benefits for various image analysis
tasks. Current methods learn disentangled representations using either paired
multi-modal images with the same underlying anatomy or auxiliary labels (e.g.,
manual delineations) to provide inductive bias for disentanglement. However,
these requirements can significantly increase the time and cost of data
collection and limit the applicability of these methods when such data are not
available. Moreover, these methods generally do not guarantee disentanglement.
In this paper, we present a novel framework that learns theoretically and
practically superior disentanglement from single modality magnetic resonance
images. In addition, we propose a new information-based metric to quantitatively
evaluate disentanglement. Comparisons with existing disentangling methods
demonstrate that the proposed method achieves superior performance in both
disentanglement and cross-domain image-to-image translation tasks.
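The information-based metric itself is not spelled out in this summary. As a minimal sketch of the general idea only, one can estimate the mutual information between a learned anatomy code and a learned contrast code: a well-disentangled pair should share little information. The histogram estimator and the synthetic codes below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information estimate between two 1-D codes."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()              # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)    # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)    # marginal p(y)
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
anatomy = rng.normal(size=5000)
contrast = rng.normal(size=5000)                    # independent: MI near 0
entangled = anatomy + 0.1 * rng.normal(size=5000)   # dependent: MI large

mi_indep = mutual_information(anatomy, contrast)
mi_dep = mutual_information(anatomy, entangled)
print(mi_indep, mi_dep)
```

With this estimator, a low score between the two codes is evidence of disentanglement, while a high score indicates that contrast information has leaked into the anatomy representation (or vice versa).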
Related papers
- Towards Classifying Histopathological Microscope Images as Time Series Data [2.6553713413568913]
We propose a novel approach to classifying microscopy images as time series data. The proposed method fits image sequences of varying lengths to a fixed-length target by leveraging Dynamic Time Warping (DTW). We demonstrate the effectiveness of our approach by comparing performance with various baselines and showcasing the benefits of different inference strategies.
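The summary leaves the alignment step abstract; as a hedged sketch, the classic DTW recurrence such a method leverages can be written as a small dynamic program. The 1-D sequences here are stand-ins for per-frame image features:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible predecessor paths.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Sequences of different lengths but the same shape align at zero cost,
# which is what lets variable-length inputs map onto a fixed-length target.
short = [0.0, 1.0, 2.0, 1.0, 0.0]
long = [0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 0.0]
print(dtw_distance(short, long))       # -> 0.0: warping absorbs the length mismatch
print(dtw_distance(short, [5.0] * 9))  # large: the shapes genuinely differ
```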
arXiv Detail & Related papers (2025-06-19T02:51:15Z)
- Here Comes the Explanation: A Shapley Perspective on Multi-contrast Medical Image Segmentation [0.1675245825272646]
We propose using contrast-level Shapley values to explain state-of-the-art models trained on standard metrics used in brain tumor segmentation.
Our results demonstrate that Shapley analysis provides valuable insights into different models' behavior used for tumor segmentation.
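Contrast-level Shapley values can be computed exactly when only a handful of MR contrasts are involved, since the number of coalitions is small. The sketch below enumerates all coalitions; the additive toy value function and the BraTS-style contrast names are hypothetical placeholders for a real segmentation model's score:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values by enumerating all coalitions (fine for few players)."""
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(n):
            for coalition in combinations(others, r):
                # Weight of a size-r coalition in the Shapley average.
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value(set(coalition) | {p}) - value(set(coalition)))
        phi[p] = total
    return phi

# Hypothetical stand-in for a segmentation model's score given a subset of
# MR contrasts; in the paper the value function is the trained model itself.
def score(subset):
    base = {"T1": 0.30, "T1ce": 0.25, "T2": 0.10, "FLAIR": 0.15}
    return sum(base[c] for c in subset)  # additive toy game

phi = shapley_values(["T1", "T1ce", "T2", "FLAIR"], score)
print(phi)  # for an additive game, Shapley equals each contrast's own term
```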
arXiv Detail & Related papers (2025-04-06T23:52:07Z)
- Siamese Networks with Soft Labels for Unsupervised Lesion Detection and Patch Pretraining on Screening Mammograms [7.917505566910886]
We propose an alternative method that uses contralateral mammograms to train a neural network to encode similar embeddings.
Our method demonstrates superior performance in mammogram patch classification compared to existing self-supervised learning methods.
arXiv Detail & Related papers (2024-01-10T22:27:37Z)
- Self-Supervised Learning for Image Super-Resolution and Deblurring [9.587978273085296]
Self-supervised methods have recently proved to be nearly as effective as supervised methods in various imaging inverse problems.
We propose a new self-supervised approach that leverages the fact that many image distributions are approximately scale-invariant.
We demonstrate throughout a series of experiments on real datasets that the proposed method outperforms other self-supervised approaches.
arXiv Detail & Related papers (2023-12-18T14:30:54Z)
- Masked Images Are Counterfactual Samples for Robust Fine-tuning [77.82348472169335]
Fine-tuning deep learning models can lead to a trade-off between in-distribution (ID) performance and out-of-distribution (OOD) robustness.
We propose a novel fine-tuning method, which uses masked images as counterfactual samples that help improve the robustness of the fine-tuning model.
arXiv Detail & Related papers (2023-03-06T11:51:28Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Unpaired Image-to-Image Translation with Limited Data to Reveal Subtle Phenotypes [0.5076419064097732]
We present an improved CycleGAN architecture that employs self-supervised discriminators to alleviate the need for numerous images.
We also provide results obtained with small biological datasets on obvious and non-obvious cell phenotype variations.
arXiv Detail & Related papers (2023-01-21T16:25:04Z)
- PatchNR: Learning from Small Data by Patch Normalizing Flow Regularization [57.37911115888587]
We introduce a regularizer for the variational modeling of inverse problems in imaging based on normalizing flows.
Our regularizer, called patchNR, involves a normalizing flow learned on patches of very few images.
arXiv Detail & Related papers (2022-05-24T12:14:26Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Coupled Feature Learning for Multimodal Medical Image Fusion [42.23662451234756]
Multimodal image fusion aims to combine relevant information from images acquired with different sensors.
In this paper, we propose a novel multimodal image fusion method based on coupled dictionary learning.
arXiv Detail & Related papers (2021-02-17T09:13:28Z)
- Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification [0.0]
We leverage a combined similarity and adversarial learning framework to produce visual explanations.
Using metrics from the literature, our method outperforms state-of-the-art approaches.
We validate our approach on a large chest X-ray database.
arXiv Detail & Related papers (2020-12-14T08:34:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.