Quantitative Imaging Principles Improves Medical Image Learning
- URL: http://arxiv.org/abs/2206.06663v1
- Date: Tue, 14 Jun 2022 07:51:49 GMT
- Title: Quantitative Imaging Principles Improves Medical Image Learning
- Authors: Lambert T. Leong, Michael C. Wong, Yannik Glaser, Thomas Wolfgruber,
Steven B. Heymsfield, Peter Sadowski, John A. Shepherd
- Abstract summary: We propose incorporating quantitative imaging principles during generative SSL to improve image quality and quantitative biological accuracy.
Our model also generates images that validate on clinical quantitative analysis software.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Fundamental differences between natural and medical images have recently
favored the use of self-supervised learning (SSL) over ImageNet transfer
learning for medical image applications. Differences between image types are
primarily due to the imaging modality: medical images utilize a wide range of
physics-based techniques, while natural images are captured using only visible
light. While many have demonstrated that SSL on medical images has
resulted in better downstream task performance, our work suggests that more
performance can be gained. The scientific principles which are used to acquire
medical images are not often considered when constructing learning problems.
For this reason, we propose incorporating quantitative imaging principles
during generative SSL to improve image quality and quantitative biological
accuracy. We show that this training schema results in better starting states
for downstream supervised training on limited data. Our model also generates
images that validate on clinical quantitative analysis software.
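The paper does not publish implementation details here, but the core idea of the abstract, augmenting a generative SSL reconstruction objective with a quantitative-consistency penalty, can be sketched as a toy loss. Everything below is an assumption for illustration: `quantitative_measure` stands in for whatever calibrated physical quantity the modality provides, and the weighting is arbitrary.

```python
import numpy as np

def quantitative_measure(image):
    # Hypothetical stand-in for a quantity derived from pixel values
    # (e.g., mean density); a real pipeline would use the modality's
    # calibrated physical measure instead.
    return image.mean()

def ssl_loss(original, reconstruction, weight=1.0):
    # Pixel-wise reconstruction error, as in ordinary generative SSL.
    pixel_loss = np.mean((original - reconstruction) ** 2)
    # Quantitative-consistency penalty: the reconstruction should
    # preserve the derived physical quantity, not just appearance.
    quant_loss = (quantitative_measure(original)
                  - quantitative_measure(reconstruction)) ** 2
    return pixel_loss + weight * quant_loss

rng = np.random.default_rng(0)
img = rng.random((64, 64))
recon = img + 0.01 * rng.standard_normal((64, 64))
loss = ssl_loss(img, recon)
```

The penalty term is what distinguishes this schema from plain reconstruction: a model can score well on pixel error while drifting on the physical quantity, and the second term penalizes exactly that drift.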
Related papers
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach relates various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- BiomedJourney: Counterfactual Biomedical Image Generation by Instruction-Learning from Multimodal Patient Journeys [99.7082441544384]
We present BiomedJourney, a novel method for counterfactual biomedical image generation by instruction-learning.
We use GPT-4 to process the corresponding imaging reports and generate a natural language description of disease progression.
The resulting triples are then used to train a latent diffusion model for counterfactual biomedical image generation.
arXiv Detail & Related papers (2023-10-16T18:59:31Z)
- Enhancing Network Initialization for Medical AI Models Using Large-Scale, Unlabeled Natural Images [1.883452979588382]
Self-supervised learning (SSL) can be applied to chest radiographs to learn robust features.
We tested our approach on over 800,000 chest radiographs from six large global datasets.
arXiv Detail & Related papers (2023-08-15T10:37:13Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
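The corruption step this entry describes, local masking combined with low-level perturbations, can be sketched as a toy 2D function. This is not the authors' 3D implementation; the patch size, mask fraction, and noise level are all illustrative assumptions.

```python
import numpy as np

def disrupt(image, patch=8, mask_frac=0.3, noise_std=0.05, seed=0):
    # Low-level perturbation: additive noise over the whole image.
    rng = np.random.default_rng(seed)
    out = image + noise_std * rng.standard_normal(image.shape)
    # Local masking: zero out a random subset of patches.
    h, w = image.shape
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            if rng.random() < mask_frac:
                out[i:i + patch, j:j + patch] = 0.0
    return out

img = np.random.default_rng(1).random((32, 32))
corrupted = disrupt(img)
```

The pre-training target is then to reconstruct `img` from `corrupted`, forcing the encoder to model both local structure (for the masked patches) and fine intensity detail (for the noise).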
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Revisiting Hidden Representations in Transfer Learning for Medical Imaging [2.4545492329339815]
We compare ImageNet and RadImageNet on seven medical classification tasks.
Our results indicate that, contrary to intuition, ImageNet and RadImageNet may converge to distinct intermediate representations.
Our findings show that the similarity between networks before and after fine-tuning does not correlate with performance gains.
arXiv Detail & Related papers (2023-02-16T13:04:59Z)
- Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing [53.89917396428747]
Self-supervised learning in vision-language processing exploits semantic alignment between imaging and text modalities.
We explicitly account for prior images and reports when available during both training and fine-tuning.
Our approach, named BioViL-T, uses a CNN-Transformer hybrid multi-image encoder trained jointly with a text model.
arXiv Detail & Related papers (2023-01-11T16:35:33Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis [7.339428207644444]
We conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset.
We present a practical approach to bridge the domain gap between natural and medical images by continually (pre-training) supervised ImageNet models on medical images.
arXiv Detail & Related papers (2021-08-12T19:08:34Z)
- Supervised Transfer Learning at Scale for Medical Imaging [8.341246672632582]
We investigate whether modern methods can change the fortune of transfer learning for medical imaging.
We study the class of large-scale pre-trained networks presented by Kolesnikov et al. on three diverse imaging tasks.
We find that for some of these properties transfer from natural to medical images is indeed extremely effective, but only when performed at sufficient scale.
arXiv Detail & Related papers (2021-01-14T23:55:49Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Contrastive Learning of Medical Visual Representations from Paired Images and Text [38.91117443316013]
We propose ConVIRT, an unsupervised strategy to learn medical visual representations by exploiting naturally occurring descriptive paired text.
Our new method of pretraining medical image encoders with the paired text data via a bidirectional contrastive objective between the two modalities is domain-agnostic, and requires no additional expert input.
arXiv Detail & Related papers (2020-10-02T02:10:18Z)
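The bidirectional contrastive objective in the ConVIRT entry above can be sketched with a standard InfoNCE-style loss in each direction. This is a generic sketch of that family of objectives, not ConVIRT's actual implementation; the temperature and embedding sizes are assumptions.

```python
import numpy as np

def info_nce(a, b, temperature=0.1):
    # One direction of the objective: each row of `a` should match
    # the same-index row of `b` against all other rows.
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def bidirectional_contrastive(img_emb, txt_emb):
    # Average image-to-text and text-to-image losses, so each
    # modality is pulled toward its paired counterpart.
    return 0.5 * (info_nce(img_emb, txt_emb) + info_nce(txt_emb, img_emb))

rng = np.random.default_rng(0)
imgs = rng.standard_normal((8, 16))
texts = imgs + 0.1 * rng.standard_normal((8, 16))  # paired, similar embeddings
```

Correctly paired batches yield a lower loss than mismatched ones, which is what drives the encoders toward a shared image-text embedding space.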
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.