Unified Representation Learning for Efficient Medical Image Analysis
- URL: http://arxiv.org/abs/2006.11223v2
- Date: Tue, 8 Jun 2021 00:04:34 GMT
- Title: Unified Representation Learning for Efficient Medical Image Analysis
- Authors: Ghada Zamzmi, Sivaramakrishnan Rajaraman, Sameer Antani
- Abstract summary: We propose a multi-task training approach for medical image analysis using a unified modality-specific feature representation (UMS-Rep).
Our results demonstrate that the proposed approach reduces the overall demand for computational resources and improves target task generalization and performance.
- Score: 0.623075162128532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Medical image analysis typically includes several tasks such as enhancement,
segmentation, and classification. Traditionally, these tasks are implemented
using separate deep learning models for separate tasks, which is not efficient
because it involves unnecessary training repetitions, demands greater
computational resources, and requires a relatively large amount of labeled
data. In this paper, we propose a multi-task training approach for medical
image analysis, where individual tasks are fine-tuned simultaneously through
relevant knowledge transfer using a unified modality-specific feature
representation (UMS-Rep). We explore different fine-tuning strategies to
demonstrate the impact of the strategy on the performance of target medical
image tasks. We experiment with different visual tasks (e.g., image denoising,
segmentation, and classification) to highlight the advantages offered with our
approach for two imaging modalities, chest X-ray and Doppler echocardiography.
Our results demonstrate that the proposed approach reduces the overall demand
for computational resources and improves target task generalization and
performance. Further, our results prove that the performance of target tasks in
medical images is highly influenced by the utilized fine-tuning strategy.
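The shared-backbone idea in the abstract can be illustrated with a minimal sketch (this is not the authors' code; all class and parameter names here are hypothetical): one modality-specific encoder produces features that several lightweight task heads reuse, so denoising, segmentation, and classification do not each need a full separate model.

```python
# Illustrative sketch (not the paper's implementation): a shared
# modality-specific encoder feeding multiple task-specific heads.
import numpy as np

rng = np.random.default_rng(0)

class SharedEncoder:
    """Stands in for the unified modality-specific representation (UMS-Rep)."""
    def __init__(self, in_dim, feat_dim):
        self.W = rng.standard_normal((in_dim, feat_dim)) * 0.01
    def forward(self, x):
        return np.maximum(x @ self.W, 0.0)  # ReLU features

class TaskHead:
    """Lightweight task-specific head fine-tuned on top of shared features."""
    def __init__(self, feat_dim, out_dim):
        self.W = rng.standard_normal((feat_dim, out_dim)) * 0.01
    def forward(self, feats):
        return feats @ self.W

encoder = SharedEncoder(in_dim=64, feat_dim=32)
heads = {
    "classification": TaskHead(32, 2),   # e.g. normal vs. abnormal
    "segmentation":   TaskHead(32, 64),  # per-pixel logits, flattened
}

x = rng.standard_normal((4, 64))         # a batch of flattened image patches
feats = encoder.forward(x)               # computed once, reused by every head
outputs = {name: head.forward(feats) for name, head in heads.items()}
print({name: out.shape for name, out in outputs.items()})
```

Because the encoder's forward pass is computed once per batch and shared, the choice of fine-tuning strategy (which heads train jointly, and whether the encoder stays frozen) becomes the main design decision, which is the point the abstract emphasizes.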
Related papers
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach sequences various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- Self-Supervised Learning for Medical Image Data with Anatomy-Oriented Imaging Planes [28.57933404578436]
We propose two complementary pretext tasks for medical image data.
The first is to learn the relative orientation between the imaging planes, implemented as regressing their intersecting lines.
The second exploits parallel imaging planes to regress their relative slice locations within a stack.
arXiv Detail & Related papers (2024-03-25T07:34:06Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Modality-Agnostic Learning for Medical Image Segmentation Using Multi-modality Self-distillation [1.815047691981538]
We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
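The fusion-to-single-modality distillation described above can be sketched as follows (an assumed, simplified form, not MAG-MS itself; the fusion rule, loss, and shapes are illustrative): a "teacher" representation is fused from multiple modalities, and a single-modality "student" encoder is trained to match it.

```python
# Hedged sketch of multi-modality self-distillation: train a
# single-modality student to mimic a fused multi-modality teacher.
import numpy as np

rng = np.random.default_rng(2)

def fuse(features):
    """Teacher representation: average the per-modality feature maps."""
    return np.mean(features, axis=0)

def distill_loss(student_feat, teacher_feat):
    """Mean-squared-error matching loss between student and teacher."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

# Features from, say, CT and MR encoders for the same case (shapes assumed).
ct_feat = rng.standard_normal((8, 32))
mr_feat = rng.standard_normal((8, 32))
teacher = fuse(np.stack([ct_feat, mr_feat]))

# At inference time only one modality may be available, so its encoder
# is trained to reproduce the fused representation.
loss = distill_loss(ct_feat, teacher)
print(round(loss, 4))
```

The design motivation is that the fused representation carries cross-modality information, so minimizing this loss lets a single-modality encoder benefit from modalities it never sees at test time.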
arXiv Detail & Related papers (2023-06-06T14:48:50Z)
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
The training of an efficacious deep learning model requires large data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Generalizable multi-task, multi-domain deep segmentation of sparse pediatric imaging datasets via multi-scale contrastive regularization and multi-joint anatomical priors [0.41998444721319217]
We propose to design a novel multi-task, multi-domain learning framework in which a single segmentation network is optimized over multiple datasets.
We evaluate our contributions for performing bone segmentation using three scarce pediatric imaging datasets of the ankle, knee, and shoulder joints.
arXiv Detail & Related papers (2022-07-27T12:59:16Z)
- Self-Supervised-RCNN for Medical Image Segmentation with Limited Data Annotation [0.16490701092527607]
We propose an alternative deep learning training strategy based on self-supervised pretraining on unlabeled MRI scans.
Our pretraining approach first randomly applies different distortions to random areas of unlabeled images and then predicts the type of distortion and the loss of information.
The effectiveness of the proposed method for segmentation tasks in different pre-training and fine-tuning scenarios is evaluated.
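A distortion-prediction pretext task of the kind summarized above can be sketched as follows (illustrative only; the distortion set, region sampling, and names are assumptions, not the paper's recipe): corrupt a random region of an unlabeled image and ask the model to classify which distortion was applied.

```python
# Hedged sketch of a self-supervised distortion-prediction pretext task:
# generate (distorted image, distortion label) pairs from unlabeled data.
import numpy as np

rng = np.random.default_rng(1)
DISTORTIONS = ["blur", "noise", "mask"]  # assumed distortion set

def apply_distortion(img, kind, region):
    r0, r1, c0, c1 = region
    out = img.copy()
    patch = out[r0:r1, c0:c1]
    if kind == "blur":    # crude box blur: replace patch with its mean
        out[r0:r1, c0:c1] = patch.mean()
    elif kind == "noise":
        out[r0:r1, c0:c1] = patch + rng.normal(0.0, 0.5, patch.shape)
    elif kind == "mask":  # outright information removal
        out[r0:r1, c0:c1] = 0.0
    return out

def make_pretext_sample(img):
    """Return (distorted image, label index) as a self-supervised target."""
    h, w = img.shape
    r0, c0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
    region = (r0, r0 + h // 2, c0, c0 + w // 2)
    label = int(rng.integers(0, len(DISTORTIONS)))
    return apply_distortion(img, DISTORTIONS[label], region), label

img = rng.standard_normal((16, 16))
distorted, label = make_pretext_sample(img)
print(DISTORTIONS[label], distorted.shape)
```

Since labels come free from the corruption process, a classifier trained on such pairs learns image structure from unlabeled scans before any annotated fine-tuning.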
arXiv Detail & Related papers (2022-07-17T13:28:52Z)
- Suggestive Annotation of Brain MR Images with Gradient-guided Sampling [12.928940875474378]
We propose an efficient annotation framework for brain MR images that can suggest informative sample images for human experts to annotate.
We evaluate the framework on two different brain image analysis tasks, namely brain tumour segmentation and whole brain segmentation.
The proposed framework demonstrates a promising way to save manual annotation cost and improve data efficiency in medical imaging applications.
arXiv Detail & Related papers (2022-06-02T12:23:44Z)
- Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.