DiRA: Discriminative, Restorative, and Adversarial Learning for
Self-supervised Medical Image Analysis
- URL: http://arxiv.org/abs/2204.10437v1
- Date: Thu, 21 Apr 2022 23:52:52 GMT
- Title: DiRA: Discriminative, Restorative, and Adversarial Learning for
Self-supervised Medical Image Analysis
- Authors: Fatemeh Haghighi, Mohammad Reza Hosseinzadeh Taher, Michael B. Gotway,
Jianming Liang
- Abstract summary: DiRA is a framework that unites discriminative, restorative, and adversarial learning.
It gleans complementary visual information from unlabeled medical images for semantic representation learning.
- Score: 7.137224324997715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discriminative learning, restorative learning, and adversarial learning have
proven beneficial for self-supervised learning schemes in computer vision and
medical imaging. Existing efforts, however, omit their synergistic effects on
each other in a ternary setup, which, we envision, can significantly benefit
deep semantic representation learning. To realize this vision, we have
developed DiRA, the first framework that unites discriminative, restorative,
and adversarial learning in a unified manner to collaboratively glean
complementary visual information from unlabeled medical images for fine-grained
semantic representation learning. Our extensive experiments demonstrate that
DiRA (1) encourages collaborative learning among three learning ingredients,
resulting in more generalizable representation across organs, diseases, and
modalities; (2) outperforms fully supervised ImageNet models and increases
robustness in small data regimes, reducing annotation cost across multiple
medical imaging applications; (3) learns fine-grained semantic representation,
facilitating accurate lesion localization with only image-level annotation; and
(4) enhances state-of-the-art restorative approaches, revealing that DiRA is a
general mechanism for united representation learning. All code and pre-trained
models are available at https://github.com/JLiangLab/DiRA.
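For orientation, the abstract describes DiRA's objective as a combination of three learning ingredients. Below is a minimal sketch, not the authors' reference implementation: it assumes a weighted-sum formulation L = lambda_d * L_dis + lambda_r * L_res + lambda_a * L_adv and stands in generic placeholder terms (view-agreement discrimination, MSE restoration, a GAN-style generator loss); the class name DiRALoss and all weight values are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiRALoss(nn.Module):
    """Weighted sum of discriminative, restorative, and adversarial terms.

    A hedged sketch of a ternary self-supervised objective; the sub-losses
    below are common stand-ins, not DiRA's exact formulations.
    """

    def __init__(self, lambda_d=1.0, lambda_r=1.0, lambda_a=1.0):
        super().__init__()
        self.lambda_d = lambda_d
        self.lambda_r = lambda_r
        self.lambda_a = lambda_a

    def forward(self, z1, z2, restored, target, disc_logits_on_restored):
        # Discriminative term: agreement between projections of two augmented
        # views (negative cosine similarity, with a stop-gradient on one side).
        l_dis = -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()
        # Restorative term: pixel-wise reconstruction of the distorted input.
        l_res = F.mse_loss(restored, target)
        # Adversarial term: generator side of a GAN loss, pushing the
        # discriminator to score restored images as real.
        l_adv = F.binary_cross_entropy_with_logits(
            disc_logits_on_restored,
            torch.ones_like(disc_logits_on_restored))
        return (self.lambda_d * l_dis
                + self.lambda_r * l_res
                + self.lambda_a * l_adv)


# Illustrative usage with random tensors (shapes are arbitrary).
loss_fn = DiRALoss(lambda_d=1.0, lambda_r=10.0, lambda_a=0.1)
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
restored, target = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
disc_logits = torch.randn(8, 1)
loss = loss_fn(z1, z2, restored, target, disc_logits)
```

In this reading, the three terms backpropagate into one shared encoder, so discrimination, restoration, and adversarial feedback all shape the same representation, with the lambda weights trading off the three signals.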
Related papers
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach sequences various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv Detail & Related papers (2024-09-13T10:19:10Z) - MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning (a generic sketch of this kind of objective appears after this list).
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z) - GraVIS: Grouping Augmented Views from Independent Sources for
Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z) - A Knowledge-based Learning Framework for Self-supervised Pre-training
Towards Enhanced Recognition of Medical Images [14.304996977665212]
This study proposes a knowledge-based learning framework towards enhanced recognition of medical images.
It works in three phases by synergizing contrastive learning and generative learning models.
The proposed framework statistically excels in self-supervised benchmarks, achieving improvements of 2.08, 1.23, 1.12, 0.76, and 1.38 percentage points over SimCLR in AUC/Dice.
arXiv Detail & Related papers (2022-11-27T03:58:58Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - CAiD: Context-Aware Instance Discrimination for Self-supervised Learning
in Medical Imaging [7.137224324997715]
Context-Aware instance Discrimination (CAiD) aims to improve instance discrimination learning in medical images.
CAiD provides finer and more discriminative information encoded from a diverse local context.
In the spirit of open science, all code and pre-trained models are available on our GitHub page.
arXiv Detail & Related papers (2022-04-15T06:45:10Z) - Preservational Learning Improves Self-supervised Medical Image Models by
Reconstructing Diverse Contexts [58.53111240114021]
We present Preservational Contrastive Representation Learning (PCRL) for learning self-supervised medical representations.
PCRL provides very competitive results under the pretraining-finetuning protocol, substantially outperforming both self-supervised and supervised counterparts on 5 classification/segmentation tasks.
arXiv Detail & Related papers (2021-09-09T16:05:55Z) - Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
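Several entries above, MLIP in particular, build on image-text contrastive learning. As background, here is a generic, CLIP-style symmetric InfoNCE sketch over paired image/text embeddings. It is a common baseline formulation, not MLIP's actual objective (which adds a divergence encoder and knowledge-guided terms); the function name and temperature value are illustrative.

```python
import torch
import torch.nn.functional as F


def image_text_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    Assumes row i of img_emb and row i of txt_emb come from the same pair,
    so matched pairs sit on the diagonal of the similarity matrix.
    """
    img_emb = F.normalize(img_emb, dim=-1)  # unit-normalize both modalities
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # (B, B) cosine similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Cross-entropy in both directions: image-to-text and text-to-image.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2


# Illustrative usage with random embeddings for a batch of 16 pairs.
loss = image_text_contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))
```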
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.