CAiD: Context-Aware Instance Discrimination for Self-supervised Learning
in Medical Imaging
- URL: http://arxiv.org/abs/2204.07344v1
- Date: Fri, 15 Apr 2022 06:45:10 GMT
- Title: CAiD: Context-Aware Instance Discrimination for Self-supervised Learning
in Medical Imaging
- Authors: Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Michael B. Gotway,
Jianming Liang
- Abstract summary: Context-Aware instance Discrimination (CAiD) aims to improve instance discrimination learning in medical images.
CAiD provides finer and more discriminative information encoded from a diverse local context.
As open science, all codes and pre-trained models are available on our GitHub page.
- Score: 7.137224324997715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, self-supervised instance discrimination methods have achieved
significant success in learning visual representations from unlabeled
photographic images. However, given the marked differences between photographic
and medical images, the efficacy of instance-based objectives, which focus on
learning the most discriminative global features in an image (e.g., the wheels
of a bicycle), remains unknown in medical imaging. Our preliminary analysis showed
that high global similarity of medical images in terms of anatomy hampers
instance discrimination methods for capturing a set of distinct features,
negatively impacting their performance on medical downstream tasks. To
alleviate this limitation, we have developed a simple yet effective
self-supervised framework, called Context-Aware instance Discrimination (CAiD).
CAiD aims to improve instance discrimination learning by providing finer and
more discriminative information encoded from a diverse local context of
unlabeled medical images. We conduct a systematic analysis to investigate the
utility of the learned features from a three-pronged perspective: (i)
generalizability and transferability, (ii) separability in the embedding space,
and (iii) reusability. Our extensive experiments demonstrate that CAiD (1)
enriches representations learned from existing instance discrimination methods;
(2) delivers more discriminative features by adequately capturing finer
contextual information from individual medical images; and (3) improves
reusability of low/mid-level features compared to standard instance
discriminative methods. As open science, all codes and pre-trained models are
available on our GitHub page: https://github.com/JLiangLab/CAiD.
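The abstract describes coupling an instance-discrimination objective with finer contextual information from local regions of unlabeled images; the authors' actual implementation is available at the GitHub page above. As a rough, hypothetical sketch only, the following NumPy code shows one common way such a combined objective can be formed: an InfoNCE instance-discrimination loss plus a weighted pixel-level restoration (L2 reconstruction) term. Function names, the loss weighting, and the use of plain L2 reconstruction are illustrative assumptions, not the paper's definitive formulation.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE instance-discrimination loss between two batches of
    embeddings; matching rows are treated as positive pairs."""
    # L2-normalize embeddings so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature             # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

def combined_loss(z1, z2, reconstruction, target, weight=1.0):
    """Hypothetical combined objective: instance discrimination plus a
    context-restoration term (here, mean squared reconstruction error)."""
    discriminative = info_nce_loss(z1, z2)
    restorative = np.mean((reconstruction - target) ** 2)
    return discriminative + weight * restorative
```

In this sketch, `z1` and `z2` would come from two augmented views of the same batch, while `reconstruction` and `target` stand in for a decoder's output and the original local crop; tuning `weight` trades off the two terms.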
Related papers
- Pixel-Level Explanation of Multiple Instance Learning Models in
Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100,000 single-cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- Hierarchical discriminative learning improves visual representations of
biomedical microscopy [35.521563469534264]
HiDisc is a data-driven method that implicitly learns features of the underlying cancer diagnosis.
HiDisc pretraining outperforms current state-of-the-art self-supervised pretraining methods for cancer diagnosis and genetic mutation prediction.
arXiv Detail & Related papers (2023-03-02T22:04:42Z)
- GraVIS: Grouping Augmented Views from Independent Sources for
Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- On Fairness of Medical Image Classification with Multiple Sensitive
Attributes via Learning Orthogonal Representations [29.703978958553247]
We propose a novel method for fair representation learning with respect to multi-sensitive attributes.
The effectiveness of the proposed method is demonstrated with extensive experiments on the CheXpert dataset.
arXiv Detail & Related papers (2023-01-04T08:11:11Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA)
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Learning Discriminative Representation via Metric Learning for
Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework specially to help the feature extractor learn to extract more discriminative feature representations.
Experiments, conducted mainly on three medical image datasets, show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z)
- DiRA: Discriminative, Restorative, and Adversarial Learning for
Self-supervised Medical Image Analysis [7.137224324997715]
DiRA is a framework that unites discriminative, restorative, and adversarial learning.
It gleans complementary visual information from unlabeled medical images for semantic representation learning.
arXiv Detail & Related papers (2022-04-21T23:52:52Z)
- Towards better understanding and better generalization of few-shot
classification in histology images with contrastive learning [7.620702640026243]
Few-shot learning has been an established topic in natural images for years, but little work has attended to histology images.
We propose to incorporate contrastive learning (CL) with latent augmentation (LA) to build a few-shot system.
In experiments, we find i) models learned by CL generalize better than supervised learning for histology images in unseen classes, and ii) LA brings consistent gains over baselines.
arXiv Detail & Related papers (2022-02-18T07:48:34Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network
with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to aid learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods, which rely on clean labeled data or assume samples are equally transferable, we propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.