GraVIS: Grouping Augmented Views from Independent Sources for
Dermatology Analysis
- URL: http://arxiv.org/abs/2301.04410v1
- Date: Wed, 11 Jan 2023 11:38:37 GMT
- Title: GraVIS: Grouping Augmented Views from Independent Sources for
Dermatology Analysis
- Authors: Hong-Yu Zhou, Chixiang Lu, Liansheng Wang, Yizhou Yu
- Abstract summary: We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
- Score: 52.04899592688968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised representation learning has been extremely successful in
medical image analysis, as it requires no human annotations to provide
transferable representations for downstream tasks. Recent self-supervised
learning methods are dominated by noise-contrastive estimation (NCE, also known
as contrastive learning), which aims to learn invariant visual representations
by contrasting one homogeneous image pair with a large number of heterogeneous
image pairs in each training step. Nonetheless, NCE-based approaches still
suffer from one major problem: a single homogeneous pair is not enough to
extract robust and invariant semantic information. Inspired by the archetypal
triplet loss, we propose GraVIS, which is specifically optimized for learning
self-supervised features from dermatology images, to group homogeneous
dermatology images while separating heterogeneous ones. In addition, a
hardness-aware attention mechanism is introduced to emphasize homogeneous
image views with similar appearance over dissimilar homogeneous ones. GraVIS
significantly outperforms its transfer learning and self-supervised learning
counterparts in both lesion segmentation and disease classification tasks,
sometimes by 5 percent under extremely
limited supervision. More importantly, when equipped with the pre-trained
weights provided by GraVIS, a single model can achieve better results than
the winning entries, which rely heavily on ensemble strategies, in the
well-known ISIC 2017 challenge.
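The grouping objective described in the abstract can be illustrated with a minimal sketch: an anchor embedding is pulled toward several homogeneous (augmented) views and pushed away from heterogeneous ones via a triplet-style loss, with a softmax weight over positives standing in for the hardness-aware attention. All names, shapes, and the exact weighting scheme here are illustrative assumptions, not the actual GraVIS implementation.

```python
import numpy as np

def normalize(x):
    # L2-normalize embeddings along the last axis
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def grouping_loss(anchor, positives, negatives, margin=0.5, tau=1.0):
    """Triplet-style grouping loss over multiple homogeneous views.

    Each positive (homogeneous view) receives a hardness-aware weight via
    a softmax over its distance to the anchor, so harder (more distant)
    views contribute more. Hypothetical sketch, not the GraVIS objective.
    """
    a = normalize(anchor)
    p = normalize(positives)                   # shape (P, d)
    n = normalize(negatives)                   # shape (N, d)

    d_pos = np.linalg.norm(p - a, axis=1)      # anchor-positive distances
    d_neg = np.linalg.norm(n - a, axis=1)      # anchor-negative distances

    # Hardness-aware attention: softmax over positive distances
    w = np.exp(tau * d_pos)
    w = w / w.sum()

    # Triplet terms: each positive should sit at least `margin` closer
    # to the anchor than the nearest negative
    loss = np.maximum(0.0, d_pos - d_neg.min() + margin)
    return float((w * loss).sum())
```

With several positives per anchor, the loss only vanishes once every homogeneous view is grouped inside the margin, which is the intuition behind using more than one homogeneous pair per training step.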
Related papers
- Fine-Grained Self-Supervised Learning with Jigsaw Puzzles for Medical
Image Classification [11.320414512937946]
Classifying fine-grained lesions is challenging due to minor and subtle differences in medical images.
We introduce the Fine-Grained Self-Supervised Learning (FG-SSL) method for classifying subtle lesions in medical images.
We evaluate the proposed method in comprehensive experiments on various medical image recognition datasets.
arXiv Detail & Related papers (2023-08-10T02:08:15Z)
- LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel equally matters to the model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z)
- Learning Discriminative Representation via Metric Learning for Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework, specifically to help the feature extractor learn more discriminative feature representations.
Experiments, mainly on three medical image datasets, show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z)
- Lesion-Aware Contrastive Representation Learning for Histopathology Whole Slide Images Analysis [16.264758789726223]
We propose a novel contrastive representation learning framework named Lesion-Aware Contrastive Learning (LACL) for histopathology whole slide image analysis.
The experimental results demonstrate that LACL achieves the best performance in histopathology image representation learning on different datasets.
arXiv Detail & Related papers (2022-06-27T08:39:51Z)
- Self Supervised Lesion Recognition For Breast Ultrasound Diagnosis [14.961717874372567]
We propose a multi-task framework that complements the Benign/Malignant classification task with lesion recognition (LR).
Specifically, the LR task employs contrastive learning to encourage representations that pull together multiple views of the same lesion and repel those of different lesions.
Experiments show that the proposed multi-task framework boosts the performance of Benign/Malignant classification.
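The pull/repel behaviour described for the LR task is the standard contrastive (InfoNCE-style) setup. A generic sketch follows; the function and variable names are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.1):
    """Generic InfoNCE loss: pull `query` toward `positive` (another view
    of the same lesion) and push it away from `negatives` (views of other
    lesions). Illustrative sketch, not the paper's implementation."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q = norm(query)
    keys = norm(np.vstack([positive, negatives]))   # positive is row 0
    logits = keys @ q / temperature                 # scaled cosine similarities
    logits -= logits.max()                          # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))                 # cross-entropy vs. positive
```

The loss is small when the query and its positive view embed close together relative to the negatives, which is exactly the grouping behaviour the summary describes.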
arXiv Detail & Related papers (2022-04-18T16:00:33Z)
- Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images to medical image enhancement and use the enhanced results to cope with the low-contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z)
- Lesion-based Contrastive Learning for Diabetic Retinopathy Grading from Fundus Images [2.498907460918493]
We propose a self-supervised framework, namely lesion-based contrastive learning, for automated diabetic retinopathy grading.
Our proposed framework performs strongly on DR grading under both linear evaluation and transfer-capacity evaluation.
arXiv Detail & Related papers (2021-07-17T16:30:30Z)
- Magnification-independent Histopathological Image Classification with Similarity-based Multi-scale Embeddings [12.398787062519034]
We propose an approach that learns similarity-based multi-scale embeddings for magnification-independent image classification.
In particular, a pair loss and a triplet loss are leveraged to learn similarity-based embeddings from image pairs or image triplets.
SMSE achieves the best performance on the BreakHis benchmark, improving on previous methods by 5% to 18%.
arXiv Detail & Related papers (2021-07-02T13:18:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.