ProCo: Prototype-aware Contrastive Learning for Long-tailed Medical
Image Classification
- URL: http://arxiv.org/abs/2209.00183v1
- Date: Thu, 1 Sep 2022 02:24:16 GMT
- Title: ProCo: Prototype-aware Contrastive Learning for Long-tailed Medical
Image Classification
- Authors: Zhixiong Yang, Junwen Pan, Yanzhan Yang, Xiaozhou Shi, Hong-Yu Zhou,
Zhicheng Zhang, and Cheng Bian
- Abstract summary: We adopt contrastive learning to tackle the long-tailed medical imbalance problem.
The overall framework, named Prototype-aware Contrastive learning (ProCo), is unified as a single-stage pipeline.
Our method outperforms the existing state-of-the-art methods by a large margin.
- Score: 12.399428395862639
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical image classification has been widely adopted in medical image
analysis. However, due to the difficulty of collecting and labeling data in the
medical area, medical image datasets are usually highly-imbalanced. To address
this problem, previous works used class sample counts as a prior for re-weighting or
re-sampling, but the learned feature representations are usually still not
discriminative enough. In this paper, we adopt contrastive learning to tackle the
long-tailed medical imbalance problem. Specifically, we first propose the category
prototype and the adversarial proto-instance to generate representative contrastive
pairs. Then, a prototype recalibration strategy is proposed to address the highly
imbalanced data distribution. Finally, a unified proto-loss is designed to train our
framework. The overall framework, named Prototype-aware Contrastive learning (ProCo),
is unified as a single-stage, end-to-end pipeline that alleviates the imbalance
problem in medical image classification, which also marks a distinct advance over
existing works that follow the traditional two-stage pipeline. Extensive experiments
on two highly imbalanced medical image classification datasets demonstrate that our
method outperforms existing state-of-the-art methods by a large margin.
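As a rough, unofficial illustration of the idea sketched in the abstract, contrasting samples against class prototypes under a frequency-aware prior, a minimal PyTorch-style sketch is given below. The function name prototype_contrastive_loss, the logit-adjustment-style recalibration, and all tensor shapes are assumptions for exposition, not the authors' ProCo implementation.

    # Minimal sketch (not the authors' code): class prototypes serve as contrastive
    # anchors, with an assumed frequency-based adjustment for long-tailed data.
    import torch
    import torch.nn.functional as F


    def prototype_contrastive_loss(features, labels, prototypes, class_counts, tau=0.1):
        """features: (N, D) embeddings; labels: (N,) int64 class ids;
        prototypes: (C, D) running class means; class_counts: (C,) samples per class."""
        features = F.normalize(features, dim=1)
        prototypes = F.normalize(prototypes, dim=1)

        # Cosine similarity of every sample to every class prototype, temperature-scaled.
        logits = features @ prototypes.t() / tau                    # (N, C)

        # Assumed recalibration (logit-adjustment style): add the log class prior during
        # training so rare classes must be separated by a larger raw margin.
        prior = torch.log(class_counts.float() / class_counts.sum())
        logits = logits + prior.unsqueeze(0)

        # Pull each sample toward its own class prototype, push it away from the others.
        return F.cross_entropy(logits, labels)


    if __name__ == "__main__":
        torch.manual_seed(0)
        feats = torch.randn(32, 128)
        labels = torch.randint(0, 5, (32,))
        protos = torch.randn(5, 128)
        counts = torch.tensor([500, 120, 60, 20, 5])
        print(float(prototype_contrastive_loss(feats, labels, protos, counts)))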
Related papers
- Plug-and-Play Feature Generation for Few-Shot Medical Image
Classification [23.969183389866686]
Few-shot learning presents immense potential in enhancing model generalization and practicality for medical image classification with limited training data.
We propose MedMFG, a flexible and lightweight plug-and-play method designed to generate sufficient class-distinctive features from limited samples.
arXiv Detail & Related papers (2023-10-14T02:36:14Z) - Realistic Data Enrichment for Robust Image Segmentation in
Histopathology [2.248423960136122]
We propose a new approach, based on diffusion models, which can enrich an imbalanced dataset with plausible examples from underrepresented groups.
Our method can simply expand limited clinical datasets, making them suitable for training machine learning pipelines.
arXiv Detail & Related papers (2023-04-19T09:52:50Z) - Rethinking Semi-Supervised Medical Image Segmentation: A
Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z) - Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels [54.58539616385138]
We introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA).
First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features.
Second, we construct a set of objectives that encourage the model to be capable of decomposing medical images into a collection of anatomical features.
arXiv Detail & Related papers (2022-09-27T15:50:31Z) - Learning Discriminative Representation via Metric Learning for
Imbalanced Medical Image Classification [52.94051907952536]
We propose embedding metric learning into the first stage of the two-stage framework specially to help the feature extractor learn to extract more discriminative feature representations.
Experiments mainly on three medical image datasets show that the proposed approach consistently outperforms existing one-stage and two-stage approaches.
arXiv Detail & Related papers (2022-07-14T14:57:01Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images is of great help in estimating intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Contrastive Registration for Unsupervised Medical Image Segmentation [1.5125686694430571]
We present a novel contrastive registration architecture for unsupervised medical image segmentation.
Firstly, we propose an architecture to capture the image-to-image transformation pattern via registration for unsupervised medical image segmentation.
Secondly, we embed a contrastive learning mechanism into the registration architecture to enhance the discriminative capacity of the network at the feature level.
arXiv Detail & Related papers (2020-11-17T19:29:08Z) - Collaborative Unsupervised Domain Adaptation for Medical Image Diagnosis [102.40869566439514]
We seek to exploit rich labeled data from relevant domains to help the learning in the target task via Unsupervised Domain Adaptation (UDA).
Unlike most UDA methods that rely on clean labeled data or assume samples are equally transferable, we innovatively propose a Collaborative Unsupervised Domain Adaptation algorithm.
We theoretically analyze the generalization performance of the proposed method, and also empirically evaluate it on both medical and general images.
arXiv Detail & Related papers (2020-07-05T11:49:17Z) - Semi-supervised Medical Image Classification with Relation-driven
Self-ensembling Model [71.80319052891817]
We present a relation-driven semi-supervised framework for medical image classification.
It exploits the unlabeled data by encouraging prediction consistency for a given input under perturbations.
Our method outperforms many state-of-the-art semi-supervised learning methods on both single-label and multi-label image classification scenarios.
arXiv Detail & Related papers (2020-05-15T06:57:54Z)