Contrastive Centroid Supervision Alleviates Domain Shift in Medical
Image Classification
- URL: http://arxiv.org/abs/2205.15658v1
- Date: Tue, 31 May 2022 09:54:17 GMT
- Title: Contrastive Centroid Supervision Alleviates Domain Shift in Medical
Image Classification
- Authors: Wenshuo Zhou, Dalu Yang, Binghong Wu, Yehui Yang, Junde Wu, Xiaorong
Wang, Lei Wang, Haifeng Huang, Yanwu Xu
- Abstract summary: Feature Centroid Contrast Learning (FCCL) can improve target domain classification performance by extra supervision during training.
We verify through extensive experiments that FCCL can achieve superior performance on at least three imaging modalities.
- Score: 9.709678461254972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based medical imaging classification models usually suffer from
the domain shift problem, where the classification performance drops when
training data and real-world data differ in imaging equipment manufacturer,
image acquisition protocol, patient populations, etc. We propose Feature
Centroid Contrast Learning (FCCL), which can improve target domain
classification performance by extra supervision during training with
contrastive loss between instance and class centroid. Compared with current
unsupervised domain adaptation and domain generalization methods, FCCL performs
better while requiring only labeled image data from a single source domain and
no target domain. We verify through extensive experiments that FCCL can achieve
superior performance on at least three imaging modalities, i.e. fundus
photographs, dermatoscopic images, and H & E tissue images.
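The core idea, a contrastive loss between each instance's feature and its class centroid, can be sketched as follows. This is a minimal NumPy illustration assuming cosine similarity and an InfoNCE-style softmax with a temperature; the function name and hyperparameters are illustrative, not the paper's exact formulation.

```python
import numpy as np

def centroid_contrastive_loss(features, labels, num_classes, temperature=0.1):
    """Contrastive loss pulling each instance toward its own class centroid
    and away from the other centroids (a sketch of the FCCL idea)."""
    # Normalize instance features to the unit sphere.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    # Class centroids: mean of each class's normalized features, re-normalized.
    centroids = np.stack(
        [feats[labels == c].mean(axis=0) for c in range(num_classes)]
    )
    centroids = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    # Cosine similarity between every instance and every centroid.
    logits = feats @ centroids.T / temperature
    # InfoNCE-style cross-entropy: the positive for each instance is its
    # own class centroid; all other centroids are negatives.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

In practice this extra term would be added to the standard classification loss during training, encouraging tighter, better-separated per-class feature clusters.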
Related papers
- Do Vision Foundation Models Enhance Domain Generalization in Medical Image Segmentation? [10.20366295974822]
We introduce a novel decode head architecture, HQHSAM, which simply integrates elements from two state-of-the-art decoder heads, HSAM and HQSAM, to enhance segmentation performance.
Our experiments on multiple datasets, encompassing various anatomies and modalities, reveal that FMs, particularly with the HQHSAM decode head, improve domain generalization for medical image segmentation.
arXiv Detail & Related papers (2024-09-12T11:41:35Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Forward-Forward Contrastive Learning [4.465144120325802]
We propose Forward Forward Contrastive Learning (FFCL) as a novel pretraining approach for medical image classification.
FFCL outperforms existing pretraining models in the pneumonia classification task (a 3.69% accuracy gain over an ImageNet-pretrained ResNet-18).
arXiv Detail & Related papers (2023-05-04T15:29:06Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Vision-Language Modelling For Radiological Imaging and Reports In The Low Data Regime [70.04389979779195]
This paper explores training medical vision-language models (VLMs) where the visual and language inputs are embedded into a common space.
We explore several candidate methods to improve low-data performance, including adapting generic pre-trained models to novel image and text domains.
Using text-to-image retrieval as a benchmark, we evaluate the performance of these methods with variable sized training datasets of paired chest X-rays and radiological reports.
arXiv Detail & Related papers (2023-03-30T18:20:00Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- Multi-domain stain normalization for digital pathology: A cycle-consistent adversarial network for whole slide images [0.0]
We propose MultiStain-CycleGAN, a multi-domain approach to stain normalization based on CycleGAN.
Our modifications to CycleGAN allow us to normalize images of different origins without retraining or using different models.
arXiv Detail & Related papers (2023-01-23T13:34:49Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) from computed tomography (CT) images greatly aids in estimating intensive care unit events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- Positional Contrastive Learning for Volumetric Medical Image Segmentation [13.086140606803408]
We propose a novel positional contrastive learning framework to generate contrastive data pairs.
The proposed PCL method can substantially improve segmentation performance compared to existing methods in both the semi-supervised and transfer learning settings.
arXiv Detail & Related papers (2021-06-16T22:15:28Z)
- Domain adaptation based self-correction model for COVID-19 infection segmentation in CT images [23.496487874821756]
We propose a domain adaptation based self-correction model (DASC-Net) for COVID-19 infection segmentation on CT images.
DASC-Net consists of a novel attention and feature domain enhanced domain adaptation model (AFD-DA) to solve the domain shifts and a self-correction learning process to refine results.
Extensive experiments over three publicly available COVID-19 CT datasets demonstrate that DASC-Net consistently outperforms state-of-the-art segmentation, domain shift, and coronavirus infection segmentation methods.
arXiv Detail & Related papers (2021-04-20T00:45:01Z)
- Multi-label Thoracic Disease Image Classification with Cross-Attention Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy loss to aid the cross-attention process and to overcome both the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.