AdaWAC: Adaptively Weighted Augmentation Consistency Regularization for
Volumetric Medical Image Segmentation
- URL: http://arxiv.org/abs/2210.01891v1
- Date: Tue, 4 Oct 2022 20:28:38 GMT
- Authors: Yijun Dong, Yuege Xie, Rachel Ward
- Abstract summary: We propose an adaptive weighting algorithm for volumetric medical image segmentation.
AdaWAC assigns label-dense samples to supervised cross-entropy loss and label-sparse samples to consistency regularization.
We empirically demonstrate that AdaWAC not only enhances segmentation performance and sample efficiency but also improves robustness to the subpopulation shift in labels.
- Score: 3.609538870261841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sample reweighting is an effective strategy for learning from training data
coming from a mixture of subpopulations. In volumetric medical image
segmentation, the data inputs are similarly distributed, but the associated
data labels fall into two subpopulations -- "label-sparse" and "label-dense" --
depending on whether the data image occurs near the beginning/end of the
volumetric scan or the middle. Existing reweighting algorithms have focused on
hard- and soft-thresholding of the label-sparse data, which results in loss of
information and reduced sample efficiency by discarding valuable data inputs.
For this setting, we propose AdaWAC as an adaptive weighting algorithm that
introduces a set of trainable weights which, at the saddle point of the
underlying objective, assigns label-dense samples to supervised cross-entropy
loss and label-sparse samples to unsupervised consistency regularization. We
provide a convergence guarantee for AdaWAC by recasting the optimization as
online mirror descent on a saddle point problem. Moreover, we empirically
demonstrate that AdaWAC not only enhances segmentation performance and sample
efficiency but also improves robustness to the subpopulation shift in labels.
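The weighting scheme described in the abstract can be sketched numerically. This is a minimal illustration, not the authors' implementation: the objective form, the step size `lr`, and the clipping bounds are assumptions, and the exponentiated-gradient update is one common instantiation of online mirror ascent on box-constrained weights.

```python
import numpy as np

def adawac_weight_step(beta, ce_losses, cons_losses, lr=0.1):
    """One mirror-ascent step on per-sample weights beta in (0, 1].

    Assumed objective (a sketch of the paper's saddle-point problem):
        min_theta max_beta  mean_i [ beta_i * ce_i + (1 - beta_i) * cons_i ],
    so the gradient w.r.t. beta_i is ce_i - cons_i. The multiplicative
    (entropy mirror map) update raises beta_i when the cross-entropy loss
    dominates (label-dense samples) and lowers it when the consistency
    loss dominates (label-sparse samples).
    """
    grad = ce_losses - cons_losses
    beta = beta * np.exp(lr * grad)   # exponentiated-gradient ascent
    return np.clip(beta, 1e-6, 1.0)   # project back into the box (0, 1]

def adawac_loss(beta, ce_losses, cons_losses):
    """Weighted objective the segmentation model would minimize over theta."""
    return float(np.mean(beta * ce_losses + (1.0 - beta) * cons_losses))

# Toy run: two "label-dense" samples (high CE loss) and two
# "label-sparse" samples (high consistency loss).
ce = np.array([2.0, 2.0, 0.1, 0.1])
cons = np.array([0.1, 0.1, 2.0, 2.0])
beta = np.full(4, 0.5)
for _ in range(50):
    beta = adawac_weight_step(beta, ce, cons)
# At the (approximate) saddle point, beta routes the dense samples to the
# supervised CE loss (beta near 1) and the sparse samples to the
# unsupervised consistency term (beta near 0).
```

In this toy run the weights separate cleanly because the two loss vectors are well separated; in training, the model parameters and the weights would be updated in alternation.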
Related papers
- Guiding Pseudo-labels with Uncertainty Estimation for Test-Time Adaptation [27.233704767025174]
Test-Time Adaptation (TTA) is a specific case of Unsupervised Domain Adaptation (UDA) where a model is adapted to a target domain without access to source data.
We propose a novel approach for the TTA setting based on a loss reweighting strategy that brings robustness against the noise that inevitably affects the pseudo-labels.
arXiv Detail & Related papers (2023-03-07T10:04:55Z)
- Neighbour Consistency Guided Pseudo-Label Refinement for Unsupervised Person Re-Identification [80.98291772215154]
Unsupervised person re-identification (ReID) aims at learning discriminative identity features for person retrieval without any annotations.
Recent advances accomplish this task by leveraging clustering-based pseudo labels.
We propose a Neighbour Consistency guided Pseudo Label Refinement framework.
arXiv Detail & Related papers (2022-11-30T09:39:57Z)
- Dense FixMatch: a simple semi-supervised learning method for pixel-wise prediction tasks [68.36996813591425]
We propose Dense FixMatch, a simple method for online semi-supervised learning of dense and structured prediction tasks.
We enable the application of FixMatch in semi-supervised learning problems beyond image classification by adding a matching operation on the pseudo-labels.
Dense FixMatch significantly improves results compared to supervised learning using only labeled data, approaching its performance with 1/4 of the labeled samples.
arXiv Detail & Related papers (2022-10-18T15:02:51Z)
- S3: Supervised Self-supervised Learning under Label Noise [53.02249460567745]
In this paper we address the problem of classification in the presence of label noise.
In the heart of our method is a sample selection mechanism that relies on the consistency between the annotated label of a sample and the distribution of the labels in its neighborhood in the feature space.
Our method significantly surpasses previous methods on both CIFAR10 and CIFAR100 with artificial noise and on real-world noisy datasets such as WebVision and ANIMAL-10N.
arXiv Detail & Related papers (2021-11-22T15:49:20Z)
- Rethinking Pseudo Labels for Semi-Supervised Object Detection [84.697097472401]
We introduce certainty-aware pseudo labels tailored for object detection.
We dynamically adjust the thresholds used to generate pseudo labels and reweight loss functions for each category to alleviate the class imbalance problem.
Our approach improves supervised baselines by up to 10% AP using only 1-10% labeled data from COCO.
arXiv Detail & Related papers (2021-06-01T01:32:03Z)
- WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD).
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, achieving performance comparable to that obtained in fully-supervised settings.
arXiv Detail & Related papers (2021-05-21T11:58:50Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
- Rethinking Curriculum Learning with Incremental Labels and Adaptive Compensation [35.593312267921256]
Like humans, deep networks have been shown to learn better when samples are organized and introduced in a meaningful order or curriculum.
We propose Learning with Incremental Labels and Adaptive Compensation (LILAC), a two-phase method that incrementally increases the number of unique output labels.
arXiv Detail & Related papers (2020-01-13T21:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.