MetaSSL: A General Heterogeneous Loss for Semi-Supervised Medical Image Segmentation
- URL: http://arxiv.org/abs/2509.01144v1
- Date: Mon, 01 Sep 2025 05:45:08 GMT
- Title: MetaSSL: A General Heterogeneous Loss for Semi-Supervised Medical Image Segmentation
- Authors: Weiren Zhao, Lanfeng Zhong, Xin Liao, Wenjun Liao, Sichuan Zhang, Shaoting Zhang, Guotai Wang
- Abstract summary: Semi-Supervised Learning is important for reducing the annotation cost for medical image segmentation models. We propose a universal framework, MetaSSL, based on a spatially heterogeneous loss. Our method is plug-and-play and general to most existing SSL methods.
- Score: 26.75334172633309
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semi-Supervised Learning (SSL) is important for reducing the annotation cost of medical image segmentation models. State-of-the-art SSL methods such as Mean Teacher, FixMatch and Cross Pseudo Supervision (CPS) are mainly based on consistency regularization or pseudo-label supervision between a reference prediction and a supervised prediction. Despite their effectiveness, they overlook the potential noise in the labeled data, and mainly focus on strategies to generate the reference prediction while ignoring the heterogeneous values of different unlabeled pixels. We argue that effectively mining the rich information contained in the two predictions in the loss function, rather than the specific strategy used to obtain a reference prediction, is more essential for SSL, and propose a universal framework, MetaSSL, based on a spatially heterogeneous loss that assigns different weights to pixels by simultaneously leveraging the uncertainty and consistency information between the reference and supervised predictions. Specifically, we split the predictions on unlabeled data into four regions with decreasing weights in the loss: Unanimous and Confident (UC), Unanimous and Suspicious (US), Discrepant and Confident (DC), and Discrepant and Suspicious (DS), where an adaptive threshold is proposed to distinguish confident predictions from suspicious ones. The heterogeneous loss is also applied to labeled images for robust learning, considering the potential annotation noise. Our method is plug-and-play and general to most existing SSL methods. Experimental results showed that it significantly improved segmentation performance when integrated with existing SSL frameworks on different datasets. Code is available at https://github.com/HiLab-git/MetaSSL.
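As a rough illustration of the four-region weighting described in the abstract, the sketch below assigns a per-pixel loss weight from the agreement and confidence of two predictions. The specific weight values, the median-based adaptive threshold, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def heterogeneous_weights(ref_probs, sup_probs, weights=(1.0, 0.7, 0.4, 0.1)):
    """Sketch of the UC/US/DC/DS weighting scheme.

    ref_probs, sup_probs: (H, W, C) softmax probabilities from the
    reference and supervised branches. Returns an (H, W) weight map with
    decreasing weights for UC, US, DC, DS pixels. The weight tuple and
    the median threshold are assumptions for illustration only.
    """
    ref_conf = ref_probs.max(axis=-1)   # per-pixel confidence, reference
    sup_conf = sup_probs.max(axis=-1)   # per-pixel confidence, supervised
    # Unanimous vs. discrepant: do the two branches predict the same class?
    agree = ref_probs.argmax(-1) == sup_probs.argmax(-1)

    # Adaptive threshold: batch median of the averaged confidence
    # (assumption; the paper derives its own adaptive rule).
    conf = 0.5 * (ref_conf + sup_conf)
    tau = np.median(conf)
    confident = conf >= tau

    w_uc, w_us, w_dc, w_ds = weights
    return np.where(agree & confident, w_uc,
           np.where(agree & ~confident, w_us,
           np.where(~agree & confident, w_dc, w_ds)))
```

Multiplying this map into a per-pixel cross-entropy would down-weight discrepant and low-confidence pixels, which is the qualitative behavior the abstract describes.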
Related papers
- When Confidence Fails: Revisiting Pseudo-Label Selection in Semi-supervised Semantic Segmentation [15.149171763610662]
We present Confidence Separable Learning (CSL), which formulates pseudo-label selection as a convex optimization problem within the confidence distribution feature space. We show that CSL performs favorably against state-of-the-art methods.
arXiv Detail & Related papers (2025-09-20T14:23:09Z) - MaxSup: Overcoming Representation Collapse in Label Smoothing [52.66247931969715]
Label Smoothing (LS) is widely adopted to reduce overconfidence in neural network predictions. However, LS compacts feature representations into overly tight clusters, diluting intra-class diversity. We propose Max Suppression (MaxSup), which applies uniform regularization to both correct and incorrect predictions.
arXiv Detail & Related papers (2025-02-18T20:10:34Z) - A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z) - Do not trust what you trust: Miscalibration in Semi-supervised Learning [21.20806568508201]
State-of-the-art semi-supervised learning (SSL) approaches rely on highly confident predictions to serve as pseudo-labels that guide the training on unlabeled samples.
We show that SSL methods based on pseudo-labels are significantly miscalibrated, and formally demonstrate that this stems from the minimization of the min-entropy.
We integrate a simple penalty term that enforces the logits of the predictions on unlabeled samples to remain low, preventing the network predictions from becoming overconfident.
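The penalty described above can be sketched as follows; the squared-logit form and the weight `lam` are illustrative assumptions, not necessarily the paper's exact term.

```python
import numpy as np

def low_logit_penalty(logits, lam=0.1):
    """Illustrative penalty that keeps logits on unlabeled samples low.

    Adding lam * mean(logits^2) to the unsupervised loss discourages the
    large logit magnitudes that produce overconfident softmax outputs.
    The L2 form and the value of `lam` are assumptions for illustration.
    """
    return lam * float(np.mean(np.square(logits)))
```

In practice a term like this would simply be summed with the existing unsupervised loss during training.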
arXiv Detail & Related papers (2024-03-22T18:43:46Z) - Discrepancy Matters: Learning from Inconsistent Decoder Features for Consistent Semi-supervised Medical Image Segmentation [16.136085351887814]
We propose a novel semi-supervised learning method called LeFeD.
LeFeD learns the feature-level discrepancy obtained from decoders by feeding the discrepancy as a feedback signal to the encoder.
LeFeD surpasses competitors without any bells and whistles such as uncertainty estimation and strong constraints.
arXiv Detail & Related papers (2023-09-26T10:33:20Z) - Zero-Shot Learning by Harnessing Adversarial Samples [52.09717785644816]
We propose a novel Zero-Shot Learning (ZSL) approach by Harnessing Adversarial Samples (HAS).
HAS advances ZSL through adversarial training which takes into account three crucial aspects.
We demonstrate the effectiveness of our adversarial samples approach in both ZSL and Generalized Zero-Shot Learning (GZSL) scenarios.
arXiv Detail & Related papers (2023-08-01T06:19:13Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to obtain sufficient gradient feedback, which helps the discriminator converge to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - Complementing Semi-Supervised Learning with Uncertainty Quantification [6.612035830987296]
We propose a novel unsupervised uncertainty-aware objective that relies on aleatoric and epistemic uncertainty quantification.
Our results outperform the state-of-the-art results on complex datasets such as CIFAR-100 and Mini-ImageNet.
arXiv Detail & Related papers (2022-07-22T00:15:02Z) - Taming Overconfident Prediction on Unlabeled Data from Hindsight [50.9088560433925]
Minimizing prediction uncertainty on unlabeled data is a key factor to achieve good performance in semi-supervised learning.
This paper proposes a dual mechanism, named ADaptive Sharpening (ADS), which first applies a soft threshold to adaptively mask out determinate and negligible predictions.
As a plug-in, ADS significantly improves state-of-the-art SSL methods.
arXiv Detail & Related papers (2021-12-15T15:17:02Z) - MisMatch: Calibrated Segmentation via Consistency on Differential Morphological Feature Perturbations with Limited Labels [5.500466607182699]
Semi-supervised learning is a promising paradigm to address the issue of label scarcity in medical imaging.
MisMatch is a semi-supervised segmentation framework based on the consistency between paired predictions.
arXiv Detail & Related papers (2021-10-23T09:22:41Z) - Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967]
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
arXiv Detail & Related papers (2020-09-16T03:54:12Z) - Semi-supervised learning objectives as log-likelihoods in a generative model of data curation [32.45282187405337]
We formulate SSL objectives as a log-likelihood in a generative model of data curation.
We give a proof-of-principle for Bayesian SSL on toy data.
arXiv Detail & Related papers (2020-08-13T13:50:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.