Inconsistency Masks: Removing the Uncertainty from Input-Pseudo-Label Pairs
- URL: http://arxiv.org/abs/2401.14387v2
- Date: Sat, 13 Apr 2024 12:26:44 GMT
- Title: Inconsistency Masks: Removing the Uncertainty from Input-Pseudo-Label Pairs
- Authors: Michael R. H. Vorndran, Bernhard F. Roeck
- Abstract summary: Inconsistency Masks (IM) is a novel approach that filters uncertainty in image-pseudo-label pairs to substantially enhance segmentation quality.
We achieve strong segmentation results with as little as 10% labeled data across four diverse datasets.
Three of our hybrid approaches even outperform models trained on the fully labeled dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Efficiently generating sufficient labeled data remains a major bottleneck in deep learning, particularly for image segmentation tasks where labeling requires significant time and effort. This study tackles this issue in a resource-constrained environment, devoid of extensive datasets or pre-existing models. We introduce Inconsistency Masks (IM), a novel approach that filters uncertainty in image-pseudo-label pairs to substantially enhance segmentation quality, surpassing traditional semi-supervised learning techniques. Employing IM, we achieve strong segmentation results with as little as 10% labeled data across four diverse datasets, and IM further benefits from integration with other techniques, indicating broad applicability. Notably, on the ISIC 2018 dataset, three of our hybrid approaches even outperform models trained on the fully labeled dataset. We also present a detailed comparative analysis of prevalent semi-supervised learning strategies, all under uniform starting conditions, to underline our approach's effectiveness and robustness. The full code is available at: https://github.com/MichaelVorndran/InconsistencyMasks
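The abstract's core idea, masking out pixels whose pseudo-labels are uncertain, can be sketched as follows. This is a minimal illustration under assumptions (two binary-segmentation models whose predictions are compared pixel-wise; the function name and threshold are illustrative), not the authors' implementation:

```python
import numpy as np

def inconsistency_mask(pred_a: np.ndarray, pred_b: np.ndarray, threshold: float = 0.5):
    """Build a pseudo-label and a validity mask from two model predictions.

    pred_a, pred_b: per-pixel foreground probabilities in [0, 1], same shape.
    Pixels where the binarized predictions disagree are marked invalid and
    would be excluded from the pseudo-label training loss.
    """
    bin_a = pred_a > threshold
    bin_b = pred_b > threshold
    consistent = bin_a == bin_b              # True where both models agree
    pseudo_label = bin_a.astype(np.float32)  # agreed-on label where consistent
    return pseudo_label, consistent

# Toy example: two 2x2 "predictions" from different models
a = np.array([[0.9, 0.2], [0.8, 0.6]])
b = np.array([[0.8, 0.1], [0.3, 0.7]])
label, mask = inconsistency_mask(a, b)
# mask is False at pixel (1, 0), where the two models disagree
```

In the paper's setting, only pixels where `mask` is True would contribute to the loss on the input-pseudo-label pair, which is what removes the uncertain regions from training.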
Related papers
- Continuous Contrastive Learning for Long-Tailed Semi-Supervised Recognition [50.61991746981703]
Current state-of-the-art LTSSL approaches rely on high-quality pseudo-labels for large-scale unlabeled data.
This paper introduces a novel probabilistic framework that unifies various recent proposals in long-tail learning.
We introduce a continuous contrastive learning method, CCL, extending our framework to unlabeled data using reliable and smoothed pseudo-labels.
arXiv Detail & Related papers (2024-10-08T15:06:10Z)
- Semi-supervised Medical Image Segmentation Method Based on Cross-pseudo Labeling Leveraging Strong and Weak Data Augmentation Strategies [2.8246591681333024]
This paper proposes a semi-supervised model, DFCPS, which innovatively incorporates the FixMatch concept.
Cross-pseudo-supervision is introduced, integrating consistency learning with self-training.
Our model consistently exhibits superior performance across all four subdivisions containing different proportions of unlabeled data.
arXiv Detail & Related papers (2024-02-17T13:07:44Z)
- Contrastive Multiple Instance Learning for Weakly Supervised Person ReID [50.04900262181093]
We introduce Contrastive Multiple Instance Learning (CMIL), a novel framework tailored for more effective weakly supervised ReID.
CMIL distinguishes itself by requiring only a single model and no pseudo labels while leveraging contrastive losses.
We release the WL-MUDD dataset, an extension of the MUDD dataset featuring naturally occurring weak labels from the real-world application at PerformancePhoto.co.
arXiv Detail & Related papers (2024-02-12T14:48:31Z)
- Empowering HWNs with Efficient Data Labeling: A Clustered Federated Semi-Supervised Learning Approach [2.046985601687158]
Clustered Federated Multitask Learning (CFL) has gained considerable attention as an effective strategy for overcoming statistical challenges.
We introduce a novel framework, Clustered Federated Semi-Supervised Learning (CFSL), designed for more realistic HWN scenarios.
Our results demonstrate that CFSL significantly improves upon key metrics such as testing accuracy, labeling accuracy, and labeling latency under varying proportions of labeled and unlabeled data.
arXiv Detail & Related papers (2024-01-19T11:47:49Z)
- Segment Together: A Versatile Paradigm for Semi-Supervised Medical Image Segmentation [17.69933345468061]
Data scarcity has become a major obstacle for training powerful deep-learning models for medical image segmentation.
We introduce a Versatile Semi-supervised framework to exploit more unlabeled data for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-11-20T11:35:52Z)
- MaskCon: Masked Contrastive Learning for Coarse-Labelled Dataset [19.45520684918576]
We propose a contrastive learning method called Masked Contrastive learning (MaskCon).
For each sample, our method generates soft labels with the aid of coarse labels, contrasting against other samples and another augmented view of the sample in question.
Our method achieves significant improvement over the current state-of-the-art in various datasets.
arXiv Detail & Related papers (2023-03-22T17:08:31Z)
- Is margin all you need? An extensive empirical study of active learning on tabular data [66.18464006872345]
We analyze the performance of a variety of active learning algorithms on 69 real-world datasets from the OpenML-CC18 benchmark.
Surprisingly, we find that the classical margin sampling technique matches or outperforms all others, including the current state of the art.
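Margin sampling, the classical technique that study finds so competitive, scores each unlabeled sample by the gap between its two highest predicted class probabilities and queries the smallest gaps first. A minimal sketch (function name illustrative, not tied to any specific benchmark code):

```python
import numpy as np

def margin_scores(probs: np.ndarray) -> np.ndarray:
    """Margin = top-1 minus top-2 class probability per sample.

    probs: (n_samples, n_classes) predicted class probabilities.
    A smaller margin means the classifier is less decided, so active
    learning queries labels for the smallest margins first.
    """
    ordered = np.sort(probs, axis=1)
    return ordered[:, -1] - ordered[:, -2]

probs = np.array([
    [0.50, 0.45, 0.05],   # ambiguous: tiny margin between top two classes
    [0.90, 0.05, 0.05],   # confident: large margin
])
order = np.argsort(margin_scores(probs))  # most uncertain sample first
```

Here the first (ambiguous) sample would be sent to the annotator before the confident one.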
arXiv Detail & Related papers (2022-10-07T21:18:24Z)
- Learning Semantic Segmentation from Multiple Datasets with Label Shifts [101.24334184653355]
This paper proposes UniSeg, an effective approach to automatically train models across multiple datasets with differing label spaces.
Specifically, we propose two losses that account for conflicting and co-occurring labels to achieve better generalization performance in unseen domains.
arXiv Detail & Related papers (2022-02-28T18:55:19Z)
- GuidedMix-Net: Learning to Improve Pseudo Masks Using Labeled Images as Reference [153.354332374204]
We propose a novel method for semi-supervised semantic segmentation named GuidedMix-Net.
We first introduce a feature alignment objective between labeled and unlabeled data to capture potentially similar image pairs.
MITrans is shown to be a powerful knowledge module for further progressively refining the features of unlabeled data.
Along with supervised learning for labeled data, the prediction of unlabeled data is jointly learned with the generated pseudo masks.
arXiv Detail & Related papers (2021-06-29T02:48:45Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To keep training on the enlarged dataset tractable, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.