Semi-supervised Salient Object Detection with Effective Confidence
Estimation
- URL: http://arxiv.org/abs/2112.14019v2
- Date: Sun, 26 Nov 2023 00:40:15 GMT
- Title: Semi-supervised Salient Object Detection with Effective Confidence
Estimation
- Authors: Jiawei Liu, Jing Zhang, Nick Barnes
- Abstract summary: We study semi-supervised salient object detection with access to a small number of labeled samples and a large number of unlabeled samples.
We model the nature of human saliency labels using the latent variable of the Conditional Energy-based Model.
With only 1/16 labeled samples, our model achieves competitive performance compared with state-of-the-art fully-supervised models.
- Score: 35.0990691497574
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The success of existing salient object detection models relies on a large
pixel-wise labeled training dataset, which is time-consuming and expensive to
obtain. We study semi-supervised salient object detection, with access to a
small number of labeled samples and a large number of unlabeled samples.
Specifically, we present a pseudo label based learning framework with a
Conditional Energy-based Model. We model the stochastic nature of human
saliency labels using the stochastic latent variable of the Conditional
Energy-based Model. This further enables generation of a high-quality pixel-wise
uncertainty map, highlighting the reliability of the corresponding pseudo label
generated for each unlabeled sample. This minimises the contribution of
low-certainty pseudo labels in optimising the model, preventing error
propagation. Experimental results show that the proposed strategy can
effectively explore the contribution of unlabeled data. With only 1/16 labeled
samples, our model achieves competitive performance compared with
state-of-the-art fully-supervised models.
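The key mechanism is a per-pixel confidence weighting of the pseudo-label loss. A minimal sketch of such a weighting in PyTorch (function and tensor names are illustrative assumptions, not the authors' released code):
```python
import torch.nn.functional as F

def weighted_pseudo_label_loss(pred_logits, pseudo_label, uncertainty):
    """Down-weight unreliable pixels when learning from pseudo labels.

    pred_logits : (B, 1, H, W) raw saliency logits from the model
    pseudo_label: (B, 1, H, W) pseudo saliency map in [0, 1]
    uncertainty : (B, 1, H, W) pixel-wise uncertainty in [0, 1], e.g.
                  derived from samples of the model's latent variable
    """
    # Turn uncertainty into a per-pixel confidence weight.
    confidence = 1.0 - uncertainty
    # Pixel-wise binary cross-entropy against the pseudo label ...
    loss = F.binary_cross_entropy_with_logits(
        pred_logits, pseudo_label, reduction="none")
    # ... scaled so that low-certainty pixels barely move the model.
    return (confidence * loss).sum() / confidence.sum().clamp(min=1e-6)
```
In a full pipeline this term would be added to the usual supervised loss on the small labeled subset.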
Related papers
- TrajSSL: Trajectory-Enhanced Semi-Supervised 3D Object Detection [59.498894868956306]
Pseudo-labeling approaches to semi-supervised learning adopt a teacher-student framework.
We leverage pre-trained motion-forecasting models to generate object trajectories on pseudo-labeled data.
Our approach improves pseudo-label quality in two distinct ways.
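The teacher in such frameworks is commonly maintained as an exponential moving average (EMA) of the student; a generic sketch of that update (TrajSSL's trajectory fusion is not shown):
```python
import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Keep the teacher as an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)
```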
arXiv Detail & Related papers (2024-09-17T05:35:00Z)
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the high cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, because an instance can contain multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
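A simple way to realise this in the multi-label setting is per-class thresholding with an ignore mask for ambiguous entries; a hedged sketch (the paper's metric-adaptive thresholds are chosen more carefully; names here are illustrative):
```python
import torch

def multilabel_pseudo_labels(probs, class_thresholds):
    """Per-class thresholding for multi-label pseudo-labels.

    probs            : (B, C) sigmoid outputs on unlabeled data
    class_thresholds : (C,) one threshold per class
    Returns (B, C) {0, 1} pseudo labels and a mask of confident entries.
    """
    pos = probs >= class_thresholds           # confident positives
    neg = probs <= (1.0 - class_thresholds)   # confident negatives
    pseudo = pos.float()
    mask = (pos | neg).float()                # ambiguous entries are ignored
    return pseudo, mask
```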
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- Sparse Generation: Making Pseudo Labels Sparse for weakly supervision with points [2.2241974678268903]
We consider the generation of weakly supervised pseudo labels as the result of the model's sparse output.
We propose a method called Sparse Generation to make pseudo labels sparse.
arXiv Detail & Related papers (2024-03-28T10:42:49Z)
- Perceptual Quality-based Model Training under Annotator Label Uncertainty [15.015925663078377]
Annotators exhibit disagreement during data labeling, which can be termed annotator label uncertainty.
We introduce a novel perceptual quality-based model training framework to objectively generate multiple labels for model training.
arXiv Detail & Related papers (2024-03-15T10:52:18Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss on estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample makes the cross-entropy loss vulnerable to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
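A generic sketch of prototype-centric loss computation (illustrative only; DPL's actual decoupled optimisation is not reproduced here):
```python
import torch.nn.functional as F

def prototype_loss(features, pseudo_labels, num_classes, tau=0.1):
    """Pull each feature toward the prototype of its pseudo class.

    features      : (N, D) L2-normalised embeddings
    pseudo_labels : (N,)   hard pseudo labels in [0, num_classes)
    """
    one_hot = F.one_hot(pseudo_labels, num_classes).float()        # (N, C)
    # Class prototypes as (normalised) mean features per pseudo class.
    prototypes = one_hot.t() @ features                            # (C, D)
    prototypes = prototypes / one_hot.sum(0).clamp(min=1).unsqueeze(1)
    prototypes = F.normalize(prototypes, dim=1)
    logits = features @ prototypes.t() / tau                       # (N, C)
    return F.cross_entropy(logits, pseudo_labels)
```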
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- SoftMatch: Addressing the Quantity-Quality Trade-off in Semi-supervised Learning [101.86916775218403]
This paper revisits the popular pseudo-labeling methods via a unified sample weighting formulation.
We propose SoftMatch to overcome the trade-off by maintaining both high quantity and high quality of pseudo-labels during training.
In experiments, SoftMatch shows substantial improvements across a wide variety of benchmarks, including image, text, and imbalanced classification.
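The weighting can be sketched as a truncated Gaussian over per-sample confidence, so low-confidence pseudo-labels are softly down-weighted rather than discarded (a simplified reading of SoftMatch; in the paper, mu and sigma are running estimates of the confidence distribution):
```python
import torch

def softmatch_weights(probs, mu, sigma):
    """Soft sample weights from per-sample confidence.

    probs : (B, C) softmax outputs on unlabeled data
    mu    : running mean of max confidence
    sigma : running std of max confidence
    """
    conf = probs.max(dim=1).values
    # Full weight above the mean, Gaussian-decayed weight below it.
    gauss = torch.exp(-((conf - mu) ** 2) / (2 * sigma ** 2))
    return torch.where(conf >= mu, torch.ones_like(conf), gauss)
```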
arXiv Detail & Related papers (2023-01-26T03:53:25Z)
- Rethinking Precision of Pseudo Label: Test-Time Adaptation via Complementary Learning [10.396596055773012]
We propose a novel complementary learning approach to enhance test-time adaptation.
In test-time adaptation tasks, information from the source domain is typically unavailable.
We highlight that the risk function of complementary labels agrees with the vanilla loss formulation.
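A complementary label marks a class the sample does not belong to, and its loss penalises probability mass on that class; a minimal sketch (illustrative, not the paper's exact formulation):
```python
import torch

def complementary_loss(probs, comp_labels):
    """Penalise probability assigned to complementary (wrong) classes.

    probs       : (B, C) softmax outputs
    comp_labels : (B,)   one complementary class index per sample
    """
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log((1.0 - p_comp).clamp(min=1e-6)).mean()
```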
arXiv Detail & Related papers (2023-01-15T03:36:33Z)
- Seq-UPS: Sequential Uncertainty-aware Pseudo-label Selection for Semi-Supervised Text Recognition [21.583569162994277]
One of the most popular semi-supervised learning (SSL) approaches is pseudo-labeling (PL).
PL methods are severely degraded by noise and are prone to over-fitting to noisy labels.
We propose a pseudo-label generation and an uncertainty-based data selection framework for text recognition.
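Uncertainty-aware selection can be sketched with Monte Carlo dropout: keep only pseudo-labels that are both confident and stable across stochastic forward passes (thresholds and names here are assumptions, not the paper's recipe):
```python
import torch

@torch.no_grad()
def select_pseudo_labels(model, x, n_samples=10, conf_th=0.9, unc_th=0.05):
    """Return pseudo labels for samples that pass both filters."""
    model.train()  # keep dropout active for Monte Carlo sampling
    probs = torch.stack(
        [model(x).softmax(dim=1) for _ in range(n_samples)])  # (S, B, C)
    mean_p = probs.mean(dim=0)
    conf, pseudo = mean_p.max(dim=1)
    # Uncertainty: std of the predicted class's probability across passes.
    unc = probs.std(dim=0).gather(1, pseudo.unsqueeze(1)).squeeze(1)
    keep = (conf >= conf_th) & (unc <= unc_th)
    return pseudo[keep], keep
```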
arXiv Detail & Related papers (2022-08-31T02:21:02Z)
- Confidence Adaptive Regularization for Deep Learning with Noisy Labels [2.0349696181833337]
Recent studies on the memorization effects of deep neural networks on noisy labels show that the networks first fit the correctly-labeled training samples before memorizing the mislabeled samples.
Motivated by this early-learning phenomenon, we propose a novel method to prevent memorization of the mislabeled samples.
We provide a theoretical analysis and conduct experiments on synthetic and real-world datasets, demonstrating that our approach achieves results comparable to state-of-the-art methods.
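One generic way to act on this observation (a sketch in the spirit of early-learning regularisation, not the paper's exact method) is to mix the possibly-noisy label with a running average of the model's own predictions:
```python
import torch.nn.functional as F

def early_learning_loss(logits, noisy_labels, ema_probs, alpha=0.7):
    """Soft cross-entropy against a label/self-prediction mixture.

    logits       : (B, C) current model outputs
    noisy_labels : (B,)   given, possibly mislabeled, targets
    ema_probs    : (B, C) running average of past softmax predictions
    """
    one_hot = F.one_hot(noisy_labels, logits.size(1)).float()
    # Early predictions tend to be correct, so they temper noisy labels.
    target = alpha * one_hot + (1.0 - alpha) * ema_probs
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```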
arXiv Detail & Related papers (2021-08-18T15:51:25Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.