A Weakly Supervised Learning Framework for Salient Object Detection via Hybrid Labels
- URL: http://arxiv.org/abs/2209.02957v1
- Date: Wed, 7 Sep 2022 06:45:39 GMT
- Title: A Weakly Supervised Learning Framework for Salient Object Detection via Hybrid Labels
- Authors: Runmin Cong, Qi Qin, Chen Zhang, Qiuping Jiang, Shiqi Wang, Yao Zhao,
and Sam Kwong
- Abstract summary: This paper focuses on a new weakly-supervised salient object detection (SOD) task under hybrid labels.
To address the issues of label noise and quantity imbalance in this task, we design a new pipeline framework with three sophisticated training strategies.
Experiments on five SOD benchmarks show that our method achieves competitive performance against weakly-supervised/unsupervised methods.
- Score: 96.56299163691979
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fully-supervised salient object detection (SOD) methods have made great
progress, but such methods often rely on a large number of pixel-level
annotations, which are time-consuming and labour-intensive. In this paper, we
focus on a new weakly-supervised SOD task under hybrid labels, where the
supervision labels include a large number of coarse labels generated by the
traditional unsupervised method and a small number of real labels. To address
the issues of label noise and quantity imbalance in this task, we design a new
pipeline framework with three sophisticated training strategies. In terms of
model framework, we decouple the task into a label refinement sub-task and a
salient object detection sub-task, which cooperate with each other and are
trained alternately. Specifically, the R-Net is designed as a two-stream
encoder-decoder model equipped with Blender with Guidance and Aggregation
Mechanisms (BGA), aiming to rectify the coarse labels for more reliable
pseudo-labels, while the S-Net is a replaceable SOD network supervised by the
pseudo-labels generated by the current R-Net. Note that only the trained S-Net
is needed for testing. Moreover, in order to guarantee the
effectiveness and efficiency of network training, we design three training
strategies, including alternate iteration mechanism, group-wise incremental
mechanism, and credibility verification mechanism. Experiments on five SOD
benchmarks show that our method achieves competitive performance against
weakly-supervised/unsupervised methods both qualitatively and quantitatively.
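
The alternating R-Net/S-Net pipeline can be pictured with a short training-loop sketch. The snippet below is a minimal illustration only: `r_net` and `s_net` are hypothetical PyTorch modules (the actual R-Net with its BGA blender and the chosen SOD backbone are not reproduced), and a simple confidence-plus-agreement rule stands in for the credibility verification mechanism; the group-wise incremental schedule and the paper's loss design are likewise not shown.

```python
# Illustrative sketch only: hypothetical RNet/SNet modules, a binary saliency
# formulation, and a toy credibility check; not the authors' released code.
import torch
import torch.nn as nn


def credibility_mask(pseudo, coarse, thr=0.8):
    """Keep pixels where the refined map is confident and agrees with the
    coarse label -- a stand-in for the credibility verification mechanism."""
    conf = torch.maximum(pseudo, 1.0 - pseudo)           # per-pixel confidence
    agree = (pseudo.round() == coarse.round()).float()
    return (conf > thr).float() * agree


def train_round(r_net, s_net, loader_real, loader_coarse, opt_r, opt_s, device):
    bce = nn.BCEWithLogitsLoss(reduction="none")

    # Step 1 (alternate iteration): update R-Net on the few real labels so it
    # learns to rectify noisy coarse maps into cleaner pseudo-labels.
    r_net.train(); s_net.eval()
    for img, coarse, gt in loader_real:
        img, coarse, gt = img.to(device), coarse.to(device), gt.to(device)
        refined = r_net(img, coarse)                      # two-stream input
        loss_r = bce(refined, gt).mean()
        opt_r.zero_grad(); loss_r.backward(); opt_r.step()

    # Step 2: freeze R-Net, generate pseudo-labels for the coarse-only images,
    # and train S-Net only where those pseudo-labels pass the credibility check.
    r_net.eval(); s_net.train()
    for img, coarse in loader_coarse:
        img, coarse = img.to(device), coarse.to(device)
        with torch.no_grad():
            pseudo = torch.sigmoid(r_net(img, coarse))
        mask = credibility_mask(pseudo, coarse)
        pred = s_net(img)
        loss_s = (bce(pred, pseudo.round()) * mask).sum() / (mask.sum() + 1e-6)
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    # Only the trained S-Net is used at test time, as stated in the abstract.
```

In this sketch, each call to `train_round` corresponds to one alternation between the label refinement sub-task and the detection sub-task; repeating it over several rounds mirrors the alternate training described above.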
Related papers
- Unified Unsupervised Salient Object Detection via Knowledge Transfer [29.324193170890542]
Unsupervised salient object detection (USOD) has gained increasing attention due to its annotation-free nature.
In this paper, we propose a unified USOD framework for generic USOD tasks.
arXiv Detail & Related papers (2024-04-23T05:50:02Z)
- Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection [1.6249267147413524]
Semi-supervised object detection (SSOD) has made significant progress with the development of pseudo-label-based end-to-end methods.
Many methods face challenges due to class imbalance, which hinders the effectiveness of the pseudo-label generator.
In this paper, we examine the root causes of low-quality pseudo-labels and present novel learning mechanisms to improve the label generation quality.
arXiv Detail & Related papers (2023-06-04T06:01:53Z)
- Unsupervised Meta-Learning via Few-shot Pseudo-supervised Contrastive Learning [72.3506897990639]
We propose a simple yet effective unsupervised meta-learning framework, coined Pseudo-supervised Contrast (PsCo) for few-shot classification.
PsCo outperforms existing unsupervised meta-learning methods under various in-domain and cross-domain few-shot classification benchmarks.
arXiv Detail & Related papers (2023-03-02T06:10:13Z)
- Image Understands Point Cloud: Weakly Supervised 3D Semantic Segmentation via Association Learning [59.64695628433855]
We propose a novel cross-modality weakly supervised method for 3D segmentation, incorporating complementary information from unlabeled images.
Basically, we design a dual-branch network equipped with an active labeling strategy to maximize the value of a tiny fraction of labels.
Our method even outperforms the state-of-the-art fully supervised competitors with less than 1% actively selected annotations.
arXiv Detail & Related papers (2022-09-16T07:59:04Z)
- Collaborative Propagation on Multiple Instance Graphs for 3D Instance Segmentation with Single-point Supervision [63.429704654271475]
We propose a novel weakly supervised method RWSeg that only requires labeling one object with one point.
With these sparse weak labels, we introduce a unified framework with two branches to propagate semantic and instance information.
Specifically, we propose a Cross-graph Competing Random Walks (CRW) algorithm that encourages competition among different instance graphs.
arXiv Detail & Related papers (2022-08-10T02:14:39Z)
- CLS: Cross Labeling Supervision for Semi-Supervised Learning [9.929229055862491]
Cross Labeling Supervision (CLS) is a framework that generalizes the typical pseudo-labeling process.
CLS allows the creation of both pseudo and complementary labels to support both positive and negative learning.
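As a rough, hedged illustration of that idea (not the CLS authors' code), the sketch below turns a confident prediction into a pseudo label for positive learning and a randomly drawn non-predicted class into a complementary "not this class" label for negative learning; the threshold and sampling rule are assumptions.
```python
# Hedged sketch of pseudo + complementary labeling; the threshold and the
# complementary-label sampling rule are illustrative assumptions.
import torch
import torch.nn.functional as F


def cross_labeling_losses(logits, pos_thr=0.9):
    probs = F.softmax(logits, dim=1)                      # (B, C)

    # Positive learning: a confident argmax becomes the pseudo label.
    conf, pseudo = probs.max(dim=1)
    pos_mask = (conf > pos_thr).float()
    loss_pos = (F.cross_entropy(logits, pseudo, reduction="none") * pos_mask).mean()

    # Negative learning: a random class other than the pseudo label acts as a
    # complementary ("the sample is NOT this class") label, whose predicted
    # probability is pushed down.
    B, C = probs.shape
    comp = (pseudo + torch.randint(1, C, (B,), device=logits.device)) % C
    comp_prob = probs.gather(1, comp.unsqueeze(1)).squeeze(1)
    loss_neg = -torch.log(1.0 - comp_prob + 1e-6).mean()

    return loss_pos, loss_neg
```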
arXiv Detail & Related papers (2022-02-17T08:09:40Z)
- Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework achieves significant performance gains compared with existing USOD methods.
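As a generic illustration of how saliency cues can be formed from network activations without any human annotation (in the spirit of this line of work, not the authors' exact A2S method), the sketch below aggregates channel activations of an ImageNet-pretrained backbone into a coarse, normalized saliency cue; the backbone choice and normalization are assumptions.
```python
# Generic activation-to-saliency-cue sketch; backbone and normalization are
# assumptions, not the A2S authors' method.
import torch
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights


def activation_saliency(img_batch):
    """img_batch: (B, 3, H, W) normalized images -> (B, 1, H, W) cue in [0, 1]."""
    backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()
    feat_extractor = torch.nn.Sequential(*list(backbone.children())[:-2])  # up to conv5
    with torch.no_grad():
        feats = feat_extractor(img_batch)                 # (B, 2048, h, w)
    cue = feats.abs().sum(dim=1, keepdim=True)            # aggregate channel activations
    cue = F.interpolate(cue, size=img_batch.shape[-2:], mode="bilinear",
                        align_corners=False)
    # Min-max normalize per image so the cue can be thresholded into a pseudo mask.
    flat = cue.flatten(1)
    flat = (flat - flat.min(dim=1, keepdim=True).values) / (
        flat.max(dim=1, keepdim=True).values - flat.min(dim=1, keepdim=True).values + 1e-6)
    return flat.view(img_batch.shape[0], 1, *img_batch.shape[-2:])
```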
arXiv Detail & Related papers (2021-12-07T11:54:06Z)
- Deep Recurrent Semi-Supervised EEG Representation Learning for Emotion Recognition [14.67085109524245]
EEG-based emotion recognition often requires sufficient labeled training samples to build an effective computational model.
We propose a semi-supervised pipeline to jointly exploit both unlabeled and labeled data for learning EEG representations.
We test our framework on the large-scale SEED EEG dataset and compare our results with several other popular semi-supervised methods.
arXiv Detail & Related papers (2021-07-28T17:21:30Z)
- PseudoSeg: Designing Pseudo Labels for Semantic Segmentation [78.35515004654553]
We present a re-design of pseudo-labeling to generate structured pseudo labels for training with unlabeled or weakly-labeled data.
We demonstrate the effectiveness of the proposed pseudo-labeling strategy in both low-data and high-data regimes.
arXiv Detail & Related papers (2020-10-19T17:59:30Z)