Synthesize Boundaries: A Boundary-aware Self-consistent Framework for
Weakly Supervised Salient Object Detection
- URL: http://arxiv.org/abs/2212.01764v1
- Date: Sun, 4 Dec 2022 08:22:45 GMT
- Title: Synthesize Boundaries: A Boundary-aware Self-consistent Framework for
Weakly Supervised Salient Object Detection
- Authors: Binwei Xu, Haoran Liang, Ronghua Liang, Peng Chen
- Abstract summary: We propose to learn precise boundaries from our designed synthetic images and labels.
The synthetic images create boundary information by inserting synthetic concave regions that simulate the real concave regions of salient objects.
We also propose a novel self-consistent framework that consists of a global integral branch (GIB) and a boundary-aware branch (BAB) to train a saliency detector.
- Score: 8.951168425295378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fully supervised salient object detection (SOD) has made considerable
progress based on expensive and time-consuming data with pixel-wise
annotations. Recently, to relieve the labeling burden while maintaining
performance, some scribble-based SOD methods have been proposed. However,
learning precise boundary details from scribble annotations that lack edge
information is still difficult. In this paper, we propose to learn precise
boundaries from our designed synthetic images and labels without introducing
any extra auxiliary data. The synthetic images create boundary information by
inserting synthetic concave regions that simulate the real concave regions of
salient objects. Furthermore, we propose a novel self-consistent framework that
consists of a global integral branch (GIB) and a boundary-aware branch (BAB) to
train a saliency detector. GIB, whose input is the original image, aims to
identify integral salient objects; BAB, whose input is the synthetic image, aims
to help predict accurate boundaries. These two branches are connected through a
self-consistent loss to guide the saliency detector to predict precise
boundaries while identifying salient objects. Experimental results on five
benchmarks demonstrate that our method outperforms the state-of-the-art weakly
supervised SOD methods and further narrows the gap with the fully supervised
methods.
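A minimal sketch of how the two-branch, self-consistent objective could be set up in PyTorch (the detector, the scribble-based partial cross-entropy, the synthetic labels and masks, and the L1 consistency term are illustrative assumptions; the paper's exact losses and weights are not reproduced here):

```python
import torch
import torch.nn.functional as F

def partial_bce(pred_logits, labels, mask):
    # Supervise only annotated pixels (e.g., scribble or synthetic-label pixels).
    loss = F.binary_cross_entropy_with_logits(pred_logits, labels, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

def training_loss(detector, image, synthetic_image,
                  scribble, scribble_mask,          # sparse scribble labels for GIB
                  synthetic_label, synthetic_mask,  # labels of inserted concave regions for BAB
                  lambda_sc=1.0):
    pred_gib = detector(image)            # global integral branch: original image
    pred_bab = detector(synthetic_image)  # boundary-aware branch: synthetic image

    loss_gib = partial_bce(pred_gib, scribble, scribble_mask)
    loss_bab = partial_bce(pred_bab, synthetic_label, synthetic_mask)

    # Self-consistent loss: the two branches should agree on the saliency map.
    loss_sc = F.l1_loss(torch.sigmoid(pred_gib), torch.sigmoid(pred_bab))

    return loss_gib + loss_bab + lambda_sc * loss_sc
```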
Related papers
- Semi-supervised Open-World Object Detection [74.95267079505145]
We introduce a more realistic formulation, named semi-supervised open-world detection (SS-OWOD).
We demonstrate that the performance of the state-of-the-art OWOD detector dramatically deteriorates in the proposed SS-OWOD setting.
Our experiments on 4 datasets including MS COCO, PASCAL, Objects365 and DOTA demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-25T07:12:51Z)
- SOOD: Towards Semi-Supervised Oriented Object Detection [57.05141794402972]
This paper proposes a novel Semi-supervised Oriented Object Detection model, termed SOOD, built upon the mainstream pseudo-labeling framework.
Our experiments show that when trained with the two proposed losses, SOOD surpasses the state-of-the-art SSOD methods under various settings on the DOTA-v1.5 benchmark.
arXiv Detail & Related papers (2023-04-10T11:10:42Z)
- Open-Set Semi-Supervised Object Detection [43.464223594166654]
Recent developments for Semi-Supervised Object Detection (SSOD) have shown the promise of leveraging unlabeled data to improve an object detector.
We consider a more practical yet challenging problem, Open-Set Semi-Supervised Object Detection (OSSOD).
Our proposed framework effectively addresses the semantic expansion issue and shows consistent improvements on many OSSOD benchmarks.
arXiv Detail & Related papers (2022-08-29T17:04:30Z)
- Unsupervised Domain Adaptive Salient Object Detection Through Uncertainty-Aware Pseudo-Label Learning [104.00026716576546]
We propose to learn saliency from synthetic but clean labels, which naturally have higher pixel-labeling quality without requiring manual annotation.
We show that our proposed method outperforms the existing state-of-the-art deep unsupervised SOD methods on several benchmark datasets.
arXiv Detail & Related papers (2022-02-26T16:03:55Z)
- Scribble-based Boundary-aware Network for Weakly Supervised Salient Object Detection in Remote Sensing Images [10.628932392896374]
We propose a novel weakly-supervised salient object detection framework to predict the saliency of remote sensing images from sparse scribble annotations.
Specifically, we design a boundary-aware module (BAM) to explore object boundary semantics, which is explicitly supervised by high-confidence object boundary (pseudo) labels.
Then, the boundary semantics are integrated with high-level features to guide the salient object detection under the supervision of scribble labels.
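For intuition only, a hypothetical PyTorch sketch of how such a boundary-aware block might be wired (module structure, names, and channel handling are assumptions, not the paper's BAM): boundary features produce a boundary logit that receives the pseudo boundary supervision, while the same features are fused with high-level features for the scribble-supervised saliency prediction.

```python
import torch
import torch.nn as nn

class BoundaryAwareFusion(nn.Module):
    """Hypothetical boundary-aware block: predicts a boundary map and fuses
    boundary features with high-level features before the saliency head."""
    def __init__(self, channels):
        super().__init__()
        self.boundary_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, high_level_feat, boundary_feat):
        # boundary_logit is supervised by the pseudo boundary labels.
        boundary_logit = self.boundary_head(boundary_feat)
        # Fused features feed the scribble-supervised saliency prediction.
        fused = self.fuse(torch.cat([high_level_feat, boundary_feat], dim=1))
        return fused, boundary_logit
```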
arXiv Detail & Related papers (2022-02-07T20:32:21Z)
- Boundary Guided Context Aggregation for Semantic Segmentation [23.709865471981313]
We exploit boundaries as significant guidance for context aggregation to promote the overall semantic understanding of an image.
We conduct extensive experiments on the Cityscapes and ADE20K databases, and comparable results are achieved with the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-27T17:04:38Z)
- Self-Supervised Object Detection via Generative Image Synthesis [106.65384648377349]
We present the first end-to-end analysis-by-synthesis framework with controllable GANs for the task of self-supervised object detection.
We use collections of real world images without bounding box annotations to learn to synthesize and detect objects.
Our work advances the field of self-supervised object detection by introducing a successful new paradigm of using controllable GAN-based image synthesis for it.
arXiv Detail & Related papers (2021-10-19T11:04:05Z)
- Saliency Detection via Global Context Enhanced Feature Fusion and Edge Weighted Loss [6.112591965159383]
We propose a context fusion decoder network (CFDN) and a near edge weighted loss (NEWLoss) function.
The CFDN creates an accurate saliency map by integrating global context information and thus suppressing the influence of unnecessary spatial information.
NEWLoss accelerates the learning of obscure boundaries, without additional modules, by generating weight maps on object boundaries.
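A hedged sketch of an edge-weighted loss in this spirit (the morphological boundary band below is an assumed construction for the weight map, not necessarily the paper's exact NEWLoss formulation):

```python
import torch
import torch.nn.functional as F

def boundary_weight_map(gt, kernel_size=5, boundary_weight=4.0):
    # gt: (B, 1, H, W) binary ground-truth mask as float.
    # The boundary band is the morphological gradient (dilation minus erosion),
    # computed with max pooling; pixels inside the band get an extra weight.
    pad = kernel_size // 2
    dilated = F.max_pool2d(gt, kernel_size, stride=1, padding=pad)
    eroded = -F.max_pool2d(-gt, kernel_size, stride=1, padding=pad)
    band = (dilated - eroded).clamp(0, 1)
    return 1.0 + boundary_weight * band

def edge_weighted_bce(pred_logits, gt):
    # Binary cross-entropy with a larger weight on pixels near object boundaries.
    weights = boundary_weight_map(gt)
    loss = F.binary_cross_entropy_with_logits(pred_logits, gt, reduction="none")
    return (loss * weights).sum() / weights.sum()
```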
arXiv Detail & Related papers (2021-10-13T08:04:55Z)
- Unsupervised Object Detection with LiDAR Clues [70.73881791310495]
We present the first practical method for unsupervised object detection with the aid of LiDAR clues.
In our approach, candidate object segments based on 3D point clouds are first generated.
Then, an iterative segment labeling process is conducted to assign segment labels and to train a segment labeling network.
The labeling process is carefully designed so as to mitigate the issue of long-tailed and open-ended distribution.
arXiv Detail & Related papers (2020-11-25T18:59:54Z)
- Weakly-Supervised Salient Object Detection via Scribble Annotations [54.40518383782725]
We propose a weakly-supervised salient object detection model to learn saliency from scribble labels.
We present a new metric, termed saliency structure measure, to measure the structure alignment of the predicted saliency maps.
Our method not only outperforms existing weakly-supervised/unsupervised methods, but is also on par with several fully-supervised state-of-the-art models.
arXiv Detail & Related papers (2020-03-17T12:59:50Z)