Structure-Consistent Weakly Supervised Salient Object Detection with
Local Saliency Coherence
- URL: http://arxiv.org/abs/2012.04404v2
- Date: Wed, 9 Dec 2020 03:22:46 GMT
- Title: Structure-Consistent Weakly Supervised Salient Object Detection with
Local Saliency Coherence
- Authors: Siyue Yu, Bingfeng Zhang, Jimin Xiao, Eng Gee Lim
- Abstract summary: We propose a one-round end-to-end training approach for weakly supervised salient object detection via scribble annotations.
Our method achieves a new state-of-the-art performance on six benchmarks.
- Score: 14.79639149658596
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Sparse labels have been attracting much attention in recent years. However,
the performance gap between weakly supervised and fully supervised salient
object detection methods is huge, and most previous weakly supervised works
adopt complex training methods with many bells and whistles. In this work, we
propose a one-round end-to-end training approach for weakly supervised salient
object detection via scribble annotations without pre/post-processing
operations or extra supervision data. Since scribble labels fail to offer
detailed salient regions, we propose a local coherence loss to propagate the
labels to unlabeled regions based on image features and pixel distance, so as
to predict integral salient regions with complete object structures. We design
a saliency structure consistency loss as a self-consistency mechanism to ensure
consistent saliency maps are predicted with different scales of the same image
as input, which could be viewed as a regularization technique to enhance the
model generalization ability. Additionally, we design an aggregation module
(AGGM) to better integrate high-level features, low-level features and global
context information for the decoder to aggregate various information. Extensive
experiments show that our method achieves a new state-of-the-art performance on
six benchmarks (e.g., for the ECSSD dataset: F_\beta = 0.8995, E_\xi = 0.9079,
and MAE = 0.0489), with an average gain of 4.60% for F-measure, 2.05% for
E-measure, and 1.88% for MAE over the previous best method on this task. Source
code is available at http://github.com/siyueyu/SCWSSOD.
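The two losses described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the paper's losses operate on network outputs in PyTorch, and the exact pairwise kernel, window size, and the structural-similarity term of the consistency loss are simplified away here. The function names and parameters (`radius`, `sigma_color`, `sigma_dist`) are illustrative choices.

```python
import numpy as np

def local_coherence_loss(saliency, image, radius=2,
                         sigma_color=0.1, sigma_dist=3.0):
    """CRF-style pairwise loss: nearby pixels with similar colors are
    encouraged to receive similar saliency scores, propagating sparse
    scribble labels into unlabeled regions.

    saliency: (H, W) array in [0, 1]; image: (H, W, 3) array in [0, 1].
    """
    H, W = saliency.shape
    loss, count = 0.0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # Overlapping views so every pixel is paired with its
            # neighbor at offset (dy, dx).
            y0, y1 = max(dy, 0), H + min(dy, 0)
            x0, x1 = max(dx, 0), W + min(dx, 0)
            s_a = saliency[y0:y1, x0:x1]
            s_b = saliency[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
            c_a = image[y0:y1, x0:x1]
            c_b = image[y0 - dy:y1 - dy, x0 - dx:x1 - dx]
            # Affinity decays with color difference and spatial distance.
            color_term = np.sum((c_a - c_b) ** 2, axis=-1) / (2 * sigma_color ** 2)
            dist_term = (dy * dy + dx * dx) / (2 * sigma_dist ** 2)
            w = np.exp(-color_term - dist_term)
            loss += np.sum(w * np.abs(s_a - s_b))
            count += s_a.size
    return loss / count

def scale_consistency_loss(pred_full, pred_small):
    """Consistency between a 2x-downsampled full-resolution prediction
    and the prediction made on the half-resolution input (L1 only; the
    paper additionally uses a structural-similarity term)."""
    down = 0.25 * (pred_full[0::2, 0::2] + pred_full[1::2, 0::2]
                   + pred_full[0::2, 1::2] + pred_full[1::2, 1::2])
    return np.mean(np.abs(down - pred_small))
```

A spatially uniform saliency map incurs zero coherence loss regardless of the image, while a saliency edge that does not coincide with a color edge is penalized, which is the intended label-propagation behavior.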
Related papers
- SOOD++: Leveraging Unlabeled Data to Boost Oriented Object Detection [59.868772767818975]
We propose a simple yet effective Semi-supervised Oriented Object Detection method termed SOOD++.
Specifically, we observe that objects in aerial images usually have arbitrary orientations and small scales, and tend to appear densely aggregated.
Extensive experiments conducted on various multi-oriented object datasets under various labeled settings demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2024-07-01T07:03:51Z) - Towards the Uncharted: Density-Descending Feature Perturbation for Semi-supervised Semantic Segmentation [51.66997548477913]
We propose a novel feature-level consistency learning framework named Density-Descending Feature Perturbation (DDFP)
Inspired by the low-density separation assumption in semi-supervised learning, our key insight is that feature density can shed a light on the most promising direction for the segmentation classifier to explore.
The proposed DDFP outperforms other feature-level perturbation designs and achieves state-of-the-art performance on both the Pascal VOC and Cityscapes datasets.
arXiv Detail & Related papers (2024-03-11T06:59:05Z) - Background Activation Suppression for Weakly Supervised Object
Localization and Semantic Segmentation [84.62067728093358]
Weakly supervised object localization and semantic segmentation aim to localize objects using only image-level labels.
A new paradigm has emerged that generates a foreground prediction map to achieve pixel-level localization.
This paper presents two surprising experimental observations about the object localization learning process.
arXiv Detail & Related papers (2023-09-22T15:44:10Z) - Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
arXiv Detail & Related papers (2023-07-07T04:03:48Z) - SOOD: Towards Semi-Supervised Oriented Object Detection [57.05141794402972]
This paper proposes a novel Semi-supervised Oriented Object Detection model, termed SOOD, built upon the mainstream pseudo-labeling framework.
Our experiments show that when trained with the two proposed losses, SOOD surpasses the state-of-the-art SSOD methods under various settings on the DOTA-v1.5 benchmark.
arXiv Detail & Related papers (2023-04-10T11:10:42Z) - A Visual Representation-guided Framework with Global Affinity for Weakly
Supervised Salient Object Detection [8.823804648745487]
We propose a framework guided by general visual representations with rich contextual semantic knowledge for scribble-based SOD.
These general visual representations are generated by self-supervised learning based on large-scale unlabeled datasets.
Our method achieves comparable or even superior performance to the state-of-the-art fully supervised models.
arXiv Detail & Related papers (2023-02-21T14:31:57Z) - OAMatcher: An Overlapping Areas-based Network for Accurate Local Feature
Matching [9.006654114778073]
We propose OAMatcher, a detector-free method that imitates human behavior to generate dense and accurate matches.
OAMatcher predicts overlapping areas to promote effective and clean global context aggregation.
Comprehensive experiments demonstrate that OAMatcher outperforms the state-of-the-art methods on several benchmarks.
arXiv Detail & Related papers (2023-02-12T03:32:45Z) - Single-Stage Open-world Instance Segmentation with Cross-task
Consistency Regularization [33.434628514542375]
Open-world instance segmentation aims to segment class-agnostic instances from images.
This paper proposes a single-stage framework to produce a mask for each instance directly.
We show that the proposed method can achieve impressive results in both fully-supervised and semi-supervised settings.
arXiv Detail & Related papers (2022-08-18T18:55:09Z) - Weakly-Supervised Salient Object Detection Using Point Supervision [17.88596733603456]
Current state-of-the-art saliency detection models rely heavily on large datasets of accurate pixel-wise annotations.
We propose a novel weakly-supervised salient object detection method using point supervision.
Our method outperforms previous state-of-the-art methods trained with stronger supervision.
arXiv Detail & Related papers (2022-03-22T12:16:05Z) - Weakly-Supervised Salient Object Detection via Scribble Annotations [54.40518383782725]
We propose a weakly-supervised salient object detection model to learn saliency from scribble labels.
We present a new metric, termed saliency structure measure, to measure the structure alignment of the predicted saliency maps.
Our method not only outperforms existing weakly-supervised/unsupervised methods, but also is on par with several fully-supervised state-of-the-art models.
arXiv Detail & Related papers (2020-03-17T12:59:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.