Generalised Co-Salient Object Detection
- URL: http://arxiv.org/abs/2208.09668v3
- Date: Fri, 11 Aug 2023 04:40:10 GMT
- Title: Generalised Co-Salient Object Detection
- Authors: Jiawei Liu, Jing Zhang, Ruikai Cui, Kaihao Zhang, Weihao Li, Nick
Barnes
- Abstract summary: We propose a new setting that relaxes an assumption in the conventional Co-Salient Object Detection (CoSOD) setting.
We call this new setting Generalised Co-Salient Object Detection (GCoSOD).
We propose a novel random sampling based Generalised CoSOD Training (GCT) strategy to distill the awareness of inter-image absence of co-salient objects into CoSOD models.
- Score: 50.876864826216924
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a new setting that relaxes an assumption in the conventional
Co-Salient Object Detection (CoSOD) setting by allowing the presence of "noisy
images" which do not show the shared co-salient object. We call this new
setting Generalised Co-Salient Object Detection (GCoSOD). We propose a novel
random sampling based Generalised CoSOD Training (GCT) strategy to distill the
awareness of inter-image absence of co-salient objects into CoSOD models. It
employs Diverse Sampling Self-Supervised Learning (DS3L) which, in addition to
the provided supervised co-salient labels, introduces self-supervised null
labels for noisy images, indicating that no co-salient object is present.
Further, the random sampling process inherent in GCT enables the generation of
a high-quality uncertainty map highlighting potential false-positive
predictions at instance level. To evaluate the performance of CoSOD models
under the GCoSOD setting, we propose two new testing datasets, namely
CoCA-Common and CoCA-Zero, where a common salient object is partially present
in the former and completely absent in the latter. Extensive experiments
demonstrate that our proposed method significantly improves the performance of
CoSOD models under the GCoSOD setting as well as their calibration.
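To make the training idea concrete, below is a minimal, hypothetical PyTorch-style sketch of the two mechanisms described in the abstract: mixing randomly sampled noisy images with null (all-zero) self-supervised labels into a training group, and estimating an instance-level uncertainty map by re-predicting an image under several randomly sampled groups. All names (`build_training_group`, `uncertainty_map`, `model`, `noise_pool`) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal, hypothetical sketch of the random-sampling training idea (GCT/DS3L)
# and the sampling-based uncertainty map described in the abstract.
# Assumptions: `model` maps a group of images (N, 3, H, W) to per-image
# saliency masks (N, 1, H, W); `noise_pool` / `context_pool` hold images
# that do not contain the group's co-salient object. Names are illustrative.
import random
import torch


def build_training_group(group_images, group_masks, noise_pool, max_noisy=2):
    """Randomly mix noisy images (with null co-salient labels) into a group."""
    images, labels = list(group_images), list(group_masks)
    for _ in range(random.randint(0, max_noisy)):
        noisy = random.choice(noise_pool)            # image without the shared object
        images.append(noisy)
        labels.append(torch.zeros_like(labels[0]))   # self-supervised "null" mask
    order = torch.randperm(len(images)).tolist()     # shuffle so noisy images are not last
    images = torch.stack([images[i] for i in order])
    labels = torch.stack([labels[i] for i in order])
    return images, labels


@torch.no_grad()
def uncertainty_map(model, query_image, context_pool, num_samplings=8, group_size=4):
    """Predict the query under several randomly sampled groups; the variance of
    the predictions highlights potential instance-level false positives."""
    preds = []
    for _ in range(num_samplings):
        context = random.sample(context_pool, group_size - 1)
        group = torch.stack([query_image] + context)  # (group_size, 3, H, W)
        masks = model(group)                          # (group_size, 1, H, W)
        preds.append(masks[0])                        # prediction for the query image
    return torch.stack(preds).var(dim=0)              # (1, H, W) uncertainty map
```

In this sketch, high variance across samplings flags pixels that are predicted as co-salient only under some contexts, which is one plausible reading of the instance-level false-positive map mentioned in the abstract.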
Related papers
- Self-supervised co-salient object detection via feature correspondence at multiple scales [27.664016341526988]
This paper introduces a novel two-stage self-supervised approach for detecting co-occurring salient objects (CoSOD) in image groups without requiring segmentation annotations.
We train a self-supervised network that detects co-salient regions by computing local patch-level feature correspondences across images.
In experiments on three CoSOD benchmark datasets, our model outperforms the corresponding state-of-the-art models by a huge margin.
arXiv Detail & Related papers (2024-03-17T06:21:21Z) - Discriminative Consensus Mining with A Thousand Groups for More Accurate Co-Salient Object Detection [5.7834917194542035]
Co-Salient Object Detection (CoSOD) is a rapidly growing task, extended from Salient Object Detection (SOD) and Common Object Segmentation (Co-Segmentation).
There is still no standard and efficient training set in CoSOD, which makes the choice of training set inconsistent across recently proposed CoSOD methods.
In this thesis, a new CoSOD training set is introduced, named Co-Saliency of ImageNet (CoSINe) dataset.
arXiv Detail & Related papers (2024-01-15T06:02:24Z) - Weakly-supervised Contrastive Learning for Unsupervised Object Discovery [52.696041556640516]
Unsupervised object discovery is promising due to its ability to discover objects in a generic manner.
We design a semantic-guided self-supervised learning model to extract high-level semantic features from images.
We introduce Principal Component Analysis (PCA) to localize object regions.
arXiv Detail & Related papers (2023-07-07T04:03:48Z) - Towards Stable Co-saliency Detection and Object Co-segmentation [12.979401244603661]
We present a novel model for simultaneous stable co-saliency detection (CoSOD) and object co-segmentation (CoSEG).
We first propose a multi-path stable recurrent unit (MSRU), containing a dummy orders mechanism (DOM) and a recurrent unit (RU).
Our proposed MSRU not only helps the CoSOD (CoSEG) model capture robust inter-image relations, but also reduces order sensitivity, resulting in a more stable inference and training process.
arXiv Detail & Related papers (2022-09-25T03:58:49Z) - GCoNet+: A Stronger Group Collaborative Co-Salient Object Detector [156.43671738038657]
We present a novel end-to-end group collaborative learning network, termed GCoNet+.
GCoNet+ can effectively and efficiently identify co-salient objects in natural scenes.
arXiv Detail & Related papers (2022-05-30T23:49:19Z) - Salient Objects in Clutter [130.63976772770368]
This paper identifies and addresses a serious design bias of existing salient object detection (SOD) datasets.
This design bias has led to a saturation in performance for state-of-the-art SOD models when evaluated on existing datasets.
We propose a new high-quality dataset and update the previous saliency benchmark.
arXiv Detail & Related papers (2021-05-07T03:49:26Z) - Re-thinking Co-Salient Object Detection [170.44471050548827]
Co-salient object detection (CoSOD) aims to detect the co-occurring salient objects in a group of images.
Existing CoSOD datasets often have a serious data bias, assuming that each group of images contains salient objects of similar visual appearances.
We introduce a new benchmark, called CoSOD3k in the wild, which requires a large amount of semantic context.
arXiv Detail & Related papers (2020-07-07T12:20:51Z) - Gradient-Induced Co-Saliency Detection [81.54194063218216]
Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images.
In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection method.
arXiv Detail & Related papers (2020-04-28T08:40:55Z)