GCoNet+: A Stronger Group Collaborative Co-Salient Object Detector
- URL: http://arxiv.org/abs/2205.15469v4
- Date: Mon, 10 Apr 2023 14:24:31 GMT
- Title: GCoNet+: A Stronger Group Collaborative Co-Salient Object Detector
- Authors: Peng Zheng, Huazhu Fu, Deng-Ping Fan, Qi Fan, Jie Qin, Yu-Wing Tai,
Chi-Keung Tang and Luc Van Gool
- Abstract summary: We present a novel end-to-end group collaborative learning network, termed GCoNet+.
GCoNet+ can effectively and efficiently identify co-salient objects in natural scenes.
- Score: 156.43671738038657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present a novel end-to-end group collaborative learning
network, termed GCoNet+, which can effectively and efficiently (250 fps)
identify co-salient objects in natural scenes. The proposed GCoNet+ achieves
the new state-of-the-art performance for co-salient object detection (CoSOD)
through mining consensus representations based on the following two essential
criteria: 1) intra-group compactness to better formulate the consistency among
co-salient objects by capturing their inherent shared attributes using our
novel group affinity module (GAM); 2) inter-group separability to effectively
suppress the influence of noisy objects on the output by introducing our new
group collaborating module (GCM) conditioning on the inconsistent consensus. To
further improve the accuracy, we design a series of simple yet effective
components as follows: i) a recurrent auxiliary classification module (RACM)
promoting model learning at the semantic level; ii) a confidence enhancement
module (CEM) assisting the model in improving the quality of the final
predictions; and iii) a group-based symmetric triplet (GST) loss guiding the
model to learn more discriminative features. Extensive experiments on three
challenging benchmarks, i.e., CoCA, CoSOD3k, and CoSal2015, demonstrate that
our GCoNet+ outperforms 12 existing cutting-edge models. Code has been
released at https://github.com/ZhengPeng7/GCoNet_plus.
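The abstract names the modules but gives no implementation detail here. For illustration only, below is a minimal sketch of the two core criteria: intra-group compactness via pairwise feature affinity (the idea behind the GAM) and a symmetric, group-based triplet objective for inter-group separability (the idea behind the GST loss). All names (ConsensusAffinity, group_triplet_loss), tensor shapes, and the margin value are assumptions made for this sketch, not the GCoNet+ code; the actual GAM, GCM, RACM, CEM, and GST loss are defined in the repository linked above.

```python
# Illustrative sketch only (PyTorch). Hypothetical module/function names and shapes;
# not the GCoNet+ implementation -- see the official repository for the real GAM/GCM/GST.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConsensusAffinity(nn.Module):
    """Mix features of a group of images through their pairwise affinities (GAM-like idea)."""

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, C, H, W) backbone features of the N images in one group (shapes assumed).
        n, c, h, w = feats.shape
        tokens = F.normalize(feats.flatten(2).permute(0, 2, 1).reshape(n * h * w, c), dim=-1)
        weights = (tokens @ tokens.t()).softmax(dim=-1)   # pairwise cosine affinity across the group
        consensus = weights @ tokens                       # affinity-weighted aggregation
        return consensus.reshape(n, h * w, c).permute(0, 2, 1).reshape(n, c, h, w)


def group_triplet_loss(emb_a: torch.Tensor, emb_b: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    """Symmetric triplet-style objective: each image embedding should sit closer to its own
    group's consensus anchor than to the other group's (GST-like idea; margin is an assumption)."""
    emb_a, emb_b = F.normalize(emb_a, dim=-1), F.normalize(emb_b, dim=-1)  # (N, D) image embeddings
    anchor_a = F.normalize(emb_a.mean(0, keepdim=True), dim=-1)            # group-A consensus anchor
    anchor_b = F.normalize(emb_b.mean(0, keepdim=True), dim=-1)            # group-B consensus anchor
    pos_a, neg_a = 1 - emb_a @ anchor_a.t(), 1 - emb_a @ anchor_b.t()      # cosine distances
    pos_b, neg_b = 1 - emb_b @ anchor_b.t(), 1 - emb_b @ anchor_a.t()
    return F.relu(pos_a - neg_a + margin).mean() + F.relu(pos_b - neg_b + margin).mean()


if __name__ == "__main__":
    group = torch.randn(4, 256, 16, 16)                   # a toy group of 4 images
    print(ConsensusAffinity()(group).shape)               # torch.Size([4, 256, 16, 16])
    print(group_triplet_loss(torch.randn(4, 128), torch.randn(4, 128)).item())
```

In GCoNet+ itself, inter-group separability is driven by the GCM conditioned on inconsistent consensus representations from different image groups; the two-group loss above only hints at that criterion.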
Related papers
- Precision matters: Precision-aware ensemble for weakly supervised semantic segmentation [14.931551206723041]
Weakly Supervised Semantic Segmentation (WSSS) employs weak supervision, such as image-level labels, to train the segmentation model.
We propose ORANDNet, an advanced ensemble approach tailored for WSSS.
arXiv Detail & Related papers (2024-06-28T03:58:02Z) - Memory-aided Contrastive Consensus Learning for Co-salient Object Detection [30.92094260367798]
Co-Salient Object Detection (CoSOD) aims at detecting common salient objects within a group of relevant source images.
We propose a novel Memory-aided Contrastive Consensus Learning framework, which is capable of effectively detecting co-salient objects in real time.
Experiments on all the latest CoSOD benchmarks demonstrate that our lite MCCL outperforms 13 cutting-edge models.
arXiv Detail & Related papers (2023-02-28T10:58:01Z) - Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z) - Generalised Co-Salient Object Detection [50.876864826216924]
We propose a new setting that relaxes an assumption in the conventional Co-Salient Object Detection (CoSOD) setting.
We call this new setting Generalised Co-Salient Object Detection (GCoSOD).
We propose a novel random sampling based Generalised CoSOD Training (GCT) strategy to distill the awareness of inter-image absence of co-salient objects into CoSOD models.
arXiv Detail & Related papers (2022-08-20T12:23:32Z) - A Unified Two-Stage Group Semantics Propagation and Contrastive Learning Network for Co-Saliency Detection [11.101111632948394]
We present a unified Two-stage grOup semantics PropagatIon and Contrastive learning NETwork (TopicNet) for CoSOD.
arXiv Detail & Related papers (2022-08-13T10:14:50Z) - Entity-Graph Enhanced Cross-Modal Pretraining for Instance-level Product Retrieval [152.3504607706575]
This research aims to conduct weakly-supervised multi-modal instance-level product retrieval for fine-grained product categories.
We first contribute the Product1M dataset and define two real-world instance-level retrieval tasks.
We then train a more effective cross-modal model that adaptively incorporates key concept information from the multi-modal data.
arXiv Detail & Related papers (2022-06-17T15:40:45Z) - Group Collaborative Learning for Co-Salient Object Detection [152.67721740487937]
We present a novel group collaborative learning framework (GCoNet) capable of detecting co-salient objects in real time (16 ms).
Extensive experiments on three challenging benchmarks, i.e., CoCA, CoSOD3k, and CoSal2015, demonstrate that our simple GCoNet outperforms 10 cutting-edge models and achieves new state-of-the-art performance.
arXiv Detail & Related papers (2021-03-15T13:16:03Z) - CoADNet: Collaborative Aggregation-and-Distribution Networks for Co-Salient Object Detection [91.91911418421086]
Co-Salient Object Detection (CoSOD) aims at discovering salient objects that repeatedly appear in a given query group containing two or more relevant images.
One challenging issue is how to effectively capture co-saliency cues by modeling and exploiting inter-image relationships.
We present an end-to-end collaborative aggregation-and-distribution network (CoADNet) to capture both salient and repetitive visual patterns from multiple images.
arXiv Detail & Related papers (2020-11-10T04:28:11Z)