Memory-aided Contrastive Consensus Learning for Co-salient Object
Detection
- URL: http://arxiv.org/abs/2302.14485v1
- Date: Tue, 28 Feb 2023 10:58:01 GMT
- Authors: Peng Zheng, Jie Qin, Shuo Wang, Tian-Zhu Xiang, Huan Xiong
- Abstract summary: Co-Salient Object Detection (CoSOD) aims at detecting common salient objects within a group of relevant source images.
We propose a novel Memory-aided Contrastive Consensus Learning (MCCL) framework, which is capable of effectively detecting co-salient objects in real time.
Experiments on all the latest CoSOD benchmarks demonstrate that our lite MCCL outperforms 13 cutting-edge models.
- Score: 30.92094260367798
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Co-Salient Object Detection (CoSOD) aims at detecting common salient objects
within a group of relevant source images. Most of the latest works employ the
attention mechanism for finding common objects. To achieve accurate CoSOD
results with high-quality maps and high efficiency, we propose a novel
Memory-aided Contrastive Consensus Learning (MCCL) framework, which is capable
of effectively detecting co-salient objects in real time (~110 fps). To learn
better group consensus, we propose the Group Consensus Aggregation Module
(GCAM) to abstract the common features of each image group; meanwhile, to make
the consensus representation more discriminative, we introduce the Memory-based
Contrastive Module (MCM), which saves and updates the consensus of images from
different groups in a queue of memories. Finally, to improve the quality and
integrity of the predicted maps, we develop an Adversarial Integrity Learning
(AIL) strategy to make the segmented regions more likely to be composed of complete
objects with less surrounding noise. Extensive experiments on all the latest
CoSOD benchmarks demonstrate that our lite MCCL outperforms 13 cutting-edge
models, achieving the new state of the art (~5.9% and ~6.2% improvement in
S-measure on CoSOD3k and CoSal2015, respectively). Our source codes, saliency
maps, and online demos are publicly available at
https://github.com/ZhengPeng7/MCCL.
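The abstract describes the Memory-based Contrastive Module (MCM) as saving and updating group-consensus representations in a queue of memories, which are then used to make the consensus more discriminative. The following is a minimal, hypothetical numpy sketch of that general idea (a FIFO memory queue of consensus embeddings serving as contrastive negatives); the class and function names are illustrative and do not reproduce the authors' actual implementation, which is available at the repository above.

```python
import numpy as np

class ConsensusMemoryQueue:
    """Fixed-size FIFO queue of L2-normalized group-consensus embeddings.

    Illustrative sketch of a memory-queue contrastive setup: consensus
    vectors from previously seen image groups are retained and serve as
    negatives when contrasting the current group's consensus.
    """

    def __init__(self, dim, size):
        self.size = size
        self.queue = np.zeros((0, dim))

    def enqueue(self, consensus):
        # Normalize the new consensus, append it, and keep only the
        # most recent `size` entries (oldest are dropped).
        consensus = consensus / np.linalg.norm(consensus)
        self.queue = np.vstack([self.queue, consensus])[-self.size:]

def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss: pull `anchor` toward its `positive` consensus
    and push it away from the queued `negatives` of other groups."""
    anchor = anchor / np.linalg.norm(anchor)
    positive = positive / np.linalg.norm(positive)
    # Similarity of anchor to the positive (index 0) and to all negatives.
    logits = np.concatenate([[anchor @ positive], negatives @ anchor]) / temperature
    logits -= logits.max()  # numerical stability before softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])
```

In such a scheme, each group's consensus would be enqueued after its loss is computed, so the queue always holds recent cross-group negatives at little memory cost.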
Related papers
- Collaborative Camouflaged Object Detection: A Large-Scale Dataset and
Benchmark [8.185431179739945]
We study a new task called collaborative camouflaged object detection (CoCOD).
CoCOD aims to simultaneously detect camouflaged objects with the same properties from a group of relevant images.
We construct the first large-scale dataset, termed CoCOD8K, which consists of 8,528 high-quality and elaborately selected images.
arXiv Detail & Related papers (2023-10-06T13:51:46Z)
- COMNet: Co-Occurrent Matching for Weakly Supervised Semantic Segmentation [13.244183864948848]
We propose a novel Co-Occurrent Matching Network (COMNet), which can promote the quality of the CAMs and enforce the network to pay attention to the entire parts of objects.
Specifically, we perform inter-matching on paired images that contain common classes to enhance the corresponded areas, and construct intra-matching on a single image to propagate the semantic features across the object regions.
The experiments on the Pascal VOC 2012 and MS-COCO datasets show that our network can effectively boost the performance of the baseline model and achieve new state-of-the-art performance.
arXiv Detail & Related papers (2023-09-29T03:55:24Z)
- De-coupling and De-positioning Dense Self-supervised Learning [65.56679416475943]
Dense Self-Supervised Learning (SSL) methods address the limitations of using image-level feature representations when handling images with multiple objects.
We show that they suffer from coupling and positional bias, which arise from the receptive field increasing with layer depth and zero-padding.
We demonstrate the benefits of our method on COCO and on a new challenging benchmark, OpenImage-MINI, for object classification, semantic segmentation, and object detection.
arXiv Detail & Related papers (2023-03-29T18:07:25Z)
- GCoNet+: A Stronger Group Collaborative Co-Salient Object Detector [156.43671738038657]
We present a novel end-to-end group collaborative learning network, termed GCoNet+.
GCoNet+ can effectively and efficiently identify co-salient objects in natural scenes.
arXiv Detail & Related papers (2022-05-30T23:49:19Z)
- Global-and-Local Collaborative Learning for Co-Salient Object Detection [162.62642867056385]
The goal of co-salient object detection (CoSOD) is to discover salient objects that commonly appear in a query group containing two or more relevant images.
We propose a global-and-local collaborative learning architecture, which includes a global correspondence modeling (GCM) and a local correspondence modeling (LCM).
The proposed GLNet is evaluated on three prevailing CoSOD benchmark datasets, demonstrating that our model trained on a small dataset (about 3k images) still outperforms eleven state-of-the-art competitors trained on some large datasets (about 8k-200k images).
arXiv Detail & Related papers (2022-04-19T14:32:41Z)
- A Unified Transformer Framework for Group-based Segmentation: Co-Segmentation, Co-Saliency Detection and Video Salient Object Detection [59.21990697929617]
Humans tend to mine objects by learning from a group of images or several frames of video since we live in a dynamic world.
Previous approaches design different networks for these similar tasks separately, making them difficult to apply to one another.
We introduce a unified framework to tackle these issues, termed UFO (Unified Object Framework for Co-Object Frameworks).
arXiv Detail & Related papers (2022-03-09T13:35:19Z)
- Re-thinking Co-Salient Object Detection [170.44471050548827]
Co-salient object detection (CoSOD) aims to detect the co-occurring salient objects in a group of images.
Existing CoSOD datasets often have a serious data bias, assuming that each group of images contains salient objects of similar visual appearances.
We introduce a new benchmark, called CoSOD3k in the wild, which requires a large amount of semantic context.
arXiv Detail & Related papers (2020-07-07T12:20:51Z)
- Gradient-Induced Co-Saliency Detection [81.54194063218216]
Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images.
In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection method.
arXiv Detail & Related papers (2020-04-28T08:40:55Z)