Concealed Object Segmentation with Hierarchical Coherence Modeling
- URL: http://arxiv.org/abs/2401.11767v1
- Date: Mon, 22 Jan 2024 09:02:52 GMT
- Title: Concealed Object Segmentation with Hierarchical Coherence Modeling
- Authors: Fengyang Xiao, Pan Zhang, Chunming He, Runze Hu, Yutao Liu
- Abstract summary: We propose a Hierarchical Coherence Modeling (HCM) segmenter for concealed object segmentation (COS).
HCM promotes feature coherence by leveraging the intra-stage coherence and cross-stage coherence modules.
We also introduce the reversible re-calibration decoder to detect previously undetected parts in low-confidence regions.
- Score: 9.185195569812667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concealed object segmentation (COS) is a challenging task that involves
localizing and segmenting those concealed objects that are visually blended
with their surrounding environments. Despite achieving remarkable success,
existing COS segmenters still struggle to achieve complete segmentation results
in extremely concealed scenarios. In this paper, we propose a Hierarchical
Coherence Modeling (HCM) segmenter for COS, aiming to address this incomplete
segmentation limitation. Specifically, HCM promotes feature coherence by
leveraging the intra-stage coherence and cross-stage coherence modules,
exploring feature correlations at both the single-stage and contextual levels.
Additionally, we introduce the reversible re-calibration decoder to detect
previously undetected parts in low-confidence regions, further enhancing
segmentation performance. Extensive experiments conducted on three
COS tasks, including camouflaged object detection, polyp image segmentation,
and transparent object detection, demonstrate the promising results achieved by
the proposed HCM segmenter.
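To make the described pipeline easier to picture, below is a minimal PyTorch-style sketch of how an intra-stage coherence module, a cross-stage coherence module, and a reversible re-calibration decoder might be wired together. Every module name, operator, and shape in it is an assumption for illustration, not the authors' released code.

```python
# Hypothetical sketch of an HCM-style pipeline; module names, fusion operators,
# and shapes are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IntraStageCoherence(nn.Module):
    """Reweight features within one encoder stage by their self-similarity (assumed form)."""

    def __init__(self, channels):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.proj(x).flatten(2)                                      # (B, C, HW)
        affinity = torch.softmax(q.transpose(1, 2) @ q / c ** 0.5, -1)   # (B, HW, HW)
        out = (x.flatten(2) @ affinity.transpose(1, 2)).view(b, c, h, w)
        return x + out


class CrossStageCoherence(nn.Module):
    """Fuse a deeper (coarser) stage with a shallower one (assumed: upsample + 1x1 conv)."""

    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, shallow, deep):
        deep = F.interpolate(deep, size=shallow.shape[-2:],
                             mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([shallow, deep], dim=1))


class ReversibleRecalibrationDecoder(nn.Module):
    """Revisit low-confidence regions of a coarse prediction (assumed reverse-attention style)."""

    def __init__(self, channels):
        super().__init__()
        self.refine = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat, coarse_logits):
        prob = torch.sigmoid(coarse_logits)
        uncertainty = 1.0 - torch.abs(2.0 * prob - 1.0)   # high where the coarse mask is unsure
        return coarse_logits + self.refine(feat * uncertainty)


# Hypothetical wiring over two backbone stages f3, f4 of shape (B, 64, H, W) and a coarse mask:
#   fused = CrossStageCoherence(64)(IntraStageCoherence(64)(f3), f4)
#   logits = ReversibleRecalibrationDecoder(64)(fused, coarse_logits)
```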
Related papers
- A Bottom-Up Approach to Class-Agnostic Image Segmentation [4.086366531569003]
We present a novel bottom-up formulation for addressing the class-agnostic segmentation problem.
We supervise our network directly on the projective sphere of its feature space.
Our bottom-up formulation exhibits exceptional generalization capability, even when trained on datasets designed for class-based segmentation.
arXiv Detail & Related papers (2024-09-20T17:56:02Z)
- EAGLE: Eigen Aggregation Learning for Object-Centric Unsupervised Semantic Segmentation [5.476136494434766]
We introduce EiCue, a technique providing semantic and structural cues through an eigenbasis derived from a semantic similarity matrix.
We guide our model to learn object-level representations with intra- and inter-image object-feature consistency.
Experiments on COCO-Stuff, Cityscapes, and Potsdam-3 datasets demonstrate the state-of-the-art USS results.
arXiv Detail & Related papers (2024-03-03T11:24:16Z)
- Spatial Structure Constraints for Weakly Supervised Semantic Segmentation [100.0316479167605]
A class activation map (CAM) can only locate the most discriminative part of objects.
We propose spatial structure constraints (SSC) for weakly supervised semantic segmentation to alleviate the unwanted over-activation caused by attention expansion.
Our approach achieves 72.7% and 47.0% mIoU on the PASCAL VOC 2012 and COCO datasets, respectively.
arXiv Detail & Related papers (2024-01-20T05:25:25Z)
- COMNet: Co-Occurrent Matching for Weakly Supervised Semantic Segmentation [13.244183864948848]
We propose a novel Co-Occurrent Matching Network (COMNet), which improves the quality of the CAMs and encourages the network to attend to entire objects.
Specifically, we perform inter-matching on paired images that contain common classes to enhance the corresponding areas, and intra-matching on a single image to propagate semantic features across object regions.
The experiments on the Pascal VOC 2012 and MS-COCO datasets show that our network can effectively boost the performance of the baseline model and achieve new state-of-the-art performance.
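As a rough illustration of such matching (not the COMNet implementation), the following sketch builds a cosine-similarity affinity between two flattened feature maps and uses it to enhance corresponding regions; the tensor shapes and the residual fusion are assumptions.

```python
# Illustrative inter-matching between two images that share a class; the plain
# cosine-similarity affinity and residual fusion below are assumptions for
# exposition, not the COMNet formulation.
import torch
import torch.nn.functional as F


def inter_match(feat_a, feat_b):
    """feat_a, feat_b: (C, H, W) feature maps of two images with a common class."""
    c, h, w = feat_a.shape
    a = F.normalize(feat_a.flatten(1), dim=0)            # (C, HW_a), unit-norm per location
    b = F.normalize(feat_b.flatten(1), dim=0)            # (C, HW_b)
    affinity = torch.softmax(a.t() @ b, dim=-1)          # (HW_a, HW_b): where A's pixels match in B
    matched = feat_b.flatten(1) @ affinity.t()           # (C, HW_a): B's features gathered for A
    return (feat_a.flatten(1) + matched).view(c, h, w)   # enhance A's corresponding regions


# Intra-matching on a single image would follow the same pattern: inter_match(feat, feat).
```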
arXiv Detail & Related papers (2023-09-29T03:55:24Z)
- SegGPT Meets Co-Saliency Scene [88.53031109255595]
We first design a framework that adapts SegGPT to the problem of co-salient object detection.
We then evaluate the performance of SegGPT on co-salient object detection across three available datasets.
We find that co-saliency scenes challenge SegGPT due to context discrepancy within a group of co-saliency images.
arXiv Detail & Related papers (2023-05-08T00:19:05Z)
- Framework-agnostic Semantically-aware Global Reasoning for Segmentation [29.69187816377079]
We propose a component that learns to project image features into latent representations and reason between them.
Our design encourages the latent regions to represent semantic concepts by ensuring that the activated regions are spatially disjoint.
Our latent tokens are semantically interpretable and diverse and provide a rich set of features that can be transferred to downstream tasks.
arXiv Detail & Related papers (2022-12-06T21:42:05Z)
- Saliency Guided Inter- and Intra-Class Relation Constraints for Weakly Supervised Semantic Segmentation [66.87777732230884]
We propose a saliency guided Inter- and Intra-Class Relation Constrained (I$^2$CRC) framework to assist the expansion of the activated object regions.
We also introduce an object-guided label refinement module that makes full use of both the segmentation prediction and the initial labels to obtain superior pseudo-labels.
arXiv Detail & Related papers (2022-06-20T03:40:56Z)
- Deep Spectral Methods: A Surprisingly Strong Baseline for Unsupervised Semantic Segmentation and Localization [98.46318529630109]
We take inspiration from traditional spectral segmentation methods by reframing image decomposition as a graph partitioning problem.
We find that the eigenvectors of the resulting graph Laplacian already decompose an image into meaningful segments, and can be readily used to localize objects in a scene.
By clustering the features associated with these segments across a dataset, we can obtain well-delineated, nameable regions.
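A compact sketch of this general spectral recipe, assuming a non-negative cosine-affinity graph over per-patch deep features and off-the-shelf SciPy/scikit-learn routines (a simplification, not the paper's exact method):

```python
# Toy spectral decomposition of per-patch features into segments; the affinity
# choice and k-means step are simplifying assumptions, not the paper's exact pipeline.
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans


def spectral_segments(features, n_segments=4):
    """features: (N, D) array of deep features for the N patches of one image."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    affinity = np.clip(f @ f.T, 0.0, None)        # non-negative cosine affinity graph
    laplacian = np.diag(affinity.sum(axis=1)) - affinity
    # Skip the (near-)constant first eigenvector; the next few carry the coarse partition.
    _, vecs = eigh(laplacian, subset_by_index=[1, n_segments])
    return KMeans(n_clusters=n_segments, n_init=10).fit_predict(vecs)
```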
arXiv Detail & Related papers (2022-05-16T17:47:44Z)
- Beyond the Prototype: Divide-and-conquer Proxies for Few-shot Segmentation [63.910211095033596]
Few-shot segmentation aims to segment unseen-class objects given only a handful of densely labeled samples.
We propose a simple yet versatile framework in the spirit of divide-and-conquer.
Our proposed approach, named divide-and-conquer proxies (DCP), allows for the development of appropriate and reliable information.
arXiv Detail & Related papers (2022-04-21T06:21:14Z)
- Gradient-Induced Co-Saliency Detection [81.54194063218216]
Co-saliency detection (Co-SOD) aims to segment the common salient foreground in a group of relevant images.
In this paper, inspired by human behavior, we propose a gradient-induced co-saliency detection method.
arXiv Detail & Related papers (2020-04-28T08:40:55Z)