Hybrid Mamba for Few-Shot Segmentation
- URL: http://arxiv.org/abs/2409.19613v1
- Date: Sun, 29 Sep 2024 08:51:14 GMT
- Title: Hybrid Mamba for Few-Shot Segmentation
- Authors: Qianxiong Xu, Xuanyi Liu, Lanyun Zhu, Guosheng Lin, Cheng Long, Ziyue Li, Rui Zhao
- Abstract summary: Few-shot segmentation (FSS) methods use cross attention to fuse support foreground (FG) into query features, despite its quadratic complexity.
We aim to devise a cross (attention-like) Mamba to capture inter-sequence dependencies for FSS.
A simple idea is to scan on support features to selectively compress them into the hidden state, which is then used as the initial hidden state to sequentially scan query features.
- Score: 54.562050590453225
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many few-shot segmentation (FSS) methods use cross attention to fuse support foreground (FG) into query features, despite its quadratic complexity. A recent advance, Mamba, can also capture intra-sequence dependencies well, yet with only linear complexity. Hence, we aim to devise a cross (attention-like) Mamba to capture inter-sequence dependencies for FSS. A simple idea is to scan the support features to selectively compress them into the hidden state, which is then used as the initial hidden state to sequentially scan the query features. Nevertheless, this suffers from (1) the support forgetting issue: query features are also gradually compressed while being scanned, so the support information in the hidden state keeps shrinking and many query pixels cannot fuse sufficient support features; and (2) the intra-class gap issue: query FG is essentially more similar to itself than to support FG, i.e., query pixels may prefer to fuse their own features from the hidden state rather than the support ones, yet the success of FSS relies on the effective use of support information. To tackle these issues, we design a hybrid Mamba network (HMNet), including (1) a support recapped Mamba that periodically recaps the support features while scanning the query, so the hidden state always contains rich support information; and (2) a query intercepted Mamba that forbids mutual interactions among query pixels and encourages them to fuse more support features from the hidden state. Consequently, the support information is better utilized, leading to better performance. Extensive experiments on two public benchmarks show the superiority of HMNet. The code is available at https://github.com/Sam1224/HMNet.
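The abstract describes the scanning scheme only at a high level, so the following is a minimal, self-contained sketch of that idea in PyTorch: support FG tokens are compressed into a hidden state, the query is scanned from that state, the support is periodically "recapped", and a query-intercepted variant stops query tokens from writing to the state. The class and method names, the recap_every period, and the simplified diagonal SSM recurrence are illustrative assumptions, not the authors' HMNet implementation (see the linked repository for that).

```python
# Conceptual sketch only, NOT the official HMNet code. It uses a heavily
# simplified diagonal recurrence h_t = A*h_{t-1} + B(x_t)*x_t, y_t = C(x_t)·h_t;
# real Mamba uses input-dependent discretization and a hardware-aware scan.
import torch
import torch.nn as nn


class CrossMambaSketch(nn.Module):
    def __init__(self, dim, state_dim=16, recap_every=64):
        super().__init__()
        self.B = nn.Linear(dim, state_dim)   # input -> state write weights
        self.C = nn.Linear(dim, state_dim)   # input-dependent state readout
        self.A = nn.Parameter(torch.full((state_dim,), -0.5))  # log-decay of the state
        self.recap_every = recap_every       # hypothetical period for "recapping" support

    def scan(self, x, h, write=True):
        """Sequentially scan tokens x of shape (L, dim) from hidden state h (state_dim, dim)."""
        decay = torch.exp(self.A)[:, None]   # (state_dim, 1), values in (0, 1)
        ys = []
        for t in range(x.shape[0]):
            xt = x[t]
            if write:                        # the query-intercepted variant skips this write
                h = decay * h + self.B(xt)[:, None] * xt[None, :]
            ys.append(self.C(xt) @ h)        # read the information accumulated in h
        return torch.stack(ys), h

    def support_recapped_scan(self, support_fg, query):
        # Re-compress support FG into the hidden state before every query chunk,
        # so support information is never fully "forgotten" while scanning the query.
        h = torch.zeros(self.A.shape[0], query.shape[1])
        outs = []
        for start in range(0, query.shape[0], self.recap_every):
            _, h = self.scan(support_fg, h)                        # recap support
            y, h = self.scan(query[start:start + self.recap_every], h)
            outs.append(y)
        return torch.cat(outs)

    def query_intercepted_scan(self, support_fg, query):
        # Query tokens only read the support-filled state and never write to it,
        # so they cannot fuse their own features in place of support features.
        h = torch.zeros(self.A.shape[0], query.shape[1])
        _, h_support = self.scan(support_fg, h)
        y, _ = self.scan(query, h_support, write=False)
        return y


# Toy usage: 100 support-FG tokens and 900 query tokens with 32 channels.
m = CrossMambaSketch(dim=32)
out = m.support_recapped_scan(torch.randn(100, 32), torch.randn(900, 32))
print(out.shape)  # torch.Size([900, 32])
```

The sequential Python loop is only for readability; it illustrates why a state initialized from support decays as query tokens are written into it, which is the "support forgetting" issue the recap and intercept mechanisms address.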
Related papers
- Recurrent Feature Mining and Keypoint Mixup Padding for Category-Agnostic Pose Estimation [33.204232825380394]
Category-agnostic pose estimation aims to locate keypoints on query images according to a few annotated support images for arbitrary novel classes.
We propose a novel yet concise framework, which recurrently mines FGSA features from both support and query images.
arXiv Detail & Related papers (2025-03-27T04:09:13Z)
- Overcoming Support Dilution for Robust Few-shot Semantic Segmentation [97.87058176900179]
Few-shot Semantic Segmentation (FSS) is a challenging task that utilizes limited support images to segment associated unseen objects in query images.
Recent FSS methods are observed to perform worse when the number of shots is enlarged.
In this work, we study this challenging issue, called support dilution; our goal is to recognize, select, preserve, and enhance the high-contribution supports in the raw support pool.
arXiv Detail & Related papers (2025-01-23T10:26:48Z)
- Eliminating Feature Ambiguity for Few-Shot Segmentation [95.9916573435427]
Recent advancements in few-shot segmentation (FSS) have exploited pixel-by-pixel matching between query and support features.
This paper presents a novel plug-in termed ambiguity elimination network (AENet), which can be plugged into any existing cross attention-based FSS methods.
arXiv Detail & Related papers (2024-07-13T10:33:03Z)
- Fusion-Mamba for Cross-modality Object Detection [63.56296480951342]
Fusing information from different modalities effectively improves object detection performance.
We design a Fusion-Mamba block (FMB) to map cross-modal features into a hidden state space for interaction.
Our proposed approach outperforms state-of-the-art methods in mAP, by 5.9% on the M3FD and 4.9% on the FLIR-Aligned datasets.
arXiv Detail & Related papers (2024-04-14T05:28:46Z)
- Self-Calibrated Cross Attention Network for Few-Shot Segmentation [65.20559109791756]
We design a self-calibrated cross attention (SCCA) block for efficient patch-based attention.
SCCA groups the patches from the same query image and the aligned patches from the support image as K&V.
In this way, the query BG features are fused with matched BG features instead of support FG, and thus the aforementioned issues are mitigated.
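As a rough illustration of the grouping described in this summary, here is a minimal sketch (with assumed token shapes and function name, not the SCCA authors' code) of cross attention whose keys and values pool the query image's own patches together with aligned support patches, so query BG tokens can match their own BG instead of being forced onto support FG; SCCA's patch alignment, grouping, and self-calibration steps are omitted.

```python
# Minimal sketch of attention with K&V = (query patches + aligned support patches).
import torch
import torch.nn.functional as F

def query_plus_support_attention(q_tokens, s_tokens):
    """q_tokens: (Nq, d) patches of the query image; s_tokens: (Ns, d) aligned support patches."""
    kv = torch.cat([q_tokens, s_tokens], dim=0)              # keys/values from both sources
    scores = q_tokens @ kv.t() / q_tokens.shape[-1] ** 0.5   # scaled dot-product attention
    return F.softmax(scores, dim=-1) @ kv                    # each query patch attends to both pools
```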
arXiv Detail & Related papers (2023-08-18T04:41:50Z)
- HyRSM++: Hybrid Relation Guided Temporal Set Matching for Few-shot Action Recognition [51.2715005161475]
We propose a novel Hybrid Relation guided temporal Set Matching approach for few-shot action recognition.
The core idea of HyRSM++ is to integrate all videos within the task to learn discriminative representations.
We show that our method achieves state-of-the-art performance under various few-shot settings.
arXiv Detail & Related papers (2023-01-09T13:32:50Z)
- MSI: Maximize Support-Set Information for Few-Shot Segmentation [27.459485560344262]
We present a novel method (MSI) that maximizes the support-set information by exploiting two complementary sources of features to generate super correlation maps.
Experimental results on several publicly available FSS benchmarks show that our proposed method consistently improves performance by visible margins and leads to faster convergence.
arXiv Detail & Related papers (2022-12-09T05:38:07Z)
- Prototype as Query for Few Shot Semantic Segmentation [7.380266341356485]
Few-shot Semantic Segmentation (FSS) was proposed to segment unseen classes in a query image, referring to only a few examples named support images.
We propose a Transformer-based framework, termed ProtoFormer, to fully capture spatial details in query features.
arXiv Detail & Related papers (2022-11-27T08:41:50Z)
- Dynamic Prototype Convolution Network for Few-Shot Semantic Segmentation [33.93192093090601]
A key challenge for few-shot semantic segmentation (FSS) is how to tailor a desirable interaction between support and query features.
We propose a dynamic prototype convolution network (DPCN) to fully capture the intrinsic details for accurate FSS.
Our DPCN is also flexible and efficient under the k-shot FSS setting.
arXiv Detail & Related papers (2022-04-22T11:12:37Z)
- Few-Shot Segmentation via Cycle-Consistent Transformer [74.49307213431952]
We focus on utilizing pixel-wise relationships between support and target images to facilitate the few-shot semantic segmentation task.
We propose using a novel cycle-consistent attention mechanism to filter out possible harmful support features.
Our proposed CyCTR leads to remarkable improvement compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-06-04T07:57:48Z)