Pose Magic: Efficient and Temporally Consistent Human Pose Estimation with a Hybrid Mamba-GCN Network
- URL: http://arxiv.org/abs/2408.02922v3
- Date: Wed, 26 Feb 2025 03:17:49 GMT
- Title: Pose Magic: Efficient and Temporally Consistent Human Pose Estimation with a Hybrid Mamba-GCN Network
- Authors: Xinyi Zhang, Qiqi Bao, Qinpeng Cui, Wenming Yang, Qingmin Liao
- Abstract summary: We propose a new attention-free hybrid architecture named Hybrid Mamba-GCN (Pose Magic). By adaptively fusing representations from Mamba and GCN, Pose Magic demonstrates superior capability in learning the underlying 3D structure. Experiments show that Pose Magic achieves new SOTA results while saving $74.1\%$ FLOPs.
- Score: 40.123744788977525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current state-of-the-art (SOTA) methods in 3D Human Pose Estimation (HPE) are primarily based on Transformers. However, existing Transformer-based 3D HPE backbones often encounter a trade-off between accuracy and computational efficiency. To resolve the above dilemma, in this work, we leverage recent advances in state space models and utilize Mamba for high-quality and efficient long-range modeling. Nonetheless, Mamba still faces challenges in precisely exploiting local dependencies between joints. To address these issues, we propose a new attention-free hybrid spatiotemporal architecture named Hybrid Mamba-GCN (Pose Magic). This architecture introduces local enhancement with GCN by capturing relationships between neighboring joints, thus producing new representations to complement Mamba's outputs. By adaptively fusing representations from Mamba and GCN, Pose Magic demonstrates superior capability in learning the underlying 3D structure. To meet the requirements of real-time inference, we also provide a fully causal version. Extensive experiments show that Pose Magic achieves new SOTA results ($\downarrow 0.9 mm$) while saving $74.1\%$ FLOPs. In addition, Pose Magic exhibits optimal motion consistency and the ability to generalize to unseen sequence lengths.
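To make the architecture concrete, below is a minimal, self-contained PyTorch sketch of the two ideas the abstract describes: a GCN branch that captures relationships between neighboring joints, and an adaptive (gated) fusion of that branch with a causal long-range temporal branch. The causal depthwise convolution here merely stands in for the actual Mamba SSM layer, and the module names, shapes, gating design, and skeleton adjacency are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of a hybrid "long-range + GCN" block with adaptive fusion.
# The CausalTemporalMixer is a stand-in for a Mamba SSM layer; all names,
# shapes, and the adjacency below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CausalTemporalMixer(nn.Module):
    """Stand-in for a Mamba block: a causal depthwise conv over time."""
    def __init__(self, dim: int, kernel_size: int = 4):
        super().__init__()
        self.pad = kernel_size - 1          # left-pad only: frame t never sees t+1
        self.conv = nn.Conv1d(dim, dim, kernel_size, groups=dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                   # x: (B, T, C)
        y = F.pad(x.transpose(1, 2), (self.pad, 0))   # (B, C, T + pad)
        y = self.conv(y).transpose(1, 2)              # back to (B, T, C)
        return self.proj(F.silu(y))


class JointGCN(nn.Module):
    """One graph convolution over the skeleton adjacency (local enhancement)."""
    def __init__(self, dim: int, adj: torch.Tensor):
        super().__init__()
        adj = adj + torch.eye(adj.size(0))            # add self-loops
        self.register_buffer("norm_adj", adj / adj.sum(-1, keepdim=True))
        self.fc = nn.Linear(dim, dim)

    def forward(self, x):                   # x: (B, T, J, C)
        return torch.einsum("ij,btjc->btic", self.norm_adj, self.fc(x))


class HybridBlock(nn.Module):
    """Adaptive (gated) fusion of the long-range and GCN branches."""
    def __init__(self, num_joints: int, dim: int, adj: torch.Tensor):
        super().__init__()
        self.temporal = CausalTemporalMixer(num_joints * dim)
        self.gcn = JointGCN(dim, adj)
        self.gate = nn.Linear(2 * dim, dim)           # learned fusion weights

    def forward(self, x):                   # x: (B, T, J, C)
        B, T, J, C = x.shape
        m = self.temporal(x.reshape(B, T, J * C)).reshape(B, T, J, C)
        g = self.gcn(x)
        w = torch.sigmoid(self.gate(torch.cat([m, g], dim=-1)))
        return x + w * m + (1 - w) * g                # residual + adaptive fusion


if __name__ == "__main__":
    J, C = 17, 64                                     # Human3.6M-style skeleton
    adj = torch.zeros(J, J)
    adj[0, 1] = adj[1, 0] = 1.0                       # placeholder bone only
    block = HybridBlock(J, C, adj)
    out = block(torch.randn(2, 81, J, C))             # (batch, frames, joints, dim)
    print(out.shape)                                  # torch.Size([2, 81, 17, 64])
```

The sigmoid gate lets every joint, frame, and channel weight the long-range and local representations independently, which is one simple way to realize "adaptively fusing" the two branches; left-only temporal padding keeps the block fully causal, as the real-time variant requires.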
Related papers
- Efficient Spiking Point Mamba for Point Cloud Analysis [7.098060453549459]
Spiking Neural Networks (SNNs) provide an energy-efficient way to extract 3D-temporal features.
We propose Spiking Point Mamba (SPM), the first Mamba-based SNN in the 3D domain.
arXiv Detail & Related papers (2025-04-19T18:14:35Z)
- HGMamba: Enhancing 3D Human Pose Estimation with a HyperGCN-Mamba Network [0.0]
3D human pose estimation is a promising research area that leverages both estimated and ground-truth 2D human pose data for training.
Existing approaches aim to enhance the performance of estimated 2D poses, but struggle when applied to ground-truth 2D pose data.
We propose a novel Hyper-GCN and Shuffle Mamba block, which processes input data through two parallel streams.
arXiv Detail & Related papers (2025-04-09T07:28:19Z)
- HiSTF Mamba: Hierarchical Spatiotemporal Fusion with Multi-Granular Body-Spatial Modeling for High-Fidelity Text-to-Motion Generation [11.63340847947103]
We propose a novel HiSTF Mamba framework for text-to-motion generation.
We show that HiSTF Mamba achieves state-of-the-art performance across multiple metrics.
These findings validate the effectiveness of HiSTF Mamba in achieving high fidelity and strong semantic alignment.
arXiv Detail & Related papers (2025-03-10T04:01:48Z)
- MatIR: A Hybrid Mamba-Transformer Image Restoration Model [95.17418386046054]
We propose a Mamba-Transformer hybrid image restoration model called MatIR.
MatIR alternates (cross-cycles) Transformer and Mamba blocks to extract features.
In the Mamba module, we introduce the Image Inpainting State Space (IRSS) module, which traverses along four scan paths.
arXiv Detail & Related papers (2025-01-30T14:55:40Z)
- Mamba-SEUNet: Mamba UNet for Monaural Speech Enhancement [54.427965535613886]
Mamba, as a novel state-space model (SSM), has gained widespread application in natural language processing and computer vision.
In this work, we introduce Mamba-SEUNet, an innovative architecture that integrates Mamba with U-Net for SE tasks.
arXiv Detail & Related papers (2024-12-21T13:43:51Z)
- MobileMamba: Lightweight Multi-Receptive Visual Mamba Network [51.33486891724516]
Previous research on lightweight models has primarily focused on CNNs and Transformer-based designs.
We propose the MobileMamba framework, which balances efficiency and performance.
MobileMamba achieves up to 83.6% Top-1 accuracy, surpassing existing state-of-the-art methods.
arXiv Detail & Related papers (2024-11-24T18:01:05Z)
- MAP: Unleashing Hybrid Mamba-Transformer Vision Backbone's Potential with Masked Autoregressive Pretraining [23.37555991996508]
We propose Masked Autoregressive Pretraining (MAP) to pretrain a hybrid Mamba-Transformer vision backbone network.
We show that both the pure Mamba architecture and the hybrid Mamba-Transformer vision backbone network pretrained with MAP significantly outperform other pretraining strategies.
arXiv Detail & Related papers (2024-10-01T17:05:08Z)
- Bidirectional Gated Mamba for Sequential Recommendation [56.85338055215429]
Mamba, a recent advancement, has exhibited exceptional performance in time series prediction.
We introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation.
Our results indicate that SIGMA outperforms current models on five real-world datasets.
arXiv Detail & Related papers (2024-08-21T09:12:59Z)
- MambaMIM: Pre-training Mamba with State Space Token-interpolation [14.343466340528687]
We introduce a generative self-supervised learning method for Mamba (MambaMIM) based on Selective Structure State Space Sequence Token-interpolation (S6T).
MambaMIM can be used on any single or hybrid Mamba architecture to enhance its long-range representation capability.
arXiv Detail & Related papers (2024-08-15T10:35:26Z)
- Hamba: Single-view 3D Hand Reconstruction with Graph-guided Bi-Scanning Mamba [48.45301469664908]
3D hand reconstruction from a single RGB image is challenging due to articulated motion, self-occlusion, and interaction with objects.
Existing SOTA methods employ attention-based transformers to learn the 3D hand pose and shape.
We propose a novel graph-guided Mamba framework, named Hamba, which bridges graph learning and state space modeling.
arXiv Detail & Related papers (2024-07-12T19:04:58Z)
- MambaVision: A Hybrid Mamba-Transformer Vision Backbone [54.965143338206644]
We propose a novel hybrid Mamba-Transformer backbone, denoted as MambaVision, which is specifically tailored for vision applications.
Our core contribution includes redesigning the Mamba formulation to enhance its capability for efficient modeling of visual features.
We conduct a comprehensive ablation study on the feasibility of integrating Vision Transformers (ViT) with Mamba.
arXiv Detail & Related papers (2024-07-10T23:02:45Z)
- Simba: Mamba augmented U-ShiftGCN for Skeletal Action Recognition in Videos [3.8366697175402225]
Skeleton Action Recognition involves identifying human actions using skeletal joint coordinates and their interconnections.
Recently, a novel selective state space model, Mamba, has surfaced as a compelling alternative to the attention mechanism in Transformers.
We present the first SAR framework incorporating Mamba, which attains state-of-the-art performance across three well-known benchmark skeleton action recognition datasets.
arXiv Detail & Related papers (2024-04-11T11:07:57Z)
- Gamba: Marry Gaussian Splatting with Mamba for single view 3D reconstruction [153.52406455209538]
Gamba is an end-to-end model for 3D reconstruction from a single-view image.
It completes reconstruction within 0.05 seconds on a single NVIDIA A100 GPU.
arXiv Detail & Related papers (2024-03-27T17:40:14Z)
- SegMamba: Long-range Sequential Modeling Mamba For 3D Medical Image Segmentation [16.476244833079182]
We introduce SegMamba, a novel 3D medical image Segmentation Mamba model.
SegMamba excels at whole-volume feature modeling from a state space model standpoint; a generic sketch of this idea follows the list.
Experiments on the BraTS2023 dataset demonstrate the effectiveness and efficiency of our SegMamba.
arXiv Detail & Related papers (2024-01-24T16:17:23Z)
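For the SegMamba entry above, here is a generic sketch of whole-volume sequence modeling for 3D segmentation: the voxel grid is flattened along several axis orders, a shared sequence mixer processes each ordering, and the un-flattened results are averaged. The GRU is only a stand-in for a state-space (Mamba) layer, and the three-orientation scan is a common pattern in this literature rather than necessarily SegMamba's exact design.

```python
# Hedged sketch: multi-axis scanning of a 3D volume for sequence modeling.
# nn.GRU stands in for a Mamba/SSM layer; orderings and names are assumptions.
import torch
import torch.nn as nn


class MultiOrderScan3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.mixer = nn.GRU(channels, channels, batch_first=True)  # SSM stand-in
        # Three axis orderings of a (B, C, D, H, W) volume, channels last.
        self.orders = [(0, 2, 3, 4, 1), (0, 3, 4, 2, 1), (0, 4, 2, 3, 1)]

    def forward(self, x):                             # x: (B, C, D, H, W)
        B, C = x.shape[:2]
        outs = []
        for perm in self.orders:
            seq = x.permute(*perm).reshape(B, -1, C)  # flatten to (B, L, C)
            mixed, _ = self.mixer(seq)
            shape = [x.shape[i] for i in perm]        # un-flatten this ordering
            back = mixed.reshape(B, *shape[1:])
            inv = torch.argsort(torch.tensor(perm))   # invert the permutation
            outs.append(back.permute(*inv.tolist()))  # restore (B, C, D, H, W)
        return torch.stack(outs).mean(0)              # fuse the scan directions


if __name__ == "__main__":
    vol = torch.randn(1, 32, 8, 16, 16)               # tiny placeholder volume
    print(MultiOrderScan3D(32)(vol).shape)            # torch.Size([1, 32, 8, 16, 16])
```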