FastPoseGait: A Toolbox and Benchmark for Efficient Pose-based Gait
Recognition
- URL: http://arxiv.org/abs/2309.00794v1
- Date: Sat, 2 Sep 2023 02:05:58 GMT
- Title: FastPoseGait: A Toolbox and Benchmark for Efficient Pose-based Gait
Recognition
- Authors: Shibei Meng, Yang Fu, Saihui Hou, Chunshui Cao, Xu Liu, Yongzhen Huang
- Abstract summary: FastPoseGait is an open-source toolbox for pose-based gait recognition based on PyTorch.
Our toolbox supports a set of cutting-edge pose-based gait recognition algorithms and a variety of related benchmarks.
- Score: 11.985433662623036
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present FastPoseGait, an open-source toolbox for pose-based gait
recognition based on PyTorch. Our toolbox supports a set of cutting-edge
pose-based gait recognition algorithms and a variety of related benchmarks.
Unlike other pose-based projects that focus on a single algorithm, FastPoseGait
integrates several state-of-the-art (SOTA) algorithms under a unified
framework, incorporating both the latest advancements and best practices to
ease the comparison of effectiveness and efficiency. In addition, to promote
future research on pose-based gait recognition, we provide numerous pre-trained
models and detailed benchmark results, which offer valuable insights and serve
as a reference for further investigations. By leveraging the highly modular
structure and diverse methods offered by FastPoseGait, researchers can quickly
delve into pose-based gait recognition and promote development in the field. In
this paper, we outline the various features of this toolbox, in the hope that
our toolbox and benchmarks can further foster collaboration, facilitate
reproducibility, and encourage the development of innovative algorithms for
pose-based gait recognition. FastPoseGait is available at
https://github.com/BNU-IVC/FastPoseGait and is actively maintained. We will
continue updating this report as we add new features.
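To make the pipeline concrete, the sketch below shows the core idea behind pose-based gait recognition: a walking sequence is represented as a series of 2D skeleton keypoints, reduced to a fixed-length embedding, and matched against a gallery by cosine similarity. This is an illustrative toy, not FastPoseGait's actual API; the joint-0-as-hip convention, the mean/std temporal pooling, and all function names are assumptions made for the example.

```python
import numpy as np

def normalize_pose(seq):
    """Center each frame on a reference joint and scale by pose spread.

    `seq` is a (T, J, 2) array of 2D keypoints over T frames.
    Joint 0 is treated as the hip here -- an assumption for this
    sketch, not a convention taken from FastPoseGait.
    """
    centered = seq - seq[:, :1, :]  # hip-centered coordinates per frame
    # Per-frame scale: the largest joint distance from the hip
    scale = np.linalg.norm(centered, axis=-1).max(axis=-1, keepdims=True)
    return centered / (scale[..., None] + 1e-8)

def gait_embedding(seq):
    """Toy embedding: mean and std of normalized joints over time."""
    norm = normalize_pose(seq)
    feat = np.concatenate([norm.mean(axis=0).ravel(),
                           norm.std(axis=0).ravel()])
    return feat / (np.linalg.norm(feat) + 1e-8)  # unit-normalize

def rank1_match(probe, gallery):
    """Return the index of the most cosine-similar gallery embedding."""
    sims = [float(probe @ g) for g in gallery]
    return int(np.argmax(sims))
```

In a real system the hand-crafted embedding would be replaced by a learned graph-convolutional or transformer model over the skeleton sequence, which is what the algorithms integrated in FastPoseGait provide; the gallery/probe matching step, however, is structured the same way.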
Related papers
- OpenGait: A Comprehensive Benchmark Study for Gait Recognition towards Better Practicality [11.64292241875791]
We first develop OpenGait, a flexible and efficient gait recognition platform.
Using OpenGait as a foundation, we conduct in-depth ablation experiments to revisit recent developments in gait recognition.
Inspired by these findings, we develop three structurally simple yet empirically powerful and practically robust baseline models.
arXiv Detail & Related papers (2024-05-15T07:11:12Z)
- PYSKL: Towards Good Practices for Skeleton Action Recognition [77.87404524458809]
PYSKL is an open-source toolbox for skeleton-based action recognition based on PyTorch.
It implements six different algorithms under a unified framework to ease the comparison of efficacy and efficiency.
PYSKL supports the training and testing of nine skeleton-based action recognition benchmarks and achieves state-of-the-art recognition performance on eight of them.
arXiv Detail & Related papers (2022-05-19T09:58:32Z)
- Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline [95.88825497452716]
Gait benchmarks empower the research community to train and evaluate high-performance gait recognition systems.
GREW is the first large-scale dataset for gait recognition in the wild.
SPOSGait is the first NAS-based gait recognition model.
arXiv Detail & Related papers (2022-05-05T14:57:39Z)
- Spatio-temporal Relation Modeling for Few-shot Action Recognition [100.3999454780478]
We propose a few-shot action recognition framework, STRM, which enhances class-specific feature discriminability while simultaneously learning higher-order temporal representations.
Our approach achieves an absolute gain of 3.5% in classification accuracy, as compared to the best existing method in the literature.
arXiv Detail & Related papers (2021-12-09T18:59:14Z)
- PoseDet: Fast Multi-Person Pose Estimation Using Pose Embedding [16.57620683425904]
This paper presents a novel framework PoseDet (Estimating Pose by Detection) to localize and associate body joints simultaneously.
We also propose the keypoint-aware pose embedding to represent an object in terms of the locations of its keypoints.
This simple framework achieves an unprecedented speed and a competitive accuracy on the COCO benchmark compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-07-22T05:54:00Z)
- Guided Interactive Video Object Segmentation Using Reliability-Based Attention Maps [55.94785248905853]
We propose a novel guided interactive segmentation (GIS) algorithm for video objects to improve the segmentation accuracy and reduce the interaction time.
We develop the intersection-aware propagation module to propagate segmentation results to neighboring frames.
Experimental results demonstrate that the proposed algorithm provides more accurate segmentation results at a faster speed than conventional algorithms.
arXiv Detail & Related papers (2021-04-21T07:08:57Z)
- Centralized Information Interaction for Salient Object Detection [68.8587064889475]
The U-shape structure has shown its advantage in salient object detection for efficiently combining multi-scale features.
This paper shows that by centralizing these connections, we can achieve cross-scale information interaction among them.
Our approach can cooperate with various existing U-shape-based salient object detection methods by substituting the connections between the bottom-up and top-down pathways.
arXiv Detail & Related papers (2020-12-21T12:42:06Z)
- Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.