OpenGait: A Comprehensive Benchmark Study for Gait Recognition towards Better Practicality
- URL: http://arxiv.org/abs/2405.09138v1
- Date: Wed, 15 May 2024 07:11:12 GMT
- Title: OpenGait: A Comprehensive Benchmark Study for Gait Recognition towards Better Practicality
- Authors: Chao Fan, Saihui Hou, Junhao Liang, Chuanfu Shen, Jingzhe Ma, Dongyang Jin, Yongzhen Huang, Shiqi Yu
- Abstract summary: We first develop OpenGait, a flexible and efficient gait recognition platform.
Using OpenGait as a foundation, we conduct in-depth ablation experiments to revisit recent developments in gait recognition.
Inspired by these findings, we develop three structurally simple yet empirically powerful and practically robust baseline models.
- Score: 11.64292241875791
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gait recognition, a rapidly advancing vision technology for person identification from a distance, has made significant strides in indoor settings. However, evidence suggests that existing methods often yield unsatisfactory results when applied to newly released real-world gait datasets. Furthermore, conclusions drawn from indoor gait datasets may not easily generalize to outdoor ones. Therefore, the primary goal of this work is to present a comprehensive benchmark study aimed at improving practicality rather than solely focusing on enhancing performance. To this end, we first develop OpenGait, a flexible and efficient gait recognition platform. Using OpenGait as a foundation, we conduct in-depth ablation experiments to revisit recent developments in gait recognition. Surprisingly, we uncover imperfections in certain prior methods, which yields several critical yet previously undiscovered insights. Inspired by these findings, we develop three structurally simple yet empirically powerful and practically robust baseline models, i.e., DeepGaitV2, SkeletonGait, and SkeletonGait++, respectively representing the appearance-based, model-based, and multi-modal methodologies for gait pattern description. Beyond achieving SoTA performance, more importantly, our careful exploration sheds new light on the modeling experience of deep gait models, the representational capacity of typical gait modalities, and so on. We hope this work can inspire further research and application of gait recognition towards better practicality. The code is available at https://github.com/ShiqiYu/OpenGait.
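As a rough illustration of the appearance-based pipeline that baselines such as DeepGaitV2 build on (per-frame silhouette encoding, temporal set pooling, part-based embeddings), the following PyTorch sketch may help. The layer sizes, part count, and class name are illustrative assumptions, not the OpenGait implementation.

```python
# Minimal sketch of an appearance-based gait embedding network (PyTorch).
# This is NOT the OpenGait/DeepGaitV2 implementation; layer sizes and the
# number of horizontal parts are illustrative assumptions.
import torch
import torch.nn as nn


class SilhouetteGaitEmbedder(nn.Module):
    def __init__(self, embed_dim: int = 256, num_parts: int = 16):
        super().__init__()
        # Per-frame 2D CNN over binary silhouettes (1 input channel).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.num_parts = num_parts
        # One small head per horizontal body part.
        self.heads = nn.ModuleList(
            [nn.Linear(128, embed_dim) for _ in range(num_parts)]
        )

    def forward(self, sils: torch.Tensor) -> torch.Tensor:
        # sils: (batch, frames, height, width) binary silhouettes.
        b, t, h, w = sils.shape
        feats = self.backbone(sils.reshape(b * t, 1, h, w))   # (b*t, c, h', w')
        c, hp, wp = feats.shape[1:]
        feats = feats.reshape(b, t, c, hp, wp)
        # Set (temporal) pooling: max over frames, order-invariant.
        feats = feats.max(dim=1).values                       # (b, c, h', w')
        # Horizontal pooling: split the feature height into body parts
        # (assumes h' is divisible by num_parts).
        parts = feats.reshape(b, c, self.num_parts, -1, wp)
        parts = parts.mean(dim=(3, 4))                        # (b, c, parts)
        # Separate projection per part, as in part-based gait models.
        embeds = [head(parts[:, :, i]) for i, head in enumerate(self.heads)]
        return torch.stack(embeds, dim=1)                     # (b, parts, embed_dim)


if __name__ == "__main__":
    model = SilhouetteGaitEmbedder()
    clip = torch.rand(2, 30, 64, 44)   # 2 sequences, 30 frames, 64x44 silhouettes
    print(model(clip).shape)           # torch.Size([2, 16, 256])
```

In practice such embeddings are typically trained with triplet and/or cross-entropy losses and compared by Euclidean or cosine distance at retrieval time.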
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of its surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- BigGait: Learning Gait Representation You Want by Large Vision Models [12.620774996969535]
Existing gait recognition methods rely on task-specific upstream driven by supervised learning to provide explicit gait representations.
Escaping from this trend, this work proposes a simple yet efficient gait framework, termed BigGait.
BigGait transforms all-purpose knowledge into implicit gait representations without requiring third-party supervision signals.
arXiv Detail & Related papers (2024-02-29T13:00:22Z)
- Human as Points: Explicit Point-based 3D Human Reconstruction from Single-view RGB Images [78.56114271538061]
We introduce an explicit point-based human reconstruction framework called HaP.
Our approach is featured by fully-explicit point cloud estimation, manipulation, generation, and refinement in the 3D geometric space.
Our results may indicate a paradigm rollback to the fully-explicit and geometry-centric algorithm design.
arXiv Detail & Related papers (2023-11-06T05:52:29Z)
- Distillation-guided Representation Learning for Unconstrained Gait Recognition [50.0533243584942]
We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios.
GADER builds discriminative features through a novel gait recognition method, where only frames containing gait information are used.
We compare our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets.
arXiv Detail & Related papers (2023-07-27T01:53:57Z)
- Universal Domain Adaptation from Foundation Models: A Baseline Study [58.51162198585434]
We make empirical studies of state-of-the-art UniDA methods using foundation models.
We introduce CLIP distillation, a parameter-free method specifically designed to distill target knowledge from CLIP models.
Although simple, our method outperforms previous approaches in most benchmark tasks.
arXiv Detail & Related papers (2023-05-18T16:28:29Z)
- Exploring Deep Models for Practical Gait Recognition [11.185716724976414]
We present a unified perspective to explore how to construct deep models for state-of-the-art outdoor gait recognition.
Specifically, we challenge the stereotype of shallow gait models and demonstrate the superiority of explicit temporal modeling.
The proposed CNN-based DeepGaitV2 series and Transformer-based SwinGait series exhibit significant performance improvements on Gait3D and GREW.
arXiv Detail & Related papers (2023-03-06T17:19:28Z)
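The "Exploring Deep Models for Practical Gait Recognition" entry above credits explicit temporal modeling for much of DeepGaitV2's gain. A toy sketch of the difference between frame-wise and temporal convolution is given below; the kernel sizes are assumptions for illustration, not the paper's configuration.

```python
# Illustrative contrast between frame-wise 2D convolution (no temporal mixing)
# and 3D convolution (explicit temporal modeling); kernel sizes are assumptions,
# not the DeepGaitV2/SwinGait configuration.
import torch
import torch.nn as nn

frames = torch.rand(2, 1, 30, 64, 44)          # (batch, channel, time, height, width)

# Frame-wise conv: kernel of size 1 along time never mixes adjacent frames.
framewise = nn.Conv3d(1, 32, kernel_size=(1, 3, 3), padding=(0, 1, 1))

# Explicit temporal modeling: kernel of size 3 along time mixes neighbouring frames.
temporal = nn.Conv3d(1, 32, kernel_size=(3, 3, 3), padding=(1, 1, 1))

print(framewise(frames).shape)   # torch.Size([2, 32, 30, 64, 44])
print(temporal(frames).shape)    # torch.Size([2, 32, 30, 64, 44])
```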
- OpenGait: Revisiting Gait Recognition Toward Better Practicality [19.998635762435878]
We first develop a flexible and efficient gait recognition platform named OpenGait.
Inspired by these discoveries, we develop a structurally simple, empirically powerful, and practically robust baseline model, GaitBase.
arXiv Detail & Related papers (2022-11-12T07:24:29Z)
- Multi-Modal Human Authentication Using Silhouettes, Gait and RGB [59.46083527510924]
Whole-body-based human authentication is a promising approach for remote biometrics scenarios.
We propose Dual-Modal Ensemble (DME), which combines RGB and silhouette data to achieve more robust performance for indoor and outdoor whole-body-based recognition.
Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis.
arXiv Detail & Related papers (2022-10-08T15:17:32Z)
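For the Dual-Modal Ensemble (DME) entry above, a minimal feature-level fusion of RGB and silhouette embeddings might look like the following sketch; both encoders and the concatenation-plus-linear fusion are placeholders rather than the paper's design.

```python
# Minimal sketch of fusing RGB and silhouette embeddings for whole-body
# recognition; the encoders and the concatenation-based fusion are
# placeholders, not the DME architecture.
import torch
import torch.nn as nn


class DualModalFusion(nn.Module):
    def __init__(self, rgb_encoder: nn.Module, sil_encoder: nn.Module, dim: int = 256):
        super().__init__()
        self.rgb_encoder = rgb_encoder   # maps RGB clips to (batch, dim)
        self.sil_encoder = sil_encoder   # maps silhouette clips to (batch, dim)
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, rgb: torch.Tensor, sil: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([self.rgb_encoder(rgb), self.sil_encoder(sil)], dim=-1)
        return self.fuse(joint)          # fused identity embedding, (batch, dim)
```

At verification time the fused embeddings would typically be compared with cosine similarity.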
- Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark [11.948554539954673]
This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning.
We collect a large-scale unlabelled gait dataset GaitLU-1M consisting of 1.02M walking sequences.
We evaluate the pre-trained model on four widely used gait benchmarks, CASIA-B, OU-MVLP, GREW, and Gait3D, with and without transfer learning.
arXiv Detail & Related papers (2022-06-28T12:33:42Z)
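For the self-supervised benchmark above, a generic contrastive objective over two augmented views of the same walking sequence can be sketched as follows; this NT-Xent-style loss is an illustrative stand-in, not necessarily the exact objective used for GaitLU-1M pre-training.

```python
# Generic sketch of contrastive pre-training on unlabelled walking sequences:
# two augmented views of the same sequence form a positive pair, all other
# sequences in the batch act as negatives. This is an NT-Xent-style objective,
# not necessarily the exact loss used with GaitLU-1M.
import torch
import torch.nn.functional as F


def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07):
    # z1, z2: (batch, dim) embeddings of two views of the same sequences.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature           # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matching views sit on the diagonal; treat it as a classification problem.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2


if __name__ == "__main__":
    view_a, view_b = torch.randn(8, 256), torch.randn(8, 256)
    print(contrastive_loss(view_a, view_b).item())
```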
- Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline [95.88825497452716]
Gait benchmarks empower the research community to train and evaluate high-performance gait recognition systems.
GREW is the first large-scale dataset for gait recognition in the wild.
SPOSGait is the first NAS-based gait recognition model.
arXiv Detail & Related papers (2022-05-05T14:57:39Z)
- Towards a Deeper Understanding of Skeleton-based Gait Recognition [4.812321790984493]
In recent years, most gait recognition methods have used the person's silhouette to extract gait features.
Model-based methods do not suffer from the drawbacks of silhouettes and are able to represent the temporal motion of body joints.
In this work, we propose an approach based on Graph Convolutional Networks (GCNs) that combines higher-order inputs, and residual networks.
arXiv Detail & Related papers (2022-04-16T18:23:37Z)
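For the skeleton-based entry above, the idea of combining higher-order (multi-hop) graph inputs with residual connections can be illustrated by the minimal GCN block below; the adjacency handling and dimensions are assumptions, not the paper's exact architecture.

```python
# Minimal sketch of a residual graph-convolution block over skeleton joints.
# The normalised multi-hop adjacency powers illustrate the idea of
# higher-order inputs; this is not the paper's exact GCN architecture.
import torch
import torch.nn as nn


class ResidualGCNBlock(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, adjacency: torch.Tensor, hops: int = 2):
        super().__init__()
        eye = torch.eye(adjacency.size(0))
        # Stack powers of the adjacency matrix: 1-hop, 2-hop, ... neighbourhoods.
        mats = []
        a = adjacency + eye
        for _ in range(hops):
            mats.append(a / a.sum(dim=-1, keepdim=True))     # row-normalise
            a = a @ (adjacency + eye)
        self.register_buffer("adj", torch.stack(mats))       # (hops, J, J)
        self.proj = nn.Linear(in_dim * hops, out_dim)
        self.skip = nn.Linear(in_dim, out_dim) if in_dim != out_dim else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, joints, in_dim) joint features for one frame.
        hop_feats = [adj @ x for adj in self.adj]             # message passing per hop
        out = self.proj(torch.cat(hop_feats, dim=-1))
        return self.act(out + self.skip(x))                   # residual connection


if __name__ == "__main__":
    joints = 17
    adj = (torch.rand(joints, joints) > 0.7).float()          # toy skeleton graph
    adj = ((adj + adj.t()) > 0).float()                       # make it symmetric
    block = ResidualGCNBlock(3, 64, adj)
    print(block(torch.rand(4, joints, 3)).shape)              # torch.Size([4, 17, 64])
```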