Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark
- URL: http://arxiv.org/abs/2206.13964v2
- Date: Mon, 4 Sep 2023 07:12:45 GMT
- Title: Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark
- Authors: Chao Fan, Saihui Hou, Jilong Wang, Yongzhen Huang, and Shiqi Yu
- Abstract summary: This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning.
We collect a large-scale unlabelled gait dataset GaitLU-1M consisting of 1.02M walking sequences.
We evaluate the pre-trained model on four widely-used gait benchmarks, CASIA-B, OU-MVLP, GREW and Gait3D, with or without transfer learning.
- Score: 11.948554539954673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gait depicts individuals' unique and distinguishing walking patterns and has
become one of the most promising biometric features for human identification.
As a fine-grained recognition task, gait recognition is easily affected by many
factors and usually requires a large amount of fully annotated data, which is
costly and hard to obtain at scale. This paper proposes a large-scale self-supervised
benchmark for gait recognition with contrastive learning, aiming to learn the
general gait representation from massive unlabelled walking videos for
practical applications via offering informative walking priors and diverse
real-world variations. Specifically, we collect a large-scale unlabelled gait
dataset GaitLU-1M consisting of 1.02M walking sequences and propose a
conceptually simple yet empirically powerful baseline model GaitSSB.
Experimentally, we evaluate the pre-trained model on four widely-used gait
benchmarks, CASIA-B, OU-MVLP, GREW and Gait3D with or without transfer
learning. The unsupervised results are comparable to or even better than the
early model-based and GEI-based methods. After transfer learning, our method
outperforms existing methods by a large margin in most cases. Theoretically, we
discuss the critical issues for gait-specific contrastive framework and present
some insights for further study. As far as we know, GaitLU-1M is the first
large-scale unlabelled gait dataset, and GaitSSB is the first method that
achieves remarkable unsupervised results on the aforementioned benchmarks. The
source code of GaitSSB will be integrated into OpenGait, which is available at
https://github.com/ShiqiYu/OpenGait.
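The abstract describes contrastive pre-training on unlabelled walking sequences but gives no implementation details, so the following is only a minimal sketch of that general idea: two augmented views of the same unlabelled silhouette clip are pulled together and different clips are pushed apart with an NT-Xent loss. The encoder, augmentations, and hyper-parameters are illustrative assumptions, not the GaitSSB implementation.

```python
# Hypothetical sketch of contrastive pre-training on unlabelled gait clips.
# Not the GaitSSB implementation; encoder and augmentations are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaitEncoder(nn.Module):
    """Toy encoder: 3D conv over a silhouette clip (B, 1, T, H, W) -> unit embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(32, dim)  # projection head used only for the contrastive loss

    def forward(self, x):
        return F.normalize(self.proj(self.backbone(x)), dim=-1)

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent loss: the two views of the same clip are positives,
    every other clip in the batch acts as a negative."""
    b = z1.size(0)
    z = torch.cat([z1, z2], dim=0)                    # (2B, D), already L2-normalized
    sim = z @ z.t() / tau                             # pairwise cosine similarities
    sim = sim.masked_fill(torch.eye(2 * b, dtype=torch.bool), -1e9)  # drop self-pairs
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

# Usage sketch: in practice the two views would come from silhouette-level
# augmentations (cropping, dilation, frame sampling); noise is a stand-in here.
encoder = GaitEncoder()
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1, momentum=0.9)
clips = torch.rand(8, 1, 30, 64, 44)                  # fake batch of unlabelled silhouette clips
view1 = clips + 0.01 * torch.randn_like(clips)
view2 = clips + 0.01 * torch.randn_like(clips)
loss = nt_xent(encoder(view1), encoder(view2))
optimizer.zero_grad(); loss.backward(); optimizer.step()
```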
Related papers
- OpenGait: A Comprehensive Benchmark Study for Gait Recognition towards Better Practicality [11.64292241875791]
We first develop OpenGait, a flexible and efficient gait recognition platform.
Using OpenGait as a foundation, we conduct in-depth ablation experiments to revisit recent developments in gait recognition.
Inspired by these findings, we develop three structurally simple yet empirically powerful and practically robust baseline models.
arXiv Detail & Related papers (2024-05-15T07:11:12Z) - Memory Consistency Guided Divide-and-Conquer Learning for Generalized
Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims at addressing a more realistic and challenging setting of semi-supervised learning.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL).
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z) - GaitFormer: Learning Gait Representations with Noisy Multi-Task Learning [4.831663144935878]
We propose DenseGait, the largest dataset for pretraining gait analysis systems, containing 217K anonymized tracklets.
We also propose GaitFormer, a transformer-based model that achieves 92.5% accuracy on CASIA-B and 85.33% on FVG.
arXiv Detail & Related papers (2023-10-30T10:28:44Z) - Distillation-guided Representation Learning for Unconstrained Gait Recognition [50.0533243584942]
We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios.
GADER builds discriminative features through a novel gait recognition method, where only frames containing gait information are used.
We evaluate our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets.
arXiv Detail & Related papers (2023-07-27T01:53:57Z) - OpenGait: Revisiting Gait Recognition Toward Better Practicality [19.998635762435878]
We first develop a flexible and efficient gait recognition platform named OpenGait.
Inspired by these discoveries, we develop a structurally simple, empirically powerful, and practically robust baseline model, GaitBase.
arXiv Detail & Related papers (2022-11-12T07:24:29Z) - Multi-Modal Human Authentication Using Silhouettes, Gait and RGB [59.46083527510924]
Whole-body-based human authentication is a promising approach for remote biometrics scenarios.
We propose Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to achieve more robust performance for indoor and outdoor whole-body based recognition.
Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis.
arXiv Detail & Related papers (2022-10-08T15:17:32Z) - Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based
Baseline [95.88825497452716]
Gait benchmarks empower the research community to train and evaluate high-performance gait recognition systems.
GREW is the first large-scale dataset for gait recognition in the wild.
SPOSGait is the first NAS-based gait recognition model.
arXiv Detail & Related papers (2022-05-05T14:57:39Z) - Towards Good Practices for Efficiently Annotating Large-Scale Image
Classification Datasets [90.61266099147053]
We investigate efficient annotation strategies for collecting multi-class classification labels for a large collection of images.
We propose modifications and best practices aimed at minimizing human labeling effort.
Simulated experiments on a 125k image subset of the ImageNet100 show that it can be annotated to 80% top-1 accuracy with 0.35 annotations per image on average.
arXiv Detail & Related papers (2021-04-26T16:29:32Z) - SelfGait: A Spatiotemporal Representation Learning Method for
Self-supervised Gait Recognition [24.156710529672775]
Gait recognition plays a vital role in human identification since gait is a unique biometric feature that can be perceived at a distance.
Existing gait recognition methods can learn gait features from gait sequences in different ways, but their performance suffers from the limited amount of labeled data.
We propose a self-supervised gait recognition method, termed SelfGait, which leverages massive, diverse, unlabeled gait data for pre-training.
arXiv Detail & Related papers (2021-03-27T05:15:39Z) - TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain
Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space (see the sketch after this list).
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.