Distillation-guided Representation Learning for Unconstrained Gait Recognition
- URL: http://arxiv.org/abs/2307.14578v2
- Date: Sun, 13 Oct 2024 20:01:34 GMT
- Title: Distillation-guided Representation Learning for Unconstrained Gait Recognition
- Authors: Yuxiang Guo, Siyuan Huang, Ram Prabhakar, Chun Pong Lau, Rama Chellappa, Cheng Peng
- Abstract summary: We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios.
GADER builds discriminative features through a novel gait recognition method, where only frames containing gait information are used.
We evaluate our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets.
- Score: 50.0533243584942
- License:
- Abstract: Gait recognition holds the promise of robustly identifying subjects based on walking patterns instead of appearance information. While previous approaches have performed well for curated indoor data, they tend to underperform in unconstrained situations, e.g., outdoor or long-distance scenes. We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios. Specifically, GADER leverages a Double Helical Signature to detect segments that contain human movement and builds discriminative features through a novel gait recognition method, where only frames containing gait information are used. To further enhance robustness, GADER encodes viewpoint information in its architecture and distills representation from an auxiliary RGB recognition model, which enables GADER to learn from both silhouette and RGB data at training time. At test time, GADER infers from the silhouette modality alone. We evaluate our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets, including a significant 25.2% improvement on unconstrained, remote gait data.
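Below is a minimal PyTorch sketch of the training idea described in the abstract: a silhouette "student" encoder is optimized with an identity loss plus a feature-distillation loss toward a frozen RGB "teacher", and only the silhouette branch is needed at test time. The encoder architecture, the MSE-on-normalized-features distillation loss, and the weight lambda_kd are illustrative assumptions, not GADER's exact implementation.

```python
# Hedged sketch of distillation-guided training for silhouette-based gait
# recognition. Architectures, loss form, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SilhouetteEncoder(nn.Module):
    """Student: maps a silhouette sequence (B, T, 1, H, W) to one embedding per clip."""
    def __init__(self, dim=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):                                        # x: (B, T, 1, H, W)
        b, t = x.shape[:2]
        frame_feat = self.proj(self.backbone(x.flatten(0, 1)))   # (B*T, dim)
        return frame_feat.view(b, t, -1).mean(dim=1)             # temporal pooling -> (B, dim)

def training_step(student, classifier, sil_seq, rgb_teacher_feat, labels, lambda_kd=1.0):
    """One optimization step: identity loss + distillation toward the RGB teacher."""
    z_sil = student(sil_seq)                                     # silhouette embedding
    id_loss = F.cross_entropy(classifier(z_sil), labels)
    # Feature-level distillation: pull the silhouette embedding toward the
    # (detached) embedding of a frozen auxiliary RGB recognition model.
    kd_loss = F.mse_loss(F.normalize(z_sil, dim=1),
                         F.normalize(rgb_teacher_feat.detach(), dim=1))
    return id_loss + lambda_kd * kd_loss

if __name__ == "__main__":
    num_ids, dim = 10, 256
    student = SilhouetteEncoder(dim)
    classifier = nn.Linear(dim, num_ids)
    sil = torch.rand(4, 16, 1, 64, 44)          # 4 clips of 16 silhouette frames
    rgb_feat = torch.randn(4, dim)              # features from a frozen RGB model (assumed given)
    labels = torch.randint(0, num_ids, (4,))
    loss = training_step(student, classifier, sil, rgb_feat, labels)
    loss.backward()                             # at test time, only student(sil) is used
    print(float(loss))
```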
Related papers
- Cluster-level pseudo-labelling for source-free cross-domain facial
expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z) - Multi-Modal Human Authentication Using Silhouettes, Gait and RGB [59.46083527510924]
Whole-body-based human authentication is a promising approach for remote biometrics scenarios.
We propose Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to achieve more robust performances for indoor and outdoor whole-body based recognition.
Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis.
arXiv Detail & Related papers (2022-10-08T15:17:32Z) - Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based
Baseline [95.88825497452716]
Gait benchmarks empower the research community to train and evaluate high-performance gait recognition systems.
GREW is the first large-scale dataset for gait recognition in the wild.
SPOSGait is the first NAS-based gait recognition model.
arXiv Detail & Related papers (2022-05-05T14:57:39Z) - Towards a Deeper Understanding of Skeleton-based Gait Recognition [4.812321790984493]
In recent years, most gait recognition methods have extracted gait features from the person's silhouette, which makes them sensitive to appearance changes such as clothing and carried objects.
Model-based methods do not suffer from these problems and are able to represent the temporal motion of body joints.
In this work, we propose an approach based on Graph Convolutional Networks (GCNs) that combines higher-order inputs, and residual networks.
arXiv Detail & Related papers (2022-04-16T18:23:37Z) - RealGait: Gait Recognition for Person Re-Identification [79.67088297584762]
We construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge, which consists of 1,404 persons walking in an unconstrained manner.
Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and that the underlying gait pattern is probably the true reason why video person re-identification works in practice.
arXiv Detail & Related papers (2022-01-13T06:30:56Z) - Model-based gait recognition using graph network on very large
population database [3.8707695363745223]
In this paper, to cope with the growing number of subjects and the variation in views, local features are built and a siamese network is proposed.
Experiments on the very large population dataset OUMVLP-Pose and the popular CASIA-B dataset show that our method achieves state-of-the-art (SOTA) performance in model-based gait recognition.
arXiv Detail & Related papers (2021-12-20T02:28:02Z) - SelfGait: A Spatiotemporal Representation Learning Method for
Self-supervised Gait Recognition [24.156710529672775]
Gait recognition plays a vital role in human identification since gait is a unique biometric feature that can be perceived at a distance.
Existing gait recognition methods can learn gait features from gait sequences in different ways, but the performance of gait recognition suffers from insufficient labeled data.
We propose a self-supervised gait recognition method, termed SelfGait, which takes advantage of the massive, diverse, unlabeled gait data as a pre-training process.
arXiv Detail & Related papers (2021-03-27T05:15:39Z) - TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain
Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z)