GaitMAST: Motion-Aware Spatio-Temporal Feature Learning Network for
Cross-View Gait Recognition
- URL: http://arxiv.org/abs/2210.11817v1
- Date: Fri, 21 Oct 2022 08:42:00 GMT
- Title: GaitMAST: Motion-Aware Spatio-Temporal Feature Learning Network for
Cross-View Gait Recognition
- Authors: Jingqi Li, Jiaqi Gao, Yuzhen Zhang, Hongming Shan, Junping Zhang
- Abstract summary: We propose GaitMAST, which can unleash the potential of motion-aware features.
GaitMAST preserves the individual's unique walking patterns well.
Our model achieves an average rank-1 accuracy of 98.1%.
- Score: 32.76653659564304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a unique biometric that can be perceived at a distance, gait has broad
applications in person authentication, social security, and related areas. Existing gait
recognition methods focus on extracting either spatial or
spatiotemporal representations. However, they barely consider extracting
diverse motion features, a fundamental characteristic in gaits, from gait
sequences. In this paper, we propose a novel motion-aware spatiotemporal
feature learning network for gait recognition, termed GaitMAST, which can
unleash the potential of motion-aware features. In the shallow layer,
specifically, we propose a dual-path frame-level feature extractor, in which
one path extracts overall spatiotemporal features and the other extracts motion
salient features by focusing on dynamic regions. In the deeper layers, we
design a two-branch clip-level feature extractor, in which one focuses on
fine-grained spatial information and the other on motion detail preservation.
Consequently, our GaitMAST preserves the individual's unique walking patterns
well, further enhancing the robustness of spatiotemporal features. Extensive
experimental results on two commonly-used cross-view gait datasets demonstrate
the superior performance of GaitMAST over existing state-of-the-art methods. On
CASIA-B, our model achieves an average rank-1 accuracy of 94.1%. In particular,
GaitMAST achieves rank-1 accuracies of 96.1% and 88.1% under the bag-carry and
coat wearing conditions, respectively, outperforming the second best by a large
margin and demonstrating its robustness against spatial variations.
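The dual-path frame-level extractor described above can be illustrated with a minimal sketch: one path summarizes overall spatiotemporal content per frame, while the other emphasizes dynamic regions, here approximated by absolute frame differences. This is a hypothetical illustration of the general idea, not the paper's actual network; the function name, the use of per-frame means, and the frame-difference motion cue are all assumptions for clarity.

```python
import numpy as np

def frame_level_features(frames):
    """Hypothetical sketch of a dual-path frame-level extractor.

    Path 1 extracts overall spatiotemporal features (here, simple
    per-frame means of the silhouette). Path 2 extracts motion-salient
    features by focusing on dynamic regions, approximated here by
    absolute differences between consecutive frames.
    """
    frames = np.asarray(frames, dtype=float)   # (T, H, W) silhouette sequence
    spatial = frames.mean(axis=(1, 2))         # (T,) overall path
    diffs = np.abs(np.diff(frames, axis=0))    # (T-1, H, W) dynamic regions
    motion = diffs.mean(axis=(1, 2))           # (T-1,) motion-salient path
    return spatial, motion
```

In the actual GaitMAST architecture these paths would be learned convolutional branches whose outputs feed the deeper two-branch clip-level extractor; the sketch only conveys how a motion path can be made to respond to dynamic regions that a purely spatial path would average away.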