DyGait: Exploiting Dynamic Representations for High-performance Gait
Recognition
- URL: http://arxiv.org/abs/2303.14953v1
- Date: Mon, 27 Mar 2023 07:36:47 GMT
- Title: DyGait: Exploiting Dynamic Representations for High-performance Gait
Recognition
- Authors: Ming Wang, Xianda Guo, Beibei Lin, Tian Yang, Zheng Zhu, Lincheng Li,
Shunli Zhang and Xin Yu
- Abstract summary: Gait recognition is a biometric technology that recognizes the identity of humans through their walking patterns.
We propose a novel and high-performance framework named DyGait to focus on the extraction of dynamic features.
Our network achieves an average Rank-1 accuracy of 71.4% on the GREW dataset, 66.3% on the Gait3D dataset, 98.4% on the CASIA-B dataset and 98.3% on the OU-MVLP dataset.
- Score: 35.642868929840034
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gait recognition is a biometric technology that recognizes the identity of
humans through their walking patterns. Compared with other biometric
technologies, gait recognition is more difficult to disguise and can be applied
at long distances without the cooperation of subjects. Thus, it
has unique potential and wide application for crime prevention and social
security. At present, most gait recognition methods directly extract features
from the video frames to establish representations. However, these
architectures weight all features equally and do not pay enough attention to
dynamic features, i.e., representations of the dynamic parts of silhouettes
over time (e.g., the legs). Since the dynamic parts of the human body are more
informative during walking than static regions (e.g., carried bags), in
this paper, we propose a novel and high-performance framework named DyGait.
This is the first framework on gait recognition that is designed to focus on
the extraction of dynamic features. Specifically, to take full advantage of the
dynamic information, we propose a Dynamic Augmentation Module (DAM), which can
automatically establish spatial-temporal feature representations of the dynamic
parts of the human body. The experimental results show that our DyGait network
outperforms other state-of-the-art gait recognition methods. It achieves an
average Rank-1 accuracy of 71.4% on the GREW dataset, 66.3% on the Gait3D
dataset, 98.4% on the CASIA-B dataset and 98.3% on the OU-MVLP dataset.
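The dynamic-feature idea above can be made concrete with a small sketch. This is not the paper's Dynamic Augmentation Module, only a hedged illustration of the underlying intuition: subtracting the temporal mean of a silhouette sequence cancels static regions (torso, carried bags) and keeps moving regions (legs, arms).

```python
import numpy as np

def dynamic_features(seq: np.ndarray) -> np.ndarray:
    """seq: (T, H, W) binary silhouettes or (T, C, H, W) feature maps.

    Returns per-frame residuals around the temporal mean; static pixels
    map to ~0 while moving pixels keep large magnitudes.
    """
    mean = seq.mean(axis=0, keepdims=True)  # temporal average (static part)
    return seq - mean                       # dynamic residual per frame

# Toy example: a "leg" pixel that toggles vs. a "torso" pixel that is constant.
seq = np.zeros((4, 2, 2))
seq[:, 0, 0] = 1.0    # static pixel, on in every frame
seq[::2, 1, 1] = 1.0  # dynamic pixel, on in alternating frames

dyn = dynamic_features(seq)
print(dyn[:, 0, 0])  # static pixel residual: all zeros
print(dyn[:, 1, 1])  # dynamic pixel residual: alternates between +0.5 and -0.5
```

In this toy case the residual is exactly zero for the constant pixel and alternates in sign for the toggling one, which is the separation between static and dynamic content that the abstract argues matters for gait.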
Related papers
- GaitPT: Skeletons Are All You Need For Gait Recognition [4.089889918897877]
We propose a novel gait recognition architecture called Gait Pyramid Transformer (GaitPT)
GaitPT uses pose estimation skeletons to capture unique walking patterns, without relying on appearance information.
Our results show that GaitPT achieves state-of-the-art performance compared to other skeleton-based gait recognition works.
arXiv Detail & Related papers (2023-08-21T10:47:52Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances [87.20906333918032]
We introduce a new dataset called Robot Control Gestures (RoCoG-v2)
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- Multi-Modal Human Authentication Using Silhouettes, Gait and RGB [59.46083527510924]
Whole-body-based human authentication is a promising approach for remote biometrics scenarios.
We propose Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to achieve more robust performance for indoor and outdoor whole-body-based recognition.
Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis.
arXiv Detail & Related papers (2022-10-08T15:17:32Z)
- Differentiable Frequency-based Disentanglement for Aerial Video Action Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z)
- Snapture -- A Novel Neural Architecture for Combined Static and Dynamic Hand Gesture Recognition [19.320551882950706]
We propose a novel hybrid hand gesture recognition system.
Our architecture enables learning both static and dynamic gestures.
Our work contributes both to gesture recognition research and machine learning applications for non-verbal communication with robots.
arXiv Detail & Related papers (2022-05-28T11:12:38Z)
- Gait Recognition in the Wild: A Large-scale Benchmark and NAS-based Baseline [95.88825497452716]
Gait benchmarks empower the research community to train and evaluate high-performance gait recognition systems.
GREW is the first large-scale dataset for gait recognition in the wild.
SPOSGait is the first NAS-based gait recognition model.
arXiv Detail & Related papers (2022-05-05T14:57:39Z)
- Towards a Deeper Understanding of Skeleton-based Gait Recognition [4.812321790984493]
In recent years, most gait recognition methods have used the person's silhouette to extract gait features.
Model-based methods do not suffer from the problems of silhouette-based representations and are able to represent the temporal motion of body joints.
In this work, we propose an approach based on Graph Convolutional Networks (GCNs) that combines higher-order inputs, and residual networks.
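The GCN-based approach above can be sketched at the level of a single graph-convolution layer. This is a minimal, hedged illustration of a Kipf-style layer with a residual connection, not the paper's architecture; the toy skeleton, identity weights, and normalization choice are all illustrative assumptions.

```python
import numpy as np

def gcn_layer(X: np.ndarray, A: np.ndarray, W: np.ndarray) -> np.ndarray:
    """One graph-convolution layer with a residual connection.

    X: (J, C) joint features, A: (J, J) adjacency with self-loops,
    W: (C, C) weights. Uses the symmetric normalization
    D^{-1/2} A D^{-1/2} common in Kipf-style GCNs.
    """
    D = np.diag(1.0 / np.sqrt(A.sum(axis=1)))  # inverse sqrt degree matrix
    A_hat = D @ A @ D                          # normalized adjacency
    H = np.maximum(A_hat @ X @ W, 0.0)         # propagate over edges + ReLU
    return H + X                               # residual connection

# Toy 3-joint chain (hip - knee - ankle) with self-loops, identity weights.
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
X = np.eye(3)  # one-hot joint features
out = gcn_layer(X, A, np.eye(3))
print(out.shape)  # (3, 3)
```

Each joint's output mixes its own features with those of its skeletal neighbours, which is how such models capture the relative motion of body joints; stacking layers (and adding a temporal dimension) yields the spatio-temporal variants used for gait.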
arXiv Detail & Related papers (2022-04-16T18:23:37Z)
- Facial Emotion Recognition using Deep Residual Networks in Real-World Environments [5.834678345946704]
We propose a facial feature extractor model trained on an in-the-wild and massively collected video dataset.
The dataset consists of a million labelled frames and 2,616 thousand subjects.
As temporal information is important to the emotion recognition domain, we utilise LSTM cells to capture the temporal dynamics in the data.
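The temporal-modelling step above can likewise be sketched. This is a hedged illustration of the standard LSTM gate equations applied to a sequence of per-frame feature vectors, not the paper's actual model; all shapes, names, and parameters here are illustrative.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM step over a single frame's feature vector.

    x: (D,) frame feature, h/c: (H,) hidden/cell state,
    W: (4H, D), U: (4H, H), b: (4H,) stacked gate parameters
    in the order [input, forget, cell, output].
    """
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0 * H:1 * H])  # input gate
    f = sigmoid(z[1 * H:2 * H])  # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell state
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c_new = f * c + i * g        # update cell state
    h_new = o * np.tanh(c_new)   # emit hidden state
    return h_new, c_new

# Run a short sequence of random frame features through the cell.
rng = np.random.default_rng(0)
D, H = 8, 4
W = rng.normal(size=(4 * H, D)) * 0.1
U = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):  # 5 video frames
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)  # (4,)
```

The final hidden state summarizes the whole frame sequence, which is the role the LSTM plays on top of per-frame facial features in such pipelines.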
arXiv Detail & Related papers (2021-11-04T10:08:22Z)
- Skeleton-Based Mutually Assisted Interacted Object Localization and Human Action Recognition [111.87412719773889]
We propose a joint learning framework for "interacted object localization" and "human action recognition" based on skeleton data.
Our method achieves the best or competitive performance with the state-of-the-art methods for human action recognition.
arXiv Detail & Related papers (2021-10-28T10:09:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.