Motion Gait: Gait Recognition via Motion Excitation
- URL: http://arxiv.org/abs/2206.11080v1
- Date: Wed, 22 Jun 2022 13:47:14 GMT
- Title: Motion Gait: Gait Recognition via Motion Excitation
- Authors: Yunpeng Zhang, Zhengyou Wang, Shanna Zhuang, Hui Wang
- Abstract summary: We propose the Motion Excitation Module (MEM) to guide spatio-temporal features to focus on human parts with large dynamic changes.
MEM learns the difference information between frames and intervals, so as to obtain a representation of temporal motion changes.
We present the Fine Feature Extractor (FFE), which independently learns spatio-temporal representations according to different horizontal parts of the human body.
- Score: 5.559482051571756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gait recognition, which can realize long-distance and contactless
identification, is an important biometric technology. Recent gait recognition
methods focus on learning the pattern of human movement or appearance during
walking, and construct the corresponding spatio-temporal representations.
However, different individuals have their own movement patterns, and simple
spatio-temporal features struggle to describe changes in the motion of human
parts, especially when confounding variables such as clothing and carried items
are included, which reduces the distinguishability of the features. In this
paper, we propose the Motion Excitation Module (MEM) to guide spatio-temporal
features to focus on human parts with large dynamic changes. MEM learns the
difference information between frames and between frame intervals, so as to
obtain a representation of temporal motion changes. Notably, MEM adapts to
frame sequences of uncertain length and adds no additional parameters.
Furthermore, we present the Fine Feature Extractor (FFE), which independently
learns the spatio-temporal representations of human body according to different
horizontal parts of individuals. Benefiting from MEM and FFE, our method
innovatively incorporates motion change information, significantly improving
the performance of the model under cross-appearance conditions. On the popular
CASIA-B dataset, our proposed Motion Gait outperforms existing gait
recognition methods.
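The MEM described in the abstract is parameter-free and adapts to sequences of uncertain length. A minimal sketch of such a frame-difference excitation is shown below; the function name `motion_excitation`, the sigmoid gating, and the residual combination are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

def motion_excitation(feats):
    """Parameter-free motion-excitation sketch.

    feats: array of shape (T, C, H, W), a gait feature sequence.
    Returns features of the same shape, re-weighted so that regions
    with large frame-to-frame change are emphasised.
    """
    # Frame-wise differences capture temporal motion; pad with zeros
    # so the output keeps the original sequence length T.
    diffs = np.zeros_like(feats)
    diffs[1:] = feats[1:] - feats[:-1]

    # Channel-pooled magnitude of change -> spatial attention per frame.
    motion = np.abs(diffs).mean(axis=1, keepdims=True)   # (T, 1, H, W)
    attn = 1.0 / (1.0 + np.exp(-motion))                 # sigmoid gate

    # Residual gating: keep the original signal and add the excited
    # part, so static regions are not suppressed to zero.
    return feats + feats * attn
```

Because the sketch uses only differencing, pooling, and a fixed nonlinearity, it introduces no learnable parameters and works for any sequence length T, matching the two MEM properties highlighted in the abstract.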
Related papers
- Priority-Centric Human Motion Generation in Discrete Latent Space [59.401128190423535]
We introduce a Priority-Centric Motion Discrete Diffusion Model (M2DM) for text-to-motion generation.
M2DM incorporates a global self-attention mechanism and a regularization term to counteract code collapse.
We also present a motion discrete diffusion model that employs an innovative noise schedule, determined by the significance of each motion token.
arXiv Detail & Related papers (2023-08-28T10:40:16Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - PACE: Data-Driven Virtual Agent Interaction in Dense and Cluttered
Environments [69.03289331433874]
We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes.
Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment.
We compare our method with prior motion generating techniques and highlight the benefits of our method with a perceptual study and physical plausibility metrics.
arXiv Detail & Related papers (2023-03-24T19:49:08Z) - Mutual Information-Based Temporal Difference Learning for Human Pose
Estimation in Video [16.32910684198013]
We present a novel multi-frame human pose estimation framework, which employs temporal differences across frames to model dynamic contexts.
To be specific, we design multi-stage entangled learning sequences conditioned on multi-stage differences to derive informative motion representation sequences.
This places us at rank No. 1 in the Crowd Pose Estimation in Complex Events Challenge on the HiEve benchmark.
arXiv Detail & Related papers (2023-03-15T09:29:03Z) - Transformer Inertial Poser: Attention-based Real-time Human Motion
Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z) - Spatio-temporal Gait Feature with Adaptive Distance Alignment [90.5842782685509]
We try to increase the difference of gait features of different subjects from two aspects: the optimization of network structure and the refinement of extracted gait features.
Our proposed method consists of Spatio-temporal Feature Extraction (SFE) and Adaptive Distance Alignment (ADA).
ADA uses a large amount of unlabeled gait data from real life as a benchmark to refine the extracted spatio-temporal features so that they have low inter-class similarity and high intra-class similarity.
arXiv Detail & Related papers (2022-03-07T13:34:00Z) - Behavior Recognition Based on the Integration of Multigranular Motion
Features [17.052997301790693]
We propose a novel behavior recognition method based on the integration of multigranular (IMG) motion features.
We evaluate our model on several action recognition benchmarks such as HMDB51, Something-Something and UCF101.
arXiv Detail & Related papers (2022-03-07T02:05:26Z) - Efficient Modelling Across Time of Human Actions and Interactions [92.39082696657874]
We argue that current fixed-sized spatio-temporal kernels in 3D convolutional neural networks (CNNs) can be improved to better deal with temporal variations in the input.
We study how we can better distinguish between classes of actions by enhancing their feature differences over different layers of the architecture.
The proposed approaches are evaluated on several benchmark action recognition datasets and show competitive results.
arXiv Detail & Related papers (2021-10-05T15:39:11Z) - TSI: Temporal Saliency Integration for Video Action Recognition [32.18535820790586]
We propose a Temporal Saliency Integration (TSI) block, which mainly contains a Salient Motion Excitation (SME) module and a Cross-scale Temporal Integration (CTI) module.
SME aims to highlight the motion-sensitive area through local-global motion modeling.
CTI is designed to perform multi-scale temporal modeling through a group of separate 1D convolutions.
arXiv Detail & Related papers (2021-06-02T11:43:49Z) - Affective Movement Generation using Laban Effort and Shape and Hidden
Markov Models [6.181642248900806]
This paper presents an approach for automatic affective movement generation that makes use of two movement abstractions: 1) Laban movement analysis (LMA), and 2) hidden Markov modeling.
The LMA provides a systematic tool for an abstract representation of the kinematic and expressive characteristics of movements.
An HMM abstraction of the identified movements is obtained and used with the desired motion path to generate a novel movement that conveys the target emotion.
The efficacy of the proposed approach in generating movements with recognizable target emotions is assessed using a validated automatic recognition model and a user study.
arXiv Detail & Related papers (2020-06-10T21:24:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.