JointsGait: A model-based Gait Recognition Method based on Gait Graph
Convolutional Networks and Joints Relationship Pyramid Mapping
- URL: http://arxiv.org/abs/2005.08625v2
- Date: Wed, 9 Dec 2020 09:12:03 GMT
- Title: JointsGait: A model-based Gait Recognition Method based on Gait Graph
Convolutional Networks and Joints Relationship Pyramid Mapping
- Authors: Na Li, Xinbo Zhao, Chong Ma
- Abstract summary: In this paper we study using 2D joints to recognize gait.
JointsGait is put forward to extract gait information from 2D human body joints.
- Score: 6.851535012702575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gait, as a unique biometric feature, has the advantage of being
recognizable from a long distance and can be widely used in public security.
Considering that 3D pose estimation is more challenging than 2D pose estimation
in practice, we study using 2D joints to recognize gait in this paper, and a
new model-based gait recognition method, JointsGait, is put forward to extract
gait information from 2D human body joints. Appearance-based gait recognition
algorithms have been prevalent, but appearance features suffer from external
factors that can cause drastic appearance variations, e.g. clothing. Unlike
previous approaches, JointsGait first extracts spatio-temporal features from
2D joints using gait graph convolutional networks, which are less affected by
external factors. Second, Joints Relationship Pyramid Mapping (JRPM) is
proposed to map spatio-temporal gait features into a discriminative feature
space with biological advantages, according to the relationships of human
joints at various scales when people walk. Finally, we design a fusion loss
strategy to make the joint features insensitive to cross-view variations. Our
method is evaluated on two large datasets, the Kinect Gait Biometry Dataset
and CASIA-B. On the Kinect Gait Biometry Dataset, JointsGait uses only the
corresponding 2D coordinates of joints, yet achieves recognition accuracy
comparable to model-based algorithms that use 3D joints. On the CASIA-B
database, the proposed method greatly outperforms advanced model-based methods
in all walking conditions, and even performs better than state-of-the-art
appearance-based methods when clothing severely affects people's appearance.
The experimental results demonstrate that JointsGait achieves state-of-the-art
performance despite its low-dimensional features (2D body joints) and is less
affected by view and clothing variations.
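The core pipeline of a gait GCN over 2D joints can be illustrated with a small sketch: per-frame spatial graph convolution on a normalized skeleton adjacency, followed by temporal aggregation. The skeleton layout, layer sizes, and random weights below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Sketch of a spatio-temporal graph convolution over 2D joint sequences,
# in the spirit of a gait graph convolutional network. All numbers here
# (joint layout, channel sizes, weights) are hypothetical.

J = 17                      # number of 2D body joints (COCO-like layout)
T = 30                      # frames in one gait sequence
C_IN, C_OUT = 2, 16         # input channels (x, y) -> feature channels

# Hypothetical skeleton edges (pairs of connected joints).
edges = [(0, 1), (0, 2), (1, 3), (2, 4), (5, 6), (5, 7), (7, 9),
         (6, 8), (8, 10), (5, 11), (6, 12), (11, 13), (13, 15),
         (12, 14), (14, 16), (11, 12)]

# Symmetrically normalized adjacency with self-loops:
# A_hat = D^{-1/2} (A + I) D^{-1/2}
A = np.eye(J)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(C_IN, C_OUT))   # spatial conv weights
X = rng.normal(size=(T, J, C_IN))               # 2D joint sequence

# Spatial graph convolution applied per frame: Y[t] = A_hat @ X[t] @ W
Y = np.einsum("ij,tjc,cd->tid", A_hat, X, W)

# Simple temporal aggregation (3-frame moving average per joint/channel)
# standing in for a learned temporal convolution.
kernel = np.ones(3) / 3.0
Z = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, Y)
print(Z.shape)  # (30, 17, 16)
```

In a full model, several such spatial-temporal layers would be stacked and the resulting features pooled into an embedding for recognition.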
Related papers
- HybridGait: A Benchmark for Spatial-Temporal Cloth-Changing Gait
Recognition with Hybrid Explorations [66.5809637340079]
We propose the first in-the-wild benchmark CCGait for cloth-changing gait recognition.
We exploit both temporal dynamics and the projected 2D information of 3D human meshes.
Our contributions are twofold: we provide a challenging benchmark CCGait that captures realistic appearance changes across an expanded time and space.
arXiv Detail & Related papers (2023-12-30T16:12:13Z)
- Spatio-temporal MLP-graph network for 3D human pose estimation [8.267311047244881]
Graph convolutional networks and their variants have shown significant promise in 3D human pose estimation.
We introduce a new weighted Jacobi feature rule obtained through graph filtering with implicit propagation fairing.
We also employ adjacency modulation with the aim of learning meaningful correlations beyond those initially defined between body joints.
arXiv Detail & Related papers (2023-08-29T14:00:55Z)
- Iterative Graph Filtering Network for 3D Human Pose Estimation [5.177947445379688]
Graph convolutional networks (GCNs) have proven to be an effective approach for 3D human pose estimation.
In this paper, we introduce an iterative graph filtering framework for 3D human pose estimation.
Our approach builds upon the idea of iteratively solving graph filtering with Laplacian regularization.
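Graph filtering with Laplacian regularization has a simple closed form that such iterative frameworks build on: smoothing features X over a graph by minimizing ||Y - X||² + λ·tr(YᵀLY), whose solution is Y = (I + λL)⁻¹X. A minimal numpy sketch, with an assumed 5-joint chain graph and λ value:

```python
import numpy as np

# Laplacian-regularized graph filtering in closed form:
#   Y = (I + lam * L)^{-1} X
# The 5-joint chain graph and lam = 0.5 are illustrative assumptions.

J = 5
A = np.zeros((J, J))
for i in range(J - 1):          # chain graph: joint i connected to i+1
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A  # combinatorial graph Laplacian

lam = 0.5
X = np.array([[0.0], [1.0], [0.0], [1.0], [0.0]])  # "noisy" joint feature
Y = np.linalg.solve(np.eye(J) + lam * L, X)        # smoothed feature

# Filtering attenuates high graph frequencies, so the Laplacian
# quadratic form (feature roughness) strictly decreases here.
assert float(Y.T @ L @ Y) < float(X.T @ L @ X)
```

Iterative variants repeat this filtering step with learned weights between iterations rather than applying the closed form once.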
arXiv Detail & Related papers (2023-07-29T20:46:44Z)
- Pose-Oriented Transformer with Uncertainty-Guided Refinement for 2D-to-3D Human Pose Estimation [51.00725889172323]
We propose a Pose-Oriented Transformer (POT) with uncertainty guided refinement for 3D human pose estimation.
We first develop a novel pose-oriented self-attention mechanism and a distance-related position embedding for POT to explicitly exploit the human skeleton topology.
We present an Uncertainty-Guided Refinement Network (UGRN) to refine pose predictions from POT, especially for the difficult joints.
arXiv Detail & Related papers (2023-02-15T00:22:02Z)
- Multi-Modal Human Authentication Using Silhouettes, Gait and RGB [59.46083527510924]
Whole-body-based human authentication is a promising approach for remote biometrics scenarios.
We propose Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to achieve more robust performances for indoor and outdoor whole-body based recognition.
Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis.
arXiv Detail & Related papers (2022-10-08T15:17:32Z)
- Towards a Deeper Understanding of Skeleton-based Gait Recognition [4.812321790984493]
In recent years, most gait recognition methods have used the person's silhouette to extract gait features.
Model-based methods do not suffer from these problems and are able to represent the temporal motion of body joints.
In this work, we propose an approach based on Graph Convolutional Networks (GCNs) that combines higher-order inputs and residual networks.
arXiv Detail & Related papers (2022-04-16T18:23:37Z)
- A Benchmark for Gait Recognition under Occlusion Collected by Multi-Kinect SDAS [6.922350076348358]
We collect a new gait database, the OG RGB+D database, which overcomes the limitations of other gait databases.
Azure Kinect DK can simultaneously collect multimodal data to support different types of gait recognition algorithms.
We propose a gait recognition method, SkeletonGait, based on a human dual-skeleton model.
arXiv Detail & Related papers (2021-07-19T16:01:18Z)
- A hybrid classification-regression approach for 3D hand pose estimation using graph convolutional networks [1.0152838128195467]
We propose a two-stage GCN-based framework that learns per-pose relationship constraints.
The first phase quantizes the 2D/3D space to classify the joints into 2D/3D blocks based on their locality.
The second stage uses a GCN-based module that uses an adaptive nearest neighbor algorithm to determine joint relationships.
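The classification stage described above, quantizing space into blocks and assigning each joint to one, can be sketched in a few lines. The grid size, normalized coordinate range, and example joints below are assumptions for illustration only.

```python
import numpy as np

# Sketch of space quantization for a classification stage: bin normalized
# 2D joint coordinates into a grid of blocks by locality. The 4x4 grid
# and sample coordinates are hypothetical.

def joints_to_blocks(joints_xy, grid=4, lo=0.0, hi=1.0):
    """Map (J, 2) normalized joint coordinates to flat block indices."""
    # Bin each coordinate into `grid` equal cells, clamping to the valid range.
    cells = np.floor((joints_xy - lo) / (hi - lo) * grid).astype(int)
    cells = np.clip(cells, 0, grid - 1)
    return cells[:, 1] * grid + cells[:, 0]   # row-major flat block id

joints = np.array([[0.10, 0.05],   # e.g. a fingertip
                   [0.52, 0.48],   # e.g. the palm center
                   [0.95, 0.99]])  # e.g. the wrist
blocks = joints_to_blocks(joints)  # block ids 0, 6, 15
```

A classifier would then predict these block labels, and the regression stage would refine coordinates within each predicted block.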
arXiv Detail & Related papers (2021-05-23T10:09:10Z)
- Pose And Joint-Aware Action Recognition [87.4780883700755]
We present a new model for joint-based action recognition, which first extracts motion features from each joint separately through a shared motion encoder.
Our joint selector module re-weights the joint information to select the most discriminative joints for the task.
We show large improvements over the current state-of-the-art joint-based approaches on the JHMDB, HMDB, Charades, and AVA action recognition datasets.
arXiv Detail & Related papers (2020-10-16T04:43:34Z)
- DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings up to 3.3 mAP and 1.6 mAP improvements on the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2020-10-02T13:59:05Z)
- DRG: Dual Relation Graph for Human-Object Interaction Detection [65.50707710054141]
We tackle the challenging problem of human-object interaction (HOI) detection.
Existing methods either recognize the interaction of each human-object pair in isolation or perform joint inference based on complex appearance-based features.
In this paper, we leverage an abstract spatial-semantic representation to describe each human-object pair and aggregate the contextual information of the scene via a dual relation graph.
arXiv Detail & Related papers (2020-08-26T17:59:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.