Gait Recognition Using 3-D Human Body Shape Inference
- URL: http://arxiv.org/abs/2212.09042v1
- Date: Sun, 18 Dec 2022 09:27:00 GMT
- Title: Gait Recognition Using 3-D Human Body Shape Inference
- Authors: Haidong Zhu, Zhaoheng Zheng, Ram Nevatia
- Abstract summary: We present the use of 3-D body shapes distilled from limited images.
We provide a method for learning 3-D body inference from silhouettes by transferring knowledge from a 3-D shape prior learned from RGB photos.
- Score: 22.385670309906352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gait recognition, which identifies individuals based on their walking
patterns, is an important biometric technique since it can be observed from a
distance and does not require the subject's cooperation. Recognizing a person's
gait is difficult because of the appearance variants in human silhouette
sequences produced by varying viewing angles, carrying objects, and clothing.
Recent research has produced a number of ways for coping with these variants.
In this paper, we present the use of 3-D body shapes distilled from limited
images, which are, in principle, invariant to these variants.
Inference of 3-D shape is a difficult task, especially when only silhouettes
are provided in a dataset. We provide a method for learning 3-D body inference
from silhouettes by transferring knowledge from a 3-D shape prior learned from
RGB photos. We apply our method to multiple existing state-of-the-art gait baselines
and obtain consistent improvements for gait identification on two public
datasets, CASIA-B and OUMVLP, on several variants and settings, including a new
setting of novel views not seen during training.
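The approach described above can be read as a teacher-student distillation: an RGB-trained body-shape model (the 3-D shape prior) supervises a silhouette-only encoder, whose predicted shape code then augments an existing gait baseline. The following is a minimal sketch only; the module names, the 10-dimensional SMPL-style shape code, and the PyTorch framing are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SilhouetteShapeEncoder(nn.Module):
    """Predicts a compact 3-D body-shape code (e.g. SMPL-style betas) from a silhouette."""
    def __init__(self, shape_dim: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(128, shape_dim)

    def forward(self, silhouette: torch.Tensor) -> torch.Tensor:
        # silhouette: (batch, 1, H, W) binary mask
        return self.head(self.backbone(silhouette))

def distillation_loss(student: SilhouetteShapeEncoder,
                      rgb_shape_teacher: nn.Module,
                      silhouettes: torch.Tensor,
                      rgb_frames: torch.Tensor) -> torch.Tensor:
    """Knowledge-transfer step: the silhouette student mimics the shape code produced
    by a teacher trained on RGB photos (assumed to output the same shape_dim vector)."""
    with torch.no_grad():
        target = rgb_shape_teacher(rgb_frames)   # 3-D shape prior from RGB
    pred = student(silhouettes)                  # inference from silhouettes only
    return nn.functional.mse_loss(pred, target)
```

At recognition time only silhouettes would be required; the predicted shape code could be concatenated with the sequence embedding of an existing gait baseline before its recognition head, which is one plausible reading of how the inferred 3-D shape contributes the reported consistent improvements.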
Related papers
- Exploring Shape Embedding for Cloth-Changing Person Re-Identification via 2D-3D Correspondences [9.487097819140653]
We propose a new shape embedding paradigm for cloth-changing ReID.
The shape embedding paradigm based on 2D-3D correspondences remarkably enhances the model's global understanding of human body shape.
To promote the study of ReID under clothing change, we construct 3D Dense Persons (DP3D), which is the first large-scale cloth-changing ReID dataset.
arXiv Detail & Related papers (2023-10-27T19:26:30Z)
- Learning Clothing and Pose Invariant 3D Shape Representation for Long-Term Person Re-Identification [16.797826602710035]
We aim to extend LT-ReID beyond pedestrian recognition to include a wider range of real-world human activities.
This setting poses additional challenges due to the geometric misalignment and appearance ambiguity caused by the diversity of human pose and clothing.
We propose a new approach, 3DInvarReID, for disentangling identity from non-identity components.
arXiv Detail & Related papers (2023-08-21T11:51:46Z)
- SAOR: Single-View Articulated Object Reconstruction [17.2716639564414]
We introduce SAOR, a novel approach for estimating the 3D shape, texture, and viewpoint of an articulated object from a single image captured in the wild.
Unlike prior approaches that rely on pre-defined category-specific 3D templates or tailored 3D skeletons, SAOR learns to articulate shapes from single-view image collections with a skeleton-free part-based model without requiring any 3D object shape priors.
arXiv Detail & Related papers (2023-03-23T17:59:35Z)
- Towards Hard-pose Virtual Try-on via 3D-aware Global Correspondence Learning [70.75369367311897]
3D-aware global correspondences are reliable flows that jointly encode global semantic correlations, local deformations, and geometric priors of 3D human bodies.
An adversarial generator takes the garment warped by the 3D-aware flow and the image of the target person as inputs to synthesize the photo-realistic try-on result (a toy sketch follows this entry).
arXiv Detail & Related papers (2022-11-25T12:16:21Z)
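As a rough illustration of the generator interface described in the try-on entry above: it consumes the flow-warped garment and the target-person image and regresses the try-on result. The tiny convolutional stack below is a placeholder, and the 3D-aware flow estimation and adversarial discriminator are omitted; none of this reproduces the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ToyTryOnGenerator(nn.Module):
    """Toy generator: concatenates the warped garment and the target-person image
    along the channel axis and regresses an RGB try-on image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, warped_garment: torch.Tensor, person: torch.Tensor) -> torch.Tensor:
        # both inputs: (batch, 3, H, W); output in [-1, 1]
        return self.net(torch.cat([warped_garment, person], dim=1))
```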
- Gait Recognition in the Wild with Dense 3D Representations and A Benchmark [86.68648536257588]
Existing studies for gait recognition are dominated by 2D representations like the silhouette or skeleton of the human body in constrained scenes.
This paper aims to explore dense 3D representations for gait recognition in the wild.
We build the first large-scale 3D representation-based gait recognition dataset, named Gait3D.
arXiv Detail & Related papers (2022-04-06T03:54:06Z)
- Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- 3D Convolution Neural Network based Person Identification using Gait cycles [0.0]
In this work, gait features are used to identify an individual. The steps involve object detection, background subtraction, silhouette extraction, skeletonization, and training a 3D Convolutional Neural Network on these gait features.
The proposed method focuses mainly on the lower body, extracting features such as the angle between knee and thigh, hip angle, angle of contact, and several others (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-06-06T14:27:06Z)
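The pipeline summarized in the gait-cycle entry above (background subtraction, silhouette extraction, then a 3-D CNN over a stacked gait cycle) might look roughly like the following sketch. The MOG2 background subtractor, the layer sizes, and the omission of skeletonization and the hand-crafted joint-angle features are simplifying assumptions, not the authors' implementation.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn

def extract_silhouettes(frames):
    """Rough foreground silhouettes via background subtraction (one common choice)."""
    subtractor = cv2.createBackgroundSubtractorMOG2()
    return [(subtractor.apply(f) > 127).astype(np.float32) for f in frames]

class GaitCycle3DCNN(nn.Module):
    """3-D CNN that classifies a subject from a stacked gait cycle of silhouettes."""
    def __init__(self, num_subjects: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_subjects)

    def forward(self, cycle: torch.Tensor) -> torch.Tensor:
        # cycle: (batch, 1, frames, height, width) stack of binary silhouettes
        return self.classifier(self.features(cycle))
```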
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)
- Chained Representation Cycling: Learning to Estimate 3D Human Pose and Shape by Cycling Between Representations [73.11883464562895]
We propose a new architecture that facilitates unsupervised, or lightly supervised, learning.
We demonstrate the method by learning 3D human pose and shape from unpaired and unannotated images.
While we present results for modeling humans, our formulation is general and can be applied to other vision problems.
arXiv Detail & Related papers (2020-01-06T14:54:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.