GaitFormer: Learning Gait Representations with Noisy Multi-Task Learning
- URL: http://arxiv.org/abs/2310.19418v1
- Date: Mon, 30 Oct 2023 10:28:44 GMT
- Title: GaitFormer: Learning Gait Representations with Noisy Multi-Task Learning
- Authors: Adrian Cosma, Emilian Radoi
- Abstract summary: We propose DenseGait, the largest dataset for pretraining gait analysis systems containing 217K anonymized tracklets.
We also propose GaitFormer, a transformer-based model that achieves 92.5% accuracy on CASIA-B and 85.33% on FVG.
- Score: 4.831663144935878
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gait analysis is proven to be a reliable way to perform person identification
without relying on subject cooperation. Walking is a biometric that does not
significantly change in short periods of time and can be regarded as unique to
each person. So far, the study of gait analysis has focused mostly on
identification and demographics estimation, without considering many of the
pedestrian attributes that appearance-based methods rely on. In this work,
alongside gait-based person identification, we explore pedestrian attribute
identification solely from movement patterns. We propose DenseGait, the largest
dataset for pretraining gait analysis systems containing 217K anonymized
tracklets, annotated automatically with 42 appearance attributes. DenseGait is
constructed by automatically processing video streams and offers the full array
of gait covariates present in the real world. We make the dataset available to
the research community. Additionally, we propose GaitFormer, a
transformer-based model that after pretraining in a multi-task fashion on
DenseGait, achieves 92.5% accuracy on CASIA-B and 85.33% on FVG, without
utilizing any manually annotated data. This corresponds to a +14.2% and +9.67%
accuracy increase compared to similar methods. Moreover, GaitFormer is able to
accurately identify gender information and a multitude of appearance attributes
utilizing only movement patterns. The code to reproduce the experiments is made
publicly available.
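The abstract describes GaitFormer as a transformer encoder over skeleton (pose) sequences, pretrained in a multi-task fashion on DenseGait's automatically produced labels: pseudo-identities from tracklets plus 42 appearance attributes. The snippet below is a minimal PyTorch sketch of such a two-head, noisy multi-task setup; the per-frame pose embedding, layer sizes, mean temporal pooling, loss weighting, and the classifier-style identification head are illustrative assumptions, not the authors' released implementation.

```python
# Minimal multi-task sketch in the spirit of GaitFormer (not the authors' code).
# Assumptions: 2D skeletons with 18 joints per frame, tracklets of T frames,
# pseudo-identity labels from tracking and 42 soft attribute targets, as in DenseGait.
import torch
import torch.nn as nn

class GaitTransformer(nn.Module):
    def __init__(self, num_joints=18, coords=2, d_model=256, num_ids=1000,
                 num_attributes=42, num_layers=6, num_heads=8, max_len=128):
        super().__init__()
        self.embed = nn.Linear(num_joints * coords, d_model)   # per-frame pose -> token
        self.pos = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, num_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.id_head = nn.Linear(d_model, num_ids)              # identification task
        self.attr_head = nn.Linear(d_model, num_attributes)     # appearance attributes

    def forward(self, poses):                  # poses: (B, T, num_joints * coords)
        x = self.embed(poses) + self.pos[:, :poses.size(1)]
        x = self.encoder(x)
        g = x.mean(dim=1)                      # temporal pooling -> gait embedding
        return g, self.id_head(g), self.attr_head(g)

# One noisy multi-task training step (loss weighting is illustrative).
model = GaitTransformer()
id_loss, attr_loss = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

poses = torch.randn(8, 64, 36)                 # batch of 8 tracklets, 64 frames each
pseudo_ids = torch.randint(0, 1000, (8,))      # automatic (noisy) identity labels
attrs = torch.rand(8, 42)                      # automatic (noisy) attribute targets
_, id_logits, attr_logits = model(poses)
loss = id_loss(id_logits, pseudo_ids) + 0.5 * attr_loss(attr_logits, attrs)
opt.zero_grad(); loss.backward(); opt.step()
```

In the paper's setting the identification branch could just as well use a metric objective over tracklets; the point of the sketch is only that a single gait embedding feeds both the recognition head and the attribute head during pretraining.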
Related papers
- Self-Training with Pseudo-Label Scorer for Aspect Sentiment Quad Prediction [54.23208041792073]
Aspect Sentiment Quad Prediction (ASQP) aims to predict all quads (aspect term, aspect category, opinion term, sentiment polarity) for a given review.
A key challenge in the ASQP task is the scarcity of labeled data, which limits the performance of existing methods.
We propose a self-training framework with a pseudo-label scorer, wherein a scorer assesses the match between reviews and their pseudo-labels.
arXiv Detail & Related papers (2024-06-26T05:30:21Z)
- Learning to Simplify Spatial-Temporal Graphs in Gait Analysis [4.831663144935878]
This paper proposes a novel method to simplify the spatial-temporal graph representation for gait-based gender estimation.
Our approach employs two models, an upstream and a downstream model, that can adjust the adjacency matrix for each walking instance.
We demonstrate the effectiveness of our approach on the CASIA-B dataset for gait-based gender estimation.
arXiv Detail & Related papers (2023-10-05T09:03:51Z)
- Distillation-guided Representation Learning for Unconstrained Gait Recognition [50.0533243584942]
We propose a framework, termed GAit DEtection and Recognition (GADER), for human authentication in challenging outdoor scenarios.
GADER builds discriminative features through a novel gait recognition method, where only frames containing gait information are used.
We evaluate our method against multiple state-of-the-art (SoTA) gait baselines and demonstrate consistent improvements on indoor and outdoor datasets.
arXiv Detail & Related papers (2023-07-27T01:53:57Z)
- HomE: Homography-Equivariant Video Representation Learning [62.89516761473129]
We propose a novel method for representation learning of multi-view videos.
Our method learns an implicit mapping between different views, culminating in a representation space that maintains the homography relationship between neighboring views.
On action classification, our method obtains 96.4% 3-fold accuracy on the UCF101 dataset, better than most state-of-the-art self-supervised learning methods.
arXiv Detail & Related papers (2023-06-02T15:37:43Z)
- Multi-Channel Time-Series Person and Soft-Biometric Identification [65.83256210066787]
This work investigates person and soft-biometrics identification from recordings of humans performing different activities using deep architectures.
We evaluate the method on four datasets of multi-channel time-series human activity recognition (HAR).
Soft-biometric-based attribute representation shows promising results and emphasizes the necessity of larger datasets.
arXiv Detail & Related papers (2023-04-04T07:24:51Z)
- Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark [11.948554539954673]
This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning.
We collect a large-scale unlabelled gait dataset GaitLU-1M consisting of 1.02M walking sequences.
We evaluate the pre-trained model on four widely-used gait benchmarks, CASIA-B, OU-M, GREW and Gait3D with or without transfer learning.
arXiv Detail & Related papers (2022-06-28T12:33:42Z)
- Label a Herd in Minutes: Individual Holstein-Friesian Cattle Identification [12.493458478953515]
We describe a practically evaluated approach for training visual cattle ID systems for a whole farm requiring only ten minutes of labelling effort.
For the task of automatic identification of individual Holstein-Friesians in real-world farm CCTV, we show that self-supervision, metric learning, cluster analysis, and active learning can complement each other.
arXiv Detail & Related papers (2022-04-22T19:41:47Z)
- RealGait: Gait Recognition for Person Re-Identification [79.67088297584762]
We construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge which consists of 1,404 persons walking in an unconstrained manner.
Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and the underlying gait pattern is probably the true reason why video person re-identification works in practice.
arXiv Detail & Related papers (2022-01-13T06:30:56Z)
- WildGait: Learning Gait Representations from Raw Surveillance Streams [1.90365714903665]
Existing methods for gait recognition require cooperative gait scenarios, in which a single person is walking multiple times in a straight line in front of a camera.
We propose a novel weakly supervised learning framework, WildGait, which consists of training a Spatio-Temporal Graph Convolutional Network on a large number of automatically annotated skeleton sequences.
Our results show that, with fine-tuning, we surpass the current state-of-the-art pose-based gait recognition solutions in terms of recognition accuracy.
arXiv Detail & Related papers (2021-05-12T09:11:32Z)
- SelfGait: A Spatiotemporal Representation Learning Method for Self-supervised Gait Recognition [24.156710529672775]
Gait recognition plays a vital role in human identification since gait is a unique biometric feature that can be perceived at a distance.
Existing gait recognition methods can learn gait features from gait sequences in different ways, but their performance suffers from the scarcity of labeled data.
We propose a self-supervised gait recognition method, termed SelfGait, which takes advantage of the massive, diverse, unlabeled gait data as a pre-training process.
arXiv Detail & Related papers (2021-03-27T05:15:39Z)
- Unsupervised Noisy Tracklet Person Re-identification [100.85530419892333]
We present a novel selective tracklet learning (STL) approach that can train discriminative person re-id models from unlabelled tracklet data.
This avoids the tedious and costly process of exhaustively labelling person image/tracklet true matching pairs across camera views.
Our method is particularly robust against arbitrarily noisy raw tracklet data and is therefore scalable to learning discriminative models from unconstrained tracking data.
arXiv Detail & Related papers (2021-01-16T07:31:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.