GaitMorph: Transforming Gait by Optimally Transporting Discrete Codes
- URL: http://arxiv.org/abs/2307.14713v1
- Date: Thu, 27 Jul 2023 09:09:28 GMT
- Title: GaitMorph: Transforming Gait by Optimally Transporting Discrete Codes
- Authors: Adrian Cosma, Emilian Radoi
- Abstract summary: We propose GaitMorph, a novel method to modify the walking variation for an input gait sequence.
Our method entails the training of a high-compression model for gait skeleton sequences that leverages unlabelled data.
We propose a method based on optimal transport theory to learn latent transport maps on the discrete codebook that morph gait sequences between variations.
- Score: 6.85316573653194
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gait, the manner of walking, has been proven to be a reliable biometric with
uses in surveillance, marketing and security. A promising new direction for the
field is training gait recognition systems without explicit human annotations,
through self-supervised learning approaches. Such methods are heavily reliant
on strong augmentations for the same walking sequence to induce more data
variability and to simulate additional walking variations. Current data
augmentation schemes are heuristic and cannot provide the necessary data
variation as they are only able to provide simple temporal and spatial
distortions. In this work, we propose GaitMorph, a novel method to modify the
walking variation for an input gait sequence. Our method entails the training
of a high-compression model for gait skeleton sequences that leverages
unlabelled data to construct a discrete and interpretable latent space, which
preserves identity-related features. Furthermore, we propose a method based on
optimal transport theory to learn latent transport maps on the discrete
codebook that morph gait sequences between variations. We perform extensive
experiments and show that our method is suitable to synthesize additional views
for an input sequence.
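No code accompanies this listing; as a rough illustration only, the sketch below shows one way the transport step described in the abstract could look: entropy-regularized optimal transport (Sinkhorn) between codebook-usage histograms of two walking variations, giving a hard code-to-code map that is applied to a quantized input sequence. It assumes a pre-trained VQ-style encoder/decoder (not shown), and every function and variable name is hypothetical rather than the authors' API.
```python
import numpy as np

def sinkhorn(a, b, cost, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport plan between histograms a and b."""
    cost = cost / (cost.max() + 1e-12)              # normalize cost for numerical stability
    K = np.exp(-cost / reg)                         # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):                         # Sinkhorn scaling iterations
        v = b / (K.T @ u + 1e-12)
        u = a / (K @ v + 1e-12)
    return u[:, None] * K * v[None, :]              # plan: rows sum to ~a, columns to ~b

def code_histogram(code_sequences, n_codes):
    """Empirical codebook-usage distribution over a set of index sequences."""
    counts = np.bincount(np.concatenate(code_sequences), minlength=n_codes).astype(float)
    return counts / counts.sum()

def learn_transport_map(codebook, src_seqs, tgt_seqs):
    """Hard code-to-code map: send each source code where most of its mass goes."""
    n_codes = codebook.shape[0]
    a, b = code_histogram(src_seqs, n_codes), code_histogram(tgt_seqs, n_codes)
    cost = ((codebook[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # squared distances
    plan = sinkhorn(a, b, cost)
    return plan.argmax(axis=1)                      # (n_codes,) target code per source code

def morph(code_seq, transport_map):
    """Translate a quantized gait sequence from the source to the target variation."""
    return transport_map[code_seq]

# toy usage: 128-entry codebook of 64-d latents; "normal" vs. "carrying-a-bag" walks
rng = np.random.default_rng(0)
codebook = rng.normal(size=(128, 64))
normal_seqs = [rng.integers(0, 128, size=60) for _ in range(10)]  # quantized source walks
bag_seqs = [rng.integers(0, 128, size=60) for _ in range(10)]     # quantized target walks
code_map = learn_transport_map(codebook, normal_seqs, bag_seqs)
morphed_seq = morph(normal_seqs[0], code_map)       # feed back to the VQ decoder to synthesize
```
The hard argmax map is the simplest reading of "latent transport maps on the discrete codebook"; a soft map that keeps the full transport plan per code is an equally plausible variant.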
Related papers
- Semantically Rich Local Dataset Generation for Explainable AI in Genomics [0.716879432974126]
Black box deep learning models trained on genomic sequences excel at predicting the outcomes of different gene regulatory mechanisms.
We propose using Genetic Programming to generate datasets by evolving perturbations in sequences that contribute to their semantic diversity.
arXiv Detail & Related papers (2024-07-03T10:31:30Z)
- Segue: Side-information Guided Generative Unlearnable Examples for Facial Privacy Protection in Real World [64.4289385463226]
We propose Segue: Side-information guided generative unlearnable examples.
To improve transferability, we introduce side information such as true labels and pseudo labels.
It can resist JPEG compression, adversarial training, and some standard data augmentations.
arXiv Detail & Related papers (2023-10-24T06:22:37Z)
- End-to-End Training of a Neural HMM with Label and Transition Probabilities [36.32865468394113]
We investigate a novel modeling approach for end-to-end neural network training using hidden Markov models (HMMs).
In our approach there are explicit, learnable probabilities for transitions between segments as opposed to a blank label that implicitly encodes duration statistics.
We find that while the transition model training does not improve recognition performance, it has a positive impact on the alignment quality.
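As a toy illustration only (not the paper's model; all names below are made up), explicit learnable stay/advance transition probabilities can be combined with neural emission scores in a small left-to-right forward recursion:
```python
import torch
import torch.nn as nn

class ToySegmentHMM(nn.Module):
    """Toy left-to-right HMM: neural emission scores plus explicit, learnable
    stay/advance transition probabilities per label (no blank label)."""
    def __init__(self, feat_dim, n_labels):
        super().__init__()
        self.emit = nn.Linear(feat_dim, n_labels)              # frame-wise label scores
        self.adv_logit = nn.Parameter(torch.zeros(n_labels))   # logit of P(advance | label)

    def sequence_log_prob(self, frames, labels):
        """frames: (T, feat_dim); labels: (S,) monotone target label sequence."""
        IMPOSSIBLE = torch.full((1,), -1e4)  # effectively log(0); avoids -inf NaNs in backward
        emit_logp = self.emit(frames).log_softmax(-1)[:, labels]   # (T, S)
        p_adv = torch.sigmoid(self.adv_logit)[labels]               # (S,)
        log_adv, log_stay = torch.log(p_adv), torch.log1p(-p_adv)
        # forward recursion over monotone alignments of frames to labels
        alpha = torch.cat([emit_logp[0, :1], IMPOSSIBLE.expand(labels.numel() - 1)])
        for t in range(1, frames.shape[0]):
            stay = alpha + log_stay
            advance = torch.cat([IMPOSSIBLE, alpha[:-1] + log_adv[:-1]])
            alpha = torch.logaddexp(stay, advance) + emit_logp[t]
        return alpha[-1]    # log P(frames, alignment ends on the last label)

# toy usage: 20 frames of 16-d features aligned to a 3-label segment sequence
model = ToySegmentHMM(feat_dim=16, n_labels=5)
loss = -model.sequence_log_prob(torch.randn(20, 16), torch.tensor([2, 0, 4]))
loss.backward()
```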
arXiv Detail & Related papers (2023-10-04T10:56:00Z)
- ProFormer: Learning Data-efficient Representations of Body Movement with Prototype-based Feature Augmentation and Visual Transformers [31.908276711898548]
Methods for data-efficient recognition from body poses increasingly leverage skeleton sequences structured as image-like arrays.
We look at this paradigm from the perspective of transformer networks, for the first time exploring visual transformers as data-efficient encoders of skeleton movement.
In our pipeline, body pose sequences cast as image-like representations are converted into patch embeddings and then passed to a visual transformer backbone optimized with deep metric learning.
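A rough, hypothetical sketch of that pipeline with generic PyTorch modules (dimensions, names, and the pooling head are assumptions; positional encodings and the metric-learning loss are omitted for brevity):
```python
import torch
import torch.nn as nn

class SkeletonPatchTransformer(nn.Module):
    """Treat a pose sequence (T frames x J joints x 3 coords) as an image-like
    array, cut it into temporal patches, embed each patch, and encode the patch
    tokens with a standard transformer. All hyperparameters are illustrative."""
    def __init__(self, n_joints=17, patch_frames=4, dim=128, depth=4, n_heads=4, emb_dim=128):
        super().__init__()
        self.patch_frames = patch_frames
        self.patch_embed = nn.Linear(patch_frames * n_joints * 3, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, emb_dim)        # embedding for deep metric learning

    def forward(self, poses):
        # poses: (B, T, J, 3), with T divisible by patch_frames
        B, T, J, C = poses.shape
        patches = poses.reshape(B, T // self.patch_frames, self.patch_frames * J * C)
        tokens = self.patch_embed(patches)          # (B, n_patches, dim)
        encoded = self.encoder(tokens)              # (B, n_patches, dim)
        return self.head(encoded.mean(dim=1))       # sequence-level embedding

# toy usage: batch of 2 sequences, 32 frames, 17 joints each
embeddings = SkeletonPatchTransformer()(torch.randn(2, 32, 17, 3))
```
A triplet or contrastive loss over these embeddings would play the role of the deep metric learning objective mentioned above.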
arXiv Detail & Related papers (2022-02-23T11:11:54Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Contrastively Disentangled Sequential Variational Autoencoder [20.75922928324671]
We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE).
We use a novel evidence lower bound which maximizes the mutual information between the input and the latent factors, while penalizing the mutual information between the static and dynamic factors.
Our experiments show that C-DSVAE significantly outperforms the previous state-of-the-art methods on multiple metrics.
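Schematically, with assumed notation (z_s and z_d for the static and dynamic factors, lambda_i as weights; a paraphrase of the sentence above, not the paper's exact objective), the augmented lower bound has the form:
```latex
\mathcal{L}
  = \underbrace{\mathbb{E}_{q(z_s, z_d \mid x)}\!\left[\log p(x \mid z_s, z_d)\right]
    - \mathrm{KL}\!\left(q(z_s, z_d \mid x)\,\|\,p(z_s, z_d)\right)}_{\text{standard ELBO}}
  + \lambda_1 I(x; z_s) + \lambda_2 I(x; z_d) - \lambda_3 I(z_s; z_d)
```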
arXiv Detail & Related papers (2021-10-22T23:00:32Z)
- Continuous Transition: Improving Sample Efficiency for Continuous Control Problems via MixUp [119.69304125647785]
This paper introduces a concise yet powerful method to construct Continuous Transition.
Specifically, we propose to synthesize new transitions for training by linearly interpolating the consecutive transitions.
To keep the constructed transitions authentic, we also develop a discriminator to guide the construction process automatically.
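A minimal sketch of that interpolation step (the authenticity discriminator is omitted; the transition format and names are assumptions, not the paper's code):
```python
import numpy as np

def continuous_transition(t0, t1, rng=np.random.default_rng()):
    """Synthesize a new training transition by linearly interpolating two
    consecutive replay-buffer transitions (state, action, reward, next_state)."""
    lam = rng.uniform()  # interpolation coefficient; a Beta draw, as in MixUp, also works
    s0, a0, r0, ns0 = t0
    s1, a1, r1, ns1 = t1
    return (lam * s0 + (1 - lam) * s1,
            lam * a0 + (1 - lam) * a1,
            lam * r0 + (1 - lam) * r1,
            lam * ns0 + (1 - lam) * ns1)

# toy usage: two consecutive transitions of a 3-d state, 1-d action task
t0 = (np.zeros(3), np.array([0.1]), 1.0, np.full(3, 0.1))
t1 = (np.full(3, 0.1), np.array([0.2]), 0.5, np.full(3, 0.2))
new_transition = continuous_transition(t0, t1)
```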
arXiv Detail & Related papers (2020-11-30T01:20:23Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
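A rough sketch of the aggregation idea under stated assumptions (uniform weights on both sides, a hand-rolled Sinkhorn loop, illustrative sizes; not necessarily the paper's implementation):
```python
import torch
import torch.nn as nn

class OTPooling(nn.Module):
    """Aggregate a variable-size set of features into a fixed number of slots by
    transporting the set onto a trainable reference with Sinkhorn iterations."""
    def __init__(self, dim, n_ref=8, eps=0.1, n_iter=30):
        super().__init__()
        self.reference = nn.Parameter(torch.randn(n_ref, dim))
        self.eps, self.n_iter = eps, n_iter

    def forward(self, x):
        # x: (n, dim) set of element embeddings -> (n_ref, dim) pooled representation
        n, m = x.shape[0], self.reference.shape[0]
        cost = torch.cdist(x, self.reference) ** 2
        cost = cost / (cost.max() + 1e-9)                # normalize for numerical stability
        K = torch.exp(-cost / self.eps)                  # Gibbs kernel
        a = torch.full((n,), 1.0 / n, device=x.device)   # uniform weights on the input set
        b = torch.full((m,), 1.0 / m, device=x.device)   # uniform weights on the reference
        u = torch.ones_like(a)
        for _ in range(self.n_iter):                     # differentiable Sinkhorn scaling
            v = b / (K.t() @ u + 1e-9)
            u = a / (K @ v + 1e-9)
        plan = u[:, None] * K * v[None, :]               # (n, m) transport plan
        weights = plan / (plan.sum(dim=0, keepdim=True) + 1e-9)
        return weights.t() @ x                           # each slot pools its assigned elements

# toy usage: a set of 25 ten-dimensional features pooled into 8 fixed slots
pooled = OTPooling(dim=10)(torch.randn(25, 10))
```
Because the Sinkhorn loop is unrolled and differentiable, the reference can be trained end-to-end, which is what allows the fixed-size output despite variable-size inputs.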
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
- Meta Transition Adaptation for Robust Deep Learning with Noisy Labels [61.8970957519509]
This study proposes a new meta-transition-learning strategy for the task.
Specifically, guided by a small set of meta data with clean labels, the noise transition matrix and the classifier parameters can be mutually improved.
Our method extracts the transition matrix more accurately, which naturally yields more robust performance than prior art.
arXiv Detail & Related papers (2020-06-10T07:27:25Z)
- Dynamic Scale Training for Object Detection [111.33112051962514]
We propose a Dynamic Scale Training paradigm (abbreviated as DST) to mitigate the scale variation challenge in object detection.
Experimental results demonstrate the efficacy of our proposed DST towards scale variation handling.
It does not introduce inference overhead and could serve as a free lunch for general detection configurations.
arXiv Detail & Related papers (2020-04-26T16:48:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.