Skeleton-Aware Networks for Deep Motion Retargeting
- URL: http://arxiv.org/abs/2005.05732v1
- Date: Tue, 12 May 2020 12:51:40 GMT
- Title: Skeleton-Aware Networks for Deep Motion Retargeting
- Authors: Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung,
Daniel Cohen-Or, Baoquan Chen
- Abstract summary: We introduce a novel deep learning framework for data-driven motion retargeting between skeletons.
Our approach learns how to retarget without requiring any explicit pairing between the motions in the training set.
- Score: 83.65593033474384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel deep learning framework for data-driven motion
retargeting between skeletons, which may have different structures, yet
correspond to homeomorphic graphs. Importantly, our approach learns how to
retarget without requiring any explicit pairing between the motions in the
training set. We leverage the fact that different homeomorphic skeletons may be
reduced to a common primal skeleton by a sequence of edge merging operations,
which we refer to as skeletal pooling. Thus, our main technical contribution is
the introduction of novel differentiable convolution, pooling, and unpooling
operators. These operators are skeleton-aware, meaning that they explicitly
account for the skeleton's hierarchical structure and joint adjacency, and
together they serve to transform the original motion into a collection of deep
temporal features associated with the joints of the primal skeleton. In other
words, our operators form the building blocks of a new deep motion processing
framework that embeds the motion into a common latent space, shared by a
collection of homeomorphic skeletons. Thus, retargeting can be achieved simply
by encoding to, and decoding from this latent space. Our experiments show the
effectiveness of our framework for motion retargeting, as well as motion
processing in general, compared to existing approaches. Our approach is also
quantitatively evaluated on a synthetic dataset that contains pairs of motions
applied to different skeletons. To the best of our knowledge, our method is the
first to perform retargeting between skeletons with differently sampled
kinematic chains, without any paired examples.
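The core technical idea above, skeleton-aware convolution over edge neighbourhoods combined with skeletal pooling that merges edges down to a common primal skeleton, can be made concrete with a short sketch. The PyTorch code below is a minimal illustration, not the authors' implementation: the five-edge toy chain, the neighbourhood lists, the pooling groups, the mean-pooling rule, and all channel sizes are assumptions chosen only for the example.

```python
# Minimal sketch of skeleton-aware temporal convolution and skeletal pooling.
# NOT the paper's implementation: the 5-edge toy chain, the neighbourhood
# lists, the pooling groups, and mean pooling are illustrative assumptions.
import torch
import torch.nn as nn


class SkeletonConv(nn.Module):
    """Temporal convolution whose support over edges is limited to each
    edge's neighbourhood in the skeleton graph."""

    def __init__(self, channels, neighbours, kernel_size=3):
        super().__init__()
        self.neighbours = neighbours  # neighbours[e] = edges adjacent to e (incl. e)
        self.convs = nn.ModuleList(
            nn.Conv1d(channels * len(nb), channels, kernel_size,
                      padding=kernel_size // 2)
            for nb in neighbours
        )

    def forward(self, x):                         # x: (batch, channels, edges, frames)
        out = []
        for conv, nb in zip(self.convs, self.neighbours):
            feat = x[:, :, nb, :].flatten(1, 2)   # stack the neighbourhood's channels
            out.append(conv(feat))                # convolve over time only
        return torch.stack(out, dim=2)


class SkeletalPool(nn.Module):
    """Merges the features of edges that collapse into one primal edge."""

    def __init__(self, groups):
        super().__init__()
        self.groups = groups  # e.g. [[0, 1], [2], [3, 4]]

    def forward(self, x):                         # x: (batch, channels, edges, frames)
        return torch.stack([x[:, :, g, :].mean(dim=2) for g in self.groups], dim=2)


# Toy kinematic chain with 5 edges, pooled down to a 3-edge "primal" chain.
neighbours = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3, 4], [3, 4]]
encoder = nn.Sequential(
    SkeletonConv(channels=8, neighbours=neighbours),
    SkeletalPool(groups=[[0, 1], [2], [3, 4]]),
)
motion = torch.randn(2, 8, 5, 60)                 # batch, channels per edge, edges, frames
latent = encoder(motion)
print(latent.shape)                               # torch.Size([2, 8, 3, 60])
```

Under this view, retargeting is a change of decoder: a motion is encoded with the source skeleton's stack of convolution and pooling operators into the latent space associated with the primal skeleton, then decoded with the target skeleton's convolution and unpooling operators, with no paired motions needed during training.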
Related papers
- Stitch, Contrast and Segment: Learning a Human Action Segmentation Model Using Trimmed Skeleton Videos [3.069335774032178]
This paper presents a novel framework for skeleton-based action segmentation trained on short trimmed skeleton videos.
It is implemented in three steps: Stitch, Contrast, and Segment.
Experiments involve a trimmed source dataset and an untrimmed target dataset.
arXiv Detail & Related papers (2024-12-19T16:00:10Z)
- Motif Guided Graph Transformer with Combinatorial Skeleton Prototype Learning for Skeleton-Based Person Re-Identification [60.939250172443586]
Person re-identification (re-ID) via 3D skeleton data is a challenging task with significant value in many scenarios.
Existing skeleton-based methods typically assume virtual motion relations between all joints, and adopt average joint or sequence representations for learning.
This paper presents a generic Motif guided graph transformer with Combinatorial skeleton prototype learning (MoCos)
MoCos exploits structure-specific and gait-related body relations as well as features of skeleton graphs to learn effective skeleton representations for person re-ID.
arXiv Detail & Related papers (2024-12-12T08:13:29Z)
- Neuron: Learning Context-Aware Evolving Representations for Zero-Shot Skeleton Action Recognition [64.56321246196859]
We propose a novel dyNamically Evolving dUal skeleton-semantic syneRgistic framework.
We first construct the spatial-temporal evolving micro-prototypes and integrate dynamic context-aware side information.
We introduce the spatial compression and temporal memory mechanisms to guide the growth of spatial-temporal micro-prototypes.
arXiv Detail & Related papers (2024-11-18T05:16:11Z)
- SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition [25.341177384559174]
We propose a novel approach called Skeletal-Temporal Transformer (SkateFormer)
SkateFormer partitions joints and frames based on different types of skeletal-temporal relation.
It can selectively focus on key joints and frames crucial for action recognition in an action-adaptive manner.
arXiv Detail & Related papers (2024-03-14T15:55:53Z)
- SkeleTR: Towards Skeleton-based Action Recognition in the Wild [86.03082891242698]
SkeleTR is a new framework for skeleton-based action recognition.
It first models the intra-person skeleton dynamics for each skeleton sequence with graph convolutions.
It then uses stacked Transformer encoders to capture person interactions that are important for action recognition in general scenarios (a minimal sketch of this two-stage pipeline appears after this list).
arXiv Detail & Related papers (2023-09-20T16:22:33Z)
- LAC: Latent Action Composition for Skeleton-based Action Segmentation [21.797658771678066]
Skeleton-based action segmentation requires recognizing composable actions in untrimmed videos.
Current approaches decouple this problem by first extracting local visual features from skeleton sequences and then processing them by a temporal model to classify frame-wise actions.
We propose Latent Action Composition (LAC), a novel self-supervised framework aiming at learning from synthesized composable motions for skeleton-based action segmentation.
arXiv Detail & Related papers (2023-08-28T11:20:48Z)
- MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency [72.82534577726334]
We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video.
Our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used, motion representation.
arXiv Detail & Related papers (2020-06-22T08:50:09Z)
- Image Co-skeletonization via Co-segmentation [102.59781674888657]
We propose a new joint processing topic: image co-skeletonization.
Object skeletonization in a single natural image is a challenging problem because there is hardly any prior knowledge about the object.
We propose a coupled framework for co-skeletonization and co-segmentation tasks so that they are well informed by each other.
arXiv Detail & Related papers (2020-04-12T09:35:54Z)
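As noted in the SkeleTR entry above, that framework is described as modelling intra-person skeleton dynamics with graph convolutions and then capturing person interactions with stacked Transformer encoders. The sketch below only illustrates that two-stage pipeline; the chain adjacency, the per-person pooling into a single token, and all layer sizes are hypothetical choices, not SkeleTR's actual architecture.

```python
# Minimal sketch of the two-stage pipeline described for SkeleTR above:
# graph convolutions over each person's joints, then a Transformer encoder
# over person tokens. All shapes and the adjacency are illustrative assumptions.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One graph convolution over joints: aggregate neighbours, then project."""

    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.register_buffer("adj", adj / adj.sum(dim=1, keepdim=True))  # row-normalised
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                           # x: (..., joints, in_dim)
        return torch.relu(self.proj(self.adj @ x))


# Toy setting: 2 people, 5 joints in a chain, 16 frames, 3-D joint coordinates.
joints, frames, people = 5, 16, 2
adj = torch.eye(joints)
for i in range(joints - 1):                         # chain adjacency with self-loops
    adj[i, i + 1] = adj[i + 1, i] = 1.0

gcn = GraphConv(3, 32, adj)
enc_layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
person_encoder = nn.TransformerEncoder(enc_layer, num_layers=2)

x = torch.randn(people, frames, joints, 3)          # per-person skeleton sequences
intra = gcn(x)                                      # (people, frames, joints, 32)
tokens = intra.mean(dim=(1, 2))                     # one token per person: (people, 32)
inter = person_encoder(tokens.unsqueeze(0))         # interactions across people
print(inter.shape)                                  # torch.Size([1, 2, 32])
```

Pooling each person's spatio-temporal features into one token before the Transformer keeps the interaction stage independent of sequence length; SkeleTR's own tokenisation may differ.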
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.