Skeleton-Aware Networks for Deep Motion Retargeting
- URL: http://arxiv.org/abs/2005.05732v1
- Date: Tue, 12 May 2020 12:51:40 GMT
- Title: Skeleton-Aware Networks for Deep Motion Retargeting
- Authors: Kfir Aberman, Peizhuo Li, Dani Lischinski, Olga Sorkine-Hornung,
Daniel Cohen-Or, Baoquan Chen
- Abstract summary: We introduce a novel deep learning framework for data-driven motion retargeting between skeletons.
Our approach learns how to retarget without requiring any explicit pairing between the motions in the training set.
- Score: 83.65593033474384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel deep learning framework for data-driven motion
retargeting between skeletons, which may have different structure, yet
correspond to homeomorphic graphs. Importantly, our approach learns how to
retarget without requiring any explicit pairing between the motions in the
training set. We leverage the fact that different homeomorphic skeletons may be
reduced to a common primal skeleton by a sequence of edge merging operations,
which we refer to as skeletal pooling. Thus, our main technical contribution is
the introduction of novel differentiable convolution, pooling, and unpooling
operators. These operators are skeleton-aware, meaning that they explicitly
account for the skeleton's hierarchical structure and joint adjacency, and
together they serve to transform the original motion into a collection of deep
temporal features associated with the joints of the primal skeleton. In other
words, our operators form the building blocks of a new deep motion processing
framework that embeds the motion into a common latent space, shared by a
collection of homeomorphic skeletons. Thus, retargeting can be achieved simply
by encoding to, and decoding from this latent space. Our experiments show the
effectiveness of our framework for motion retargeting, as well as motion
processing in general, compared to existing approaches. Our approach is also
quantitatively evaluated on a synthetic dataset that contains pairs of motions
applied to different skeletons. To the best of our knowledge, our method is the
first to perform retargeting between skeletons with differently sampled
kinematic chains, without any paired examples.
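To make the skeletal pooling idea concrete, the sketch below shows how per-edge deep temporal features could be average-pooled onto a coarser skeleton by merging adjacent edges. It is a minimal illustration under assumed tensor shapes, not the authors' implementation; the function name `skeletal_pool`, the toy 6-edge chain, and the hand-written merge map are all illustrative.

```python
# Minimal sketch of "skeletal pooling": per-edge deep temporal features are
# average-pooled onto a coarser skeleton obtained by merging adjacent edges.
# Illustrative only -- the merge map below is hand-written for a toy 6-edge chain.
import torch

def skeletal_pool(features, merge_map):
    """Pool per-edge features according to a list of edge groups.

    features:  tensor of shape (channels, num_edges, frames).
    merge_map: list of lists; each inner list contains the indices of edges
               that merge into one edge of the coarser skeleton.
    Averaging is differentiable, so the operator can sit inside an encoder.
    """
    pooled = [features[:, group, :].mean(dim=1) for group in merge_map]
    return torch.stack(pooled, dim=1)   # (channels, num_coarse_edges, frames)

# Toy usage: a 6-edge kinematic chain pooled down to 3 edges.
feats = torch.randn(32, 6, 120)         # 32 channels, 6 edges, 120 frames
merge = [[0, 1], [2, 3], [4, 5]]        # neighbouring edges merged pairwise
coarse = skeletal_pool(feats, merge)
print(coarse.shape)                     # torch.Size([32, 3, 120])
```

In the full framework described above, retargeting then amounts to encoding the source motion with skeleton-aware convolution and pooling down to the shared primal-skeleton latent space, and decoding it with the target skeleton's unpooling and convolution branch.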
Related papers
- SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition [25.341177384559174]
We propose a novel approach called Skeletal-Temporal Transformer (SkateFormer).
SkateFormer partitions joints and frames based on different types of skeletal-temporal relation.
It can selectively focus on key joints and frames crucial for action recognition in an action-adaptive manner.
arXiv Detail & Related papers (2024-03-14T15:55:53Z)
- SkeleTR: Towards Skeleton-based Action Recognition in the Wild [86.03082891242698]
SkeleTR is a new framework for skeleton-based action recognition.
It first models the intra-person skeleton dynamics for each skeleton sequence with graph convolutions.
It then uses stacked Transformer encoders to capture person interactions that are important for action recognition in general scenarios.
arXiv Detail & Related papers (2023-09-20T16:22:33Z)
- LAC: Latent Action Composition for Skeleton-based Action Segmentation [21.797658771678066]
Skeleton-based action segmentation requires recognizing composable actions in untrimmed videos.
Current approaches decouple this problem by first extracting local visual features from skeleton sequences and then processing them by a temporal model to classify frame-wise actions.
We propose Latent Action Composition (LAC), a novel self-supervised framework aiming at learning from synthesized composable motions for skeleton-based action segmentation.
arXiv Detail & Related papers (2023-08-28T11:20:48Z)
- Pose-aware Attention Network for Flexible Motion Retargeting by Body Part [17.637846838499737]
Motion retargeting is a fundamental problem in computer graphics and computer vision.
Existing approaches usually impose many strict requirements.
We propose a novel, flexible motion retargeting framework.
arXiv Detail & Related papers (2023-06-13T08:49:29Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are available only on the source dataset and unavailable on the target dataset during training.
We utilize a self-supervision scheme to reduce the domain shift between the two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised classification tasks (see the sketch after this list).
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- Hierarchical Neural Implicit Pose Network for Animation and Motion Retargeting [66.69067601079706]
HIPNet is a neural implicit pose network trained on multiple subjects across many poses.
We employ a hierarchical skeleton-based representation to learn a signed distance function on a canonical unposed space.
We achieve state-of-the-art results on various single-subject and multi-subject benchmarks.
arXiv Detail & Related papers (2021-12-02T03:25:46Z)
- Skeleton-Contrastive 3D Action Representation Learning [35.06361753065124]
This paper strives for self-supervised learning of a feature space suitable for skeleton-based action recognition.
Our approach achieves state-of-the-art performance for self-supervised learning from skeleton data on the challenging PKU and NTU datasets.
arXiv Detail & Related papers (2021-08-08T14:44:59Z)
- MotioNet: 3D Human Motion Reconstruction from Monocular Video with Skeleton Consistency [72.82534577726334]
We introduce MotioNet, a deep neural network that directly reconstructs the motion of a 3D human skeleton from monocular video.
Our method is the first data-driven approach that directly outputs a kinematic skeleton, which is a complete, commonly used, motion representation.
arXiv Detail & Related papers (2020-06-22T08:50:09Z)
- Image Co-skeletonization via Co-segmentation [102.59781674888657]
We propose a new joint processing topic: image co-skeletonization.
Object skeletonization in a single natural image is a challenging problem because there is hardly any prior knowledge about the object.
We propose a coupled framework for co-skeletonization and co-segmentation tasks so that they are well informed by each other.
arXiv Detail & Related papers (2020-04-12T09:35:54Z)
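The segment-permutation pretext task mentioned in the cross-dataset entry above can be sketched as follows. This is a rough illustration under assumed shapes (25 joints, 3D coordinates, three temporal segments), not that paper's implementation; `permute_segments`, `PermutationClassifier`, and the GRU encoder are illustrative choices.

```python
# Minimal sketch of a segment-permutation pretext task for skeleton sequences:
# the network must predict which temporal reordering was applied, which requires
# no action labels. Shapes and module choices are assumptions for illustration.
import itertools
import torch
import torch.nn as nn

NUM_SEGMENTS = 3
PERMS = list(itertools.permutations(range(NUM_SEGMENTS)))   # 6 possible orderings

def permute_segments(seq, perm_idx):
    """Split a (frames, joints, 3) sequence into equal temporal segments and reorder them."""
    segments = torch.chunk(seq, NUM_SEGMENTS, dim=0)
    return torch.cat([segments[i] for i in PERMS[perm_idx]], dim=0)

class PermutationClassifier(nn.Module):
    """Predicts which permutation was applied to the input sequence."""
    def __init__(self, joints=25, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(input_size=joints * 3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, len(PERMS))

    def forward(self, seq_batch):
        # seq_batch: (batch, frames, joints, 3) -> flatten joints/coords per frame
        b, t, j, c = seq_batch.shape
        _, h = self.encoder(seq_batch.reshape(b, t, j * c))
        return self.head(h[-1])                              # (batch, num_permutations)

# Toy usage: one unlabeled sequence, a random permutation as the self-supervised label.
seq = torch.randn(120, 25, 3)
label = torch.randint(len(PERMS), (1,))
shuffled = permute_segments(seq, int(label))
logits = PermutationClassifier()(shuffled.unsqueeze(0))
loss = nn.functional.cross_entropy(logits, label)
```

Because solving such a task needs no action labels, it can be trained on both source and target data, which is what makes this style of self-supervision usable for cross-dataset transfer.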