KShapeNet: Riemannian network on Kendall shape space for Skeleton based
Action Recognition
- URL: http://arxiv.org/abs/2011.12004v1
- Date: Tue, 24 Nov 2020 10:14:07 GMT
- Title: KShapeNet: Riemannian network on Kendall shape space for Skeleton based
Action Recognition
- Authors: Racha Friji, Hassen Drira, Faten Chaieb, Sebastian Kurtek, Hamza Kchok
- Abstract summary: We propose a geometry-aware deep learning approach for skeleton-based action recognition.
Skeleton sequences are first modeled as trajectories on Kendall's shape space and then mapped to the linear tangent space.
The resulting structured data are then fed to a deep learning architecture, which includes a layer that optimizes over rigid and non-rigid transformations.
- Score: 7.183483982542308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning architectures, albeit successful in most computer vision
tasks, were designed for data with an underlying Euclidean structure, an
assumption that is not always fulfilled, since pre-processed data may lie on a
non-linear space. In this paper, we propose a geometry-aware deep learning
approach for skeleton-based action recognition. Skeleton sequences are first
modeled as trajectories on Kendall's shape space and then mapped to the linear
tangent space. The resulting structured data are then fed to a deep learning
architecture, which includes a layer that optimizes over rigid and non-rigid
transformations of the 3D skeletons, followed by a CNN-LSTM network. The
assessment on two large-scale skeleton datasets, namely NTU-RGB+D and NTU-RGB+D
120, has shown that the proposed approach outperforms existing geometric deep
learning methods and is competitive with recently published approaches.
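The pipeline described in the abstract (remove translation and scale to project each skeleton frame onto Kendall's preshape sphere, remove rotation by Procrustes alignment, then linearize via the spherical log map at a reference shape) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the 25-joint layout, the choice of reference shape, and all function names are hypothetical.

```python
import numpy as np

def preshape(X):
    """Project a landmark configuration (k joints x 3 coords) onto
    Kendall's preshape sphere: remove translation and scale."""
    Xc = X - X.mean(axis=0, keepdims=True)   # center (remove translation)
    return Xc / np.linalg.norm(Xc)           # unit Frobenius norm (remove scale)

def align_rotation(mu, z):
    """Rotate preshape z to best match reference mu (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(z.T @ mu)
    # enforce a proper rotation (det = +1), not a reflection
    d = np.sign(np.linalg.det(U @ Vt))
    return z @ (U @ np.diag([1.0, 1.0, d]) @ Vt)

def log_map(mu, z):
    """Log map on the preshape sphere at base point mu: sends z to the
    linear tangent space at mu, where standard deep nets can operate."""
    inner = np.clip(np.sum(mu * z), -1.0, 1.0)
    theta = np.arccos(inner)
    if theta < 1e-10:                        # z coincides with mu
        return np.zeros_like(mu)
    return (theta / np.sin(theta)) * (z - inner * mu)

# Example: map one 25-joint skeleton frame to the tangent space at a
# reference shape; a sequence of frames yields a trajectory of such vectors.
rng = np.random.default_rng(0)
mu = preshape(rng.normal(size=(25, 3)))                    # reference shape
z = align_rotation(mu, preshape(rng.normal(size=(25, 3))))
v = log_map(mu, z)                                         # tangent vector
```

The flattened tangent vectors (one per frame) form the Euclidean-structured input that a downstream CNN-LSTM can consume; the resulting `v` always satisfies the tangency condition `<mu, v> = 0`.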
Related papers
- SkeletonMAE: Graph-based Masked Autoencoder for Skeleton Sequence
Pre-training [110.55093254677638]
We propose an efficient skeleton sequence learning framework, named Skeleton Sequence Learning (SSL).
In this paper, we build an asymmetric graph-based encoder-decoder pre-training architecture named SkeletonMAE.
Our SSL generalizes well across different datasets and outperforms the state-of-the-art self-supervised skeleton-based action recognition methods.
arXiv Detail & Related papers (2023-07-17T13:33:11Z) - Exploring Data Geometry for Continual Learning [64.4358878435983]
We study continual learning from a novel perspective by exploring data geometry for the non-stationary stream of data.
Our method dynamically expands the geometry of the underlying space to match growing geometric structures induced by new data.
Experiments show that our method achieves better performance than baseline methods designed in Euclidean space.
arXiv Detail & Related papers (2023-04-08T06:35:25Z) - GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z) - Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based
Action Recognition [88.34182299496074]
Action labels are only available on a source dataset, but unavailable on a target dataset in the training stage.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised learning classification tasks.
arXiv Detail & Related papers (2022-07-17T07:05:39Z) - Depth Completion using Geometry-Aware Embedding [22.333381291860498]
This paper proposes an efficient method to learn geometry-aware embedding.
It encodes the local and global geometric structure information from 3D points, e.g., scene layout and object sizes and shapes, to guide dense depth estimation.
arXiv Detail & Related papers (2022-03-21T12:06:27Z) - SkeletonNet: A Topology-Preserving Solution for Learning Mesh
Reconstruction of Object Surfaces from RGB Images [85.66560542483286]
This paper focuses on the challenging task of learning 3D object surface reconstructions from RGB images.
We propose two models, the Skeleton-Based Graph Convolutional Neural Network (SkeGCNN) and the Skeleton-Regularized Deep Implicit Surface Network (SkeDISN).
We conduct thorough experiments that verify the efficacy of our proposed SkeletonNet.
arXiv Detail & Related papers (2020-08-13T07:59:25Z) - Mix Dimension in Poincaré Geometry for 3D Skeleton-based Action
Recognition [57.98278794950759]
Graph Convolutional Networks (GCNs) have already demonstrated their powerful ability to model irregular data.
We present a novel spatial-temporal GCN architecture defined via the Poincaré geometry.
We evaluate our method on two of the largest-scale 3D datasets currently available.
arXiv Detail & Related papers (2020-07-30T18:23:18Z) - Data-driven effective model shows a liquid-like deep learning [2.0711789781518752]
It remains unknown what the landscape looks like for deep networks of binary synapses.
We propose a statistical mechanics framework by directly building a least structured model of the high-dimensional weight space.
Our data-driven model thus provides a statistical mechanics insight about why deep learning is unreasonably effective in terms of the high-dimensional weight space.
arXiv Detail & Related papers (2020-07-16T04:02:48Z) - Deep Manifold Prior [37.725563645899584]
We present a prior for manifold structured data, such as surfaces of 3D shapes, where deep neural networks are adopted to reconstruct a target shape using gradient descent.
We show that surfaces generated this way are smooth, with limiting behavior characterized by Gaussian processes, and we mathematically derive such properties for fully-connected as well as convolutional networks.
arXiv Detail & Related papers (2020-04-08T20:47:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.