JOLO-GCN: Mining Joint-Centered Light-Weight Information for
Skeleton-Based Action Recognition
- URL: http://arxiv.org/abs/2011.07787v1
- Date: Mon, 16 Nov 2020 08:39:22 GMT
- Title: JOLO-GCN: Mining Joint-Centered Light-Weight Information for
Skeleton-Based Action Recognition
- Authors: Jinmiao Cai, Nianjuan Jiang, Xiaoguang Han, Kui Jia, Jiangbo Lu
- Abstract summary: We propose a novel framework for employing human pose skeleton and joint-centered light-weight information jointly in a two-stream graph convolutional network.
Compared to the pure skeleton-based baseline, this hybrid scheme effectively boosts performance, while keeping the computational and memory overheads low.
- Score: 47.47099206295254
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Skeleton-based action recognition has attracted research attention in recent
years. One common drawback in currently popular skeleton-based human action
recognition methods is that the sparse skeleton information alone is not
sufficient to fully characterize human motion. This limitation makes several
existing methods incapable of correctly classifying action categories which
exhibit only subtle motion differences. In this paper, we propose a novel
framework for employing human pose skeleton and joint-centered light-weight
information jointly in a two-stream graph convolutional network, namely,
JOLO-GCN. Specifically, we use Joint-aligned optical Flow Patches (JFP) to
capture the local subtle motion around each joint as the pivotal joint-centered
visual information. Compared to the pure skeleton-based baseline, this hybrid
scheme effectively boosts performance, while keeping the computational and
memory overheads low. Experiments on the NTU RGB+D, NTU RGB+D 120, and the
Kinetics-Skeleton dataset demonstrate clear accuracy improvements attained by
the proposed method over the state-of-the-art skeleton-based methods.
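As an illustration of the JFP idea described above, below is a minimal sketch, not the authors' implementation, of cropping fixed-size optical-flow patches centered on 2D joint locations. The flow estimator (OpenCV's Farneback), the 32-pixel patch size, and the zero-padding at image borders are assumptions made for the example.

```python
import cv2
import numpy as np

def extract_jfp(prev_frame, next_frame, joints_2d, patch_size=32):
    """Crop one optical-flow patch per 2D joint (illustrative sketch).

    prev_frame, next_frame: grayscale uint8 frames of shape (H, W).
    joints_2d: (J, 2) array of (x, y) pixel coordinates.
    Returns a (J, patch_size, patch_size, 2) array of flow patches.
    """
    # Dense optical flow between consecutive frames. The paper does not
    # tie JFP to a specific estimator; Farneback is used here only
    # because it ships with core OpenCV.
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    half = patch_size // 2
    # Zero-pad so patches near the image border keep a fixed size.
    padded = np.pad(flow, ((half, half), (half, half), (0, 0)))

    patches = []
    for x, y in joints_2d:
        cx, cy = int(round(x)) + half, int(round(y)) + half
        patches.append(padded[cy - half:cy + half, cx - half:cx + half])
    return np.stack(patches)
```

Stacking such patches over time gives a compact, joint-centered input for the second (JFP) stream, which is consistent with the abstract's claim of low computational and memory overhead compared with processing full RGB frames.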
Related papers
- Joint Temporal Pooling for Improving Skeleton-based Action Recognition [4.891381363264954]
In skeleton-based human action recognition, temporal pooling is a critical step for capturing the relationships among joint dynamics.
This paper presents a novel Joint Motion Adaptive Temporal Pooling (JMAP) method for improving skeleton-based action recognition.
The efficacy of JMAP has been validated through experiments on the popular NTU RGB+D 120 and PKU-MMD datasets.
arXiv Detail & Related papers (2024-08-18T04:40:16Z)
- SkateFormer: Skeletal-Temporal Transformer for Human Action Recognition [25.341177384559174]
We propose a novel approach called the Skeletal-Temporal Transformer (SkateFormer).
SkateFormer partitions joints and frames based on different types of skeletal-temporal relation.
It can selectively focus on key joints and frames crucial for action recognition in an action-adaptive manner.
arXiv Detail & Related papers (2024-03-14T15:55:53Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for one-shot skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Learning from Temporal Spatial Cubism for Cross-Dataset Skeleton-based Action Recognition [88.34182299496074]
Action labels are available only for the source dataset and unavailable for the target dataset during training.
We utilize a self-supervision scheme to reduce the domain shift between two skeleton-based action datasets.
By segmenting and permuting temporal segments or human body parts, we design two self-supervised classification tasks; a sketch of the temporal variant follows.
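As a rough illustration of the temporal pretext task, here is a minimal sketch assuming three temporal segments and a plain NumPy pipeline (both assumptions, not the paper's exact design). The network is trained to classify which permutation was applied, which forces it to learn temporal order.

```python
import itertools
import numpy as np

# All orderings of three temporal segments; the permutation index is
# the pretext-task label the network learns to classify.
PERMS = list(itertools.permutations(range(3)))

def permute_segments(seq, rng):
    """seq: (T, J, C) skeleton sequence. Returns (shuffled_seq, label)."""
    segments = np.array_split(seq, 3, axis=0)  # split along time
    label = int(rng.integers(len(PERMS)))
    shuffled = np.concatenate([segments[i] for i in PERMS[label]], axis=0)
    return shuffled, label

# Example usage:
# rng = np.random.default_rng(0)
# x, y = permute_segments(np.zeros((60, 25, 3)), rng)
```

The body-part variant would be analogous: split along the joint axis according to body-part groups and permute those instead.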
arXiv Detail & Related papers (2022-07-17T07:05:39Z)
- Skeleton-based Action Recognition via Adaptive Cross-Form Learning [75.92422282666767]
Skeleton-based action recognition aims to project skeleton sequences to action categories, where sequences are derived from multiple forms of pre-detected points.
Existing methods tend to improve GCNs by leveraging multi-form skeletons due to their complementary cues.
We present Adaptive Cross-Form Learning (ACFL), which empowers well-designed GCNs to generate complementary representation from single-form skeletons.
arXiv Detail & Related papers (2022-06-30T07:40:03Z)
- Joint-bone Fusion Graph Convolutional Network for Semi-supervised Skeleton Action Recognition [65.78703941973183]
We propose a novel correlation-driven joint-bone fusion graph convolutional network (CD-JBF-GCN) as an encoder and use a pose prediction head as a decoder.
Specifically, the CD-JBF-GCN can explore the motion transmission between the joint stream and the bone stream.
The pose-prediction-based auto-encoder in the self-supervised training stage allows the network to learn motion representations from unlabeled data.
arXiv Detail & Related papers (2022-02-08T16:03:15Z)
- Revisiting Skeleton-based Action Recognition [107.08112310075114]
PoseC3D is a new approach to skeleton-based action recognition that relies on a stack of 3D heatmaps, rather than a graph sequence, as the base representation of human skeletons.
On four challenging datasets, PoseC3D consistently obtains superior performance, whether used alone on skeletons or in combination with the RGB modality; a minimal sketch of the heatmap representation follows.
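For intuition, here is a minimal sketch, not PoseC3D's actual code, of rendering one frame's 2D joints as Gaussian pseudo-heatmaps; stacking the per-frame maps along time yields the heatmap volume that a 3D-CNN consumes. The Gaussian sigma and output resolution are assumptions.

```python
import numpy as np

def joints_to_heatmaps(joints_2d, h, w, sigma=2.0):
    """Render one frame's joints as per-joint Gaussian heatmaps.

    joints_2d: (J, 2) array of (x, y) coordinates.
    Returns a (J, h, w) float32 array; stacking T frames gives the
    (J, T, h, w) volume used as input to a 3D-CNN.
    """
    ys, xs = np.mgrid[0:h, 0:w]  # pixel coordinate grids
    heatmaps = np.empty((len(joints_2d), h, w), dtype=np.float32)
    for j, (x, y) in enumerate(joints_2d):
        heatmaps[j] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heatmaps
```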
arXiv Detail & Related papers (2021-04-28T06:32:17Z)
- Richly Activated Graph Convolutional Network for Robust Skeleton-based Action Recognition [22.90127409366107]
A richly activated graph convolutional network (RA-GCN) is proposed to explore sufficient discriminative features spread over all skeleton joints.
The RA-GCN achieves comparable performance on the standard NTU RGB+D 60 and 120 datasets.
arXiv Detail & Related papers (2020-08-09T19:06:29Z)
- Predictively Encoded Graph Convolutional Network for Noise-Robust Skeleton-based Action Recognition [6.729108277517129]
We propose a skeleton-based action recognition method that is robust to noise in the given skeleton features.
Our approach achieves outstanding performance on noisy skeleton samples compared with existing state-of-the-art methods.
arXiv Detail & Related papers (2020-03-17T03:37:36Z)