REGINA - Reasoning Graph Convolutional Networks in Human Action Recognition
- URL: http://arxiv.org/abs/2105.06711v1
- Date: Fri, 14 May 2021 08:46:42 GMT
- Title: REGINA - Reasoning Graph Convolutional Networks in Human Action Recognition
- Authors: Bruno Degardin, Vasco Lopes and Hugo Proença
- Abstract summary: This paper describes REGINA, a novel way of REasoning with Graph convolutional networks IN Human Action recognition.
The proposed strategy can be easily integrated into existing GCN-based methods.
- Score: 1.2891210250935146
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: It is known that the kinematics of the human body skeleton reveals
valuable information in action recognition. Recently, modeling skeletons as
spatio-temporal graphs with Graph Convolutional Networks (GCNs) has been
reported to solidly advance the state-of-the-art performance. However,
GCN-based approaches learn exclusively from raw skeleton data and are expected
to extract the inherent structural information on their own. This paper
describes REGINA, introducing a novel way of REasoning with Graph convolutional
networks IN Human Action recognition. The rationale is to provide the GCNs with
additional knowledge about the skeleton data, obtained from handcrafted
features, in order to facilitate the learning process while guaranteeing that
the model remains fully trainable in an end-to-end manner. The challenge is to
capture complementary information over the dynamics between consecutive frames,
which is the key information extracted by state-of-the-art GCN techniques.
Moreover, the proposed strategy can be easily integrated into existing
GCN-based methods, which we regard as an additional advantage. Our experiments
were carried out on well-known action recognition datasets and show that REGINA
yields solid performance improvements when incorporated into other GCN-based
approaches, without any further adjustment to the original methods. For
reproducibility, the REGINA code and all the experiments carried out will be
publicly available at https://github.com/DegardinBruno.
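For intuition only, the following minimal PyTorch sketch illustrates the kind of integration the abstract describes: handcrafted inter-frame dynamics (here, simple joint displacements between consecutive frames) are concatenated as extra input channels to a generic GCN backbone, so the whole model stays trainable end-to-end. The (N, C, T, V) tensor layout, the toy backbone and the choice of frame differences are assumptions for illustration, not the exact REGINA formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HandcraftedDynamics(nn.Module):
    """Appends handcrafted motion cues (frame-to-frame joint displacements)
    to the raw skeleton tensor, laid out as (N, C, T, V)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        motion = x[:, :, 1:, :] - x[:, :, :-1, :]      # displacement per frame
        motion = F.pad(motion, (0, 0, 1, 0))           # zero-pad the first frame
        return torch.cat([x, motion], dim=1)           # extra channels, differentiable

class TinyGCNBackbone(nn.Module):
    """Stand-in for any existing GCN-based recognizer (e.g. an ST-GCN variant)."""

    def __init__(self, in_channels: int, num_joints: int, num_classes: int):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_joints))           # learnable joint graph
        self.theta = nn.Conv2d(in_channels, 64, kernel_size=1)   # feature transform
        self.temporal = nn.Conv2d(64, 64, kernel_size=(9, 1), padding=(4, 0))
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.theta(x)                                  # (N, 64, T, V)
        x = torch.einsum("nctv,vw->nctw", x, self.adj)     # spatial graph convolution
        x = torch.relu(self.temporal(x))                   # temporal convolution
        return self.head(x.mean(dim=(2, 3)))               # pool over time and joints

model = nn.Sequential(HandcraftedDynamics(),
                      TinyGCNBackbone(in_channels=6, num_joints=25, num_classes=60))
logits = model(torch.randn(2, 3, 100, 25))  # two clips, 3 coords, 100 frames, 25 joints
```

In this sketch any existing GCN-based recognizer could take the place of the toy backbone without further changes, which is the plug-in property the abstract emphasizes.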
Related papers
- DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z)
- Pose-Guided Graph Convolutional Networks for Skeleton-Based Action Recognition [32.07659338674024]
Graph convolutional networks (GCNs) can model the human body skeletons as spatial and temporal graphs.
In this work, we propose pose-guided GCN (PG-GCN), a multi-modal framework for high-performance human action recognition.
The core idea of this module is to utilize a trainable graph to aggregate features from the skeleton stream with those of the pose stream, which leads to a network with more robust feature representation ability.
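As a rough illustration of that trainable-graph fusion, the sketch below mixes pose-stream features over a learnable joint-to-joint graph before combining them with the skeleton stream; the shapes, names and single-graph design are assumptions, not the actual PG-GCN module.

```python
import torch
import torch.nn as nn

class TrainableGraphFusion(nn.Module):
    """Illustrative two-stream fusion: a learnable joint-to-joint graph mixes
    pose-stream features before they are combined with the skeleton stream.
    Both inputs are assumed to be (N, C, T, V) feature maps."""

    def __init__(self, channels: int, num_joints: int):
        super().__init__()
        self.graph = nn.Parameter(torch.eye(num_joints))          # trainable aggregation graph
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, skel_feat: torch.Tensor, pose_feat: torch.Tensor) -> torch.Tensor:
        pose_mixed = torch.einsum("nctv,vw->nctw", pose_feat, self.graph)
        return self.proj(torch.cat([skel_feat, pose_mixed], dim=1))

fusion = TrainableGraphFusion(channels=64, num_joints=25)
fused = fusion(torch.randn(2, 64, 50, 25), torch.randn(2, 64, 50, 25))  # (2, 64, 50, 25)
```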
arXiv Detail & Related papers (2022-10-10T02:08:49Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Robust Knowledge Adaptation for Dynamic Graph Neural Networks [61.8505228728726]
We propose Ada-DyGNN: a robust knowledge Adaptation framework via reinforcement learning for Dynamic Graph Neural Networks.
Our approach constitutes the first attempt to explore robust knowledge adaptation via reinforcement learning.
Experiments on three benchmark datasets demonstrate that Ada-DyGNN achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-07-22T02:06:53Z)
- Joint-bone Fusion Graph Convolutional Network for Semi-supervised Skeleton Action Recognition [65.78703941973183]
We propose a novel correlation-driven joint-bone fusion graph convolutional network (CD-JBF-GCN) as an encoder and use a pose prediction head as a decoder.
Specifically, the CD-JBF-GCN can explore the motion transmission between the joint stream and the bone stream.
The pose-prediction-based auto-encoder in the self-supervised training stage allows the network to learn motion representations from unlabeled data.
arXiv Detail & Related papers (2022-02-08T16:03:15Z)
- UNIK: A Unified Framework for Real-world Skeleton-based Action Recognition [11.81043814295441]
We introduce UNIK, a novel skeleton-based action recognition method that is able to generalize across datasets.
To study the cross-domain generalizability of action recognition in real-world videos, we re-evaluate state-of-the-art approaches as well as the proposed UNIK.
Results show that the proposed UNIK, with pre-training on Posetics, generalizes well and outperforms the state of the art when transferred to four target action classification datasets.
arXiv Detail & Related papers (2021-07-19T02:00:28Z)
- Skeleton-based Hand-Gesture Recognition with Lightweight Graph Convolutional Networks [14.924672048447338]
Graph convolutional networks (GCNs) aim at extending deep learning to arbitrary irregular domains, such as graphs.
We introduce a novel method that learns the topology of input graphs as a part of GCN design.
Experiments conducted on the challenging task of skeleton-based hand-gesture recognition show the high effectiveness of the learned GCNs.
arXiv Detail & Related papers (2021-04-09T09:06:53Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
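A hypothetical sketch of such a multi-granularity building block is shown below: parallel temporal convolutions with different kernel sizes sit next to a spatial graph branch, and the branch outputs are concatenated. The branch choices and shapes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class InceptionSTBlock(nn.Module):
    """Illustrative multi-granularity block: a spatial graph branch runs in
    parallel with temporal convolutions of different kernel sizes, and the
    branch outputs are concatenated. Input and output are (N, C, T, V)."""

    def __init__(self, channels: int, num_joints: int):
        super().__init__()
        branch = channels // 4
        self.adj = nn.Parameter(torch.eye(num_joints))           # learnable joint graph
        self.spatial = nn.Conv2d(channels, branch, kernel_size=1)
        self.temporal_short = nn.Conv2d(channels, branch, kernel_size=(3, 1), padding=(1, 0))
        self.temporal_long = nn.Conv2d(channels, branch, kernel_size=(9, 1), padding=(4, 0))
        self.pointwise = nn.Conv2d(channels, branch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spatial = torch.einsum("nctv,vw->nctw", self.spatial(x), self.adj)
        out = torch.cat([spatial,
                         self.temporal_short(x),
                         self.temporal_long(x),
                         self.pointwise(x)], dim=1)
        return torch.relu(out) + x  # residual connection keeps blocks stackable

block = InceptionSTBlock(channels=64, num_joints=25)
y = block(torch.randn(2, 64, 50, 25))  # shape preserved: (2, 64, 50, 25)
```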
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Knowledge Embedding Based Graph Convolutional Network [35.35776808660919]
This paper proposes a novel framework, namely the Knowledge Embedding based Graph Convolutional Network (KE-GCN).
KE-GCN combines the power of Graph Convolutional Network (GCN) in graph-based belief propagation and the strengths of advanced knowledge embedding methods.
Our theoretical analysis shows that KE-GCN offers an elegant unification of several well-known GCN methods as specific cases.
arXiv Detail & Related papers (2020-06-12T17:12:51Z)
- Distilling Knowledge from Graph Convolutional Networks [146.71503336770886]
Existing knowledge distillation methods focus on convolutional neural networks (CNNs).
We propose the first dedicated approach to distilling knowledge from a pre-trained graph convolutional network (GCN) model.
We show that our method achieves the state-of-the-art knowledge distillation performance for GCN models.
arXiv Detail & Related papers (2020-03-23T18:23:11Z)
- Feedback Graph Convolutional Network for Skeleton-based Action Recognition [38.782491442635205]
We propose a novel network, named the Feedback Graph Convolutional Network (FGCN).
This is the first work that introduces the feedback mechanism into GCNs and action recognition.
It achieves state-of-the-art performance on three datasets.
arXiv Detail & Related papers (2020-03-17T07:20:47Z)