High-Performance Inference Graph Convolutional Networks for Skeleton-Based Action Recognition
- URL: http://arxiv.org/abs/2305.18710v2
- Date: Tue, 18 Jun 2024 09:50:21 GMT
- Title: High-Performance Inference Graph Convolutional Networks for Skeleton-Based Action Recognition
- Authors: Ziao Li, Junyi Wang, Bangli Liu, Haibin Cai, Mohamad Saada, Qinggang Meng
- Abstract summary: We propose two novel high-performance inference GCNs, namely HPI-GCN-RP and HPI-GCN-OP.
Our HPI-GCN-OP achieves an accuracy of 93% on the cross-subject split of the NTU-RGB+D 60 dataset and 90.1% on the cross-subject benchmark of the NTU-RGB+D 120 dataset.
- Score: 6.728040264083982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, significant achievements have been made in skeleton-based human action recognition with the emergence of graph convolutional networks (GCNs). However, the state-of-the-art (SOTA) models used for this task focus on constructing more complex higher-order connections between joint nodes to describe skeleton information, which leads to complex inference processes and high computational costs. To address the slow inference speed caused by overly complex model structures, we introduce re-parameterization and over-parameterization techniques to GCNs and propose two novel high-performance inference GCNs, namely HPI-GCN-RP and HPI-GCN-OP. After model training is complete, the model parameters are fixed. HPI-GCN-RP adopts a re-parameterization technique to transform the high-performance training model into a fast inference model through linear transformations, achieving a higher inference speed with competitive model performance. HPI-GCN-OP further utilizes an over-parameterization technique to achieve a larger performance improvement by introducing additional inference parameters, albeit at a slightly lower inference speed. Experimental results on two skeleton-based action recognition datasets demonstrate the effectiveness of our approach. Our HPI-GCN-OP achieves performance comparable to current SOTA models, with inference speeds five times faster. Specifically, it achieves an accuracy of 93% on the cross-subject split of the NTU-RGB+D 60 dataset and 90.1% on the cross-subject benchmark of the NTU-RGB+D 120 dataset. Code is available at github.com/lizaowo/HPI-GCN.
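As a rough illustration of the re-parameterization principle behind HPI-GCN-RP (a minimal PyTorch sketch in the style of RepVGG, not the authors' code; the two-branch block and channel sizes are illustrative assumptions), a multi-branch training block is collapsed into a single convolution by linear transformation of its trained weights:

```python
# Minimal sketch of structural re-parameterization: two parallel linear
# branches (3x3 conv + BN, 1x1 conv + BN) fold into one 3x3 conv after
# training, so inference pays for a single operator.
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d):
    """Fold a BatchNorm layer into the preceding convolution's weight/bias."""
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                      # per-output-channel scale
    w = conv.weight * scale.reshape(-1, 1, 1, 1)
    b = bn.bias - bn.running_mean * scale
    if conv.bias is not None:
        b = b + conv.bias * scale
    return w, b

class RepBlock(nn.Module):
    """Training-time block: a 3x3 branch and a 1x1 branch, summed."""
    def __init__(self, ch):
        super().__init__()
        self.conv3 = nn.Conv2d(ch, ch, 3, padding=1, bias=False)
        self.bn3 = nn.BatchNorm2d(ch)
        self.conv1 = nn.Conv2d(ch, ch, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(ch)

    def forward(self, x):
        return self.bn3(self.conv3(x)) + self.bn1(self.conv1(x))

    def reparameterize(self) -> nn.Conv2d:
        """Collapse both branches into a single 3x3 conv for fast inference."""
        w3, b3 = fuse_conv_bn(self.conv3, self.bn3)
        w1, b1 = fuse_conv_bn(self.conv1, self.bn1)
        # Pad the 1x1 kernel to 3x3 so the two weight tensors can be summed.
        w1 = nn.functional.pad(w1, [1, 1, 1, 1])
        fused = nn.Conv2d(w3.shape[1], w3.shape[0], 3, padding=1)
        fused.weight.data = w3 + w1
        fused.bias.data = b3 + b1
        return fused

block = RepBlock(8).eval()            # eval() so BN uses running statistics
x = torch.randn(1, 8, 5, 5)
fused = block.reparameterize()
assert torch.allclose(block(x), fused(x), atol=1e-5)
```

HPI-GCN-OP follows the same training-then-transform recipe but retains additional inference parameters in exchange for higher accuracy, which is why its inference speed is slightly lower.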
Related papers
- Topological Symmetry Enhanced Graph Convolution for Skeleton-Based Action Recognition [11.05325139231301]
Skeleton-based action recognition has achieved remarkable performance with the development of graph convolutional networks (GCNs)
We propose a novel Topological Symmetry Enhanced Graph Convolution (TSE-GC) to enable distinct topology learning across different channel partitions.
We also construct a Multi-Branch Deformable Temporal Convolution (MBDTC) for skeleton-based action recognition.
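A hypothetical sketch of the distinct-topology idea (not the TSE-GC code; the partition count, initialization, and shapes are assumptions): channels are split into partitions, each aggregating over its own learnable adjacency.

```python
# Illustrative sketch: a distinct learnable topology per channel partition.
import torch
import torch.nn as nn

class PartitionedGraphConv(nn.Module):
    def __init__(self, channels, num_joints, partitions=4):
        super().__init__()
        assert channels % partitions == 0
        self.partitions = partitions
        # One learnable adjacency (topology) per channel group.
        self.adj = nn.Parameter(torch.randn(partitions, num_joints, num_joints) * 0.01)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        groups = x.chunk(self.partitions, dim=1)
        out = [torch.einsum('bctv,vw->bctw', g, a)   # per-partition graph conv
               for g, a in zip(groups, self.adj)]
        return self.proj(torch.cat(out, dim=1))

layer = PartitionedGraphConv(64, num_joints=25)
y = layer(torch.randn(2, 64, 16, 25))   # NTU skeletons have 25 joints
```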
arXiv Detail & Related papers (2024-11-19T15:23:59Z)
- Flattened Graph Convolutional Networks For Recommendation [18.198536511983452]
This paper proposes the flattened GCN (FlatGCN) model, which achieves superior performance with remarkably lower complexity than existing models.
First, we propose a simplified but powerful GCN architecture which aggregates the neighborhood information using one flattened GCN layer.
Second, we propose an informative neighbor-infomax sampling method to select the most valuable neighbors by measuring the correlation among neighboring nodes.
Third, we propose a layer ensemble technique which improves the expressiveness of the learned representations by assembling the layer-wise neighborhood representations at the final layer.
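The flattened aggregation can be pictured as follows (a hypothetical sketch; the paper's infomax neighbor sampling is abstracted into a precomputed index tensor, and all names are illustrative):

```python
# One flattened aggregation layer: each node mixes its own embedding with a
# mean over a pre-selected neighbor set, with no deep recursive message
# passing.
import torch
import torch.nn as nn

class FlatGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, emb, neighbor_idx):
        # emb: (N, d) node embeddings; neighbor_idx: (N, k) sampled neighbors
        neigh = emb[neighbor_idx].mean(dim=1)          # (N, d) one-hop mean
        return self.proj(torch.cat([emb, neigh], -1))  # combine self + hood

emb = torch.randn(100, 32)
idx = torch.randint(0, 100, (100, 8))   # stand-in for infomax-sampled IDs
out = FlatGCNLayer(32)(emb, idx)
```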
arXiv Detail & Related papers (2022-09-25T12:53:50Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
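A minimal sketch of gradual magnitude pruning applied during training, in the spirit of CGP (the real framework also prunes graph structure; the cubic schedule and thresholding here are illustrative assumptions):

```python
# Gradual magnitude pruning inside the training loop, so no separate
# re-training pass is needed once training finishes.
import torch

def prune_step(weight: torch.Tensor, target_sparsity: float):
    """Zero out the smallest-magnitude entries up to target_sparsity."""
    k = int(weight.numel() * target_sparsity)
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > threshold              # keep-mask

def sparsity_schedule(step, total_steps, final_sparsity=0.9):
    """Cubic schedule: prune gently at first, aggressively later."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1 - (1 - frac) ** 3)

w = torch.randn(256, 256)
for step in range(0, 1001, 100):
    mask = prune_step(w, sparsity_schedule(step, 1000))
    w = w * mask                                  # applied each train step
```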
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Continual Spatio-Temporal Graph Convolutional Networks [87.86552250152872]
We reformulate the Spatio-Temporal Graph Convolutional Neural Network as a Continual Inference Network.
We observe up to 109x reduction in time complexity, on-hardware accelerations of 26x, and reductions in maximum allocated memory of 52% during online inference.
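The continual-inference mechanism can be sketched as follows (a simplified illustration, not the paper's implementation): a temporal convolution caches the last kernel_size - 1 frames and emits one output per incoming frame instead of recomputing the whole clip.

```python
# Continual inference for a temporal convolution: a small ring buffer of
# cached frames turns a clip-wise operator into a frame-wise one.
import torch
import torch.nn as nn

class ContinualTemporalConv:
    def __init__(self, conv: nn.Conv1d):
        self.conv = conv
        self.buffer = None                    # holds the last k-1 frames

    def step(self, frame: torch.Tensor):
        # frame: (batch, channels, 1) -- a single new time step
        k = self.conv.kernel_size[0]
        if self.buffer is None:
            self.buffer = frame.repeat(1, 1, k - 1)      # pad-like warm start
        window = torch.cat([self.buffer, frame], dim=2)  # (b, c, k)
        self.buffer = window[:, :, 1:]        # slide the buffer forward
        return self.conv(window)              # (b, c_out, 1)

conv = nn.Conv1d(8, 16, kernel_size=3)
ct = ContinualTemporalConv(conv)
out = ct.step(torch.randn(2, 8, 1))           # one output per frame
```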
arXiv Detail & Related papers (2022-03-21T14:23:18Z)
- Parameterized Hypercomplex Graph Neural Networks for Graph Classification [1.1852406625172216]
We develop graph neural networks that leverage the properties of hypercomplex feature transformation.
In particular, in our proposed class of models, the multiplication rule specifying the algebra itself is inferred from the data during training.
We test our proposed hypercomplex GNN on several open graph benchmark datasets and show that our models reach state-of-the-art performance.
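A hedged sketch of a parameterized hypercomplex linear layer (illustrative shapes and initialization; not the authors' code): the weight is a sum of Kronecker products in which the small matrices encoding the algebra's multiplication rule are themselves learned from data.

```python
# Parameterized hypercomplex (PHM-style) linear layer: W = sum_i A_i (x) S_i,
# where the n x n matrices A_i represent the learned multiplication rule.
import torch
import torch.nn as nn

class PHMLinear(nn.Module):
    def __init__(self, in_dim, out_dim, n=4):
        super().__init__()
        assert in_dim % n == 0 and out_dim % n == 0
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.1)   # learned rule
        self.S = nn.Parameter(torch.randn(n, out_dim // n, in_dim // n) * 0.1)

    def forward(self, x):
        # Build the full weight as a sum of Kronecker products.
        W = torch.sum(torch.stack(
            [torch.kron(self.A[i], self.S[i]) for i in range(self.A.shape[0])]
        ), dim=0)                             # (out_dim, in_dim)
        return x @ W.t()

layer = PHMLinear(64, 32, n=4)
y = layer(torch.randn(10, 64))
```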
arXiv Detail & Related papers (2021-03-30T18:01:06Z)
- On the spatial attention in Spatio-Temporal Graph Convolutional Networks for skeleton-based human action recognition [97.14064057840089]
Graph convolutional networks (GCNs) have shown promising performance in skeleton-based human action recognition by modeling a sequence of skeletons as a graph.
Most of the recently proposed spatio-temporal GCN-based methods improve performance by learning the graph structure at each layer of the network.
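A common concrete instance of per-layer graph-structure learning looks like the following (a hypothetical sketch, not any specific paper's code): each layer adds a freely learnable residual adjacency to the fixed skeletal one.

```python
# Per-layer adaptive topology: fixed skeleton adjacency plus a learnable
# residual adjacency that is refined during training.
import torch
import torch.nn as nn

class AdaptiveGraphConv(nn.Module):
    def __init__(self, in_ch, out_ch, skeleton_adj: torch.Tensor):
        super().__init__()
        self.register_buffer('A_fixed', skeleton_adj)    # physical bones
        self.A_learn = nn.Parameter(torch.zeros_like(skeleton_adj))
        self.theta = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        # x: (batch, channels, frames, joints)
        A = self.A_fixed + self.A_learn                  # refined topology
        return self.theta(torch.einsum('bctv,vw->bctw', x, A))

A = torch.eye(25)                  # stand-in for the NTU skeleton graph
layer = AdaptiveGraphConv(3, 64, A)
y = layer(torch.randn(4, 3, 32, 25))
```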
arXiv Detail & Related papers (2020-11-07T19:03:04Z)
- Temporal Attention-Augmented Graph Convolutional Network for Efficient Skeleton-Based Human Action Recognition [97.14064057840089]
Graph convolutional networks (GCNs) have been very successful in modeling non-Euclidean data structures.
Most GCN-based action recognition methods use deep feed-forward networks with high computational complexity to process all skeletons in an action.
We propose a temporal attention module (TAM) to increase efficiency in skeleton-based action recognition.
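The efficiency mechanism can be sketched as follows (a hypothetical illustration; the scoring network and the number of kept frames are assumptions, not the paper's TAM): frames are scored, and only the most salient skeletons are passed on to the heavy GCN backbone.

```python
# Temporal attention for frame selection: score every frame, keep the top-k
# most informative skeletons, preserve their temporal order.
import torch
import torch.nn as nn

class TemporalAttentionSelect(nn.Module):
    def __init__(self, channels, keep_frames):
        super().__init__()
        self.score = nn.Linear(channels, 1)
        self.keep = keep_frames

    def forward(self, x):
        # x: (batch, frames, joints, channels)
        frame_feat = x.mean(dim=2)                   # (b, t, c) pool joints
        att = self.score(frame_feat).squeeze(-1)     # (b, t) frame saliency
        idx = att.topk(self.keep, dim=1).indices     # most salient frames
        idx = idx.sort(dim=1).values                 # keep temporal order
        batch = torch.arange(x.size(0)).unsqueeze(1)
        return x[batch, idx]                         # (b, keep, joints, c)

sel = TemporalAttentionSelect(channels=3, keep_frames=16)
y = sel(torch.randn(2, 64, 25, 3))    # 64 frames reduced to 16
```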
arXiv Detail & Related papers (2020-10-23T08:01:55Z)
- Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation [56.73834525802723]
We propose Lightweight Dynamic Graph Convolutional Networks (LDGCNs).
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter-saving strategies, based on group graph convolutions and weight-tied convolutions, to reduce memory usage and model complexity.
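Both parameter-saving ideas can be sketched together (hypothetical code; the group count and dimensions are assumptions): group graph convolution shrinks the per-layer weights, and weight tying reuses one layer across propagation steps.

```python
# Group graph convolution with weight tying: each channel group gets a small
# (dim/g x dim/g) weight instead of a full dim x dim matrix, and the same
# layer is applied repeatedly.
import torch
import torch.nn as nn

class GroupGraphConv(nn.Module):
    def __init__(self, dim, groups=4):
        super().__init__()
        self.groups = groups
        self.W = nn.Parameter(torch.randn(groups, dim // groups, dim // groups) * 0.1)

    def forward(self, h, adj):
        # h: (num_nodes, dim), adj: (num_nodes, num_nodes)
        parts = h.chunk(self.groups, dim=1)
        h = torch.cat([adj @ p @ w for p, w in zip(parts, self.W)], dim=1)
        return torch.relu(h)

layer = GroupGraphConv(64, groups=4)
h, adj = torch.randn(10, 64), torch.softmax(torch.randn(10, 10), dim=1)
for _ in range(3):        # weight tying: the same layer reused per step
    h = layer(h, adj)
```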
arXiv Detail & Related papers (2020-10-09T06:03:46Z)
- Hyperparameter Optimization in Neural Networks via Structured Sparse Recovery [54.60327265077322]
We study two important problems in the automated design of neural networks through the lens of sparse recovery methods.
In the first part of this paper, we establish a novel connection between hyperparameter optimization (HPO) and structured sparse recovery.
In the second part of this paper, we establish a connection between neural architecture search (NAS) and structured sparse recovery.
arXiv Detail & Related papers (2020-07-07T00:57:09Z)
- Single-Layer Graph Convolutional Networks For Recommendation [17.3621098912528]
Graph Convolutional Networks (GCNs) have received significant attention and achieved state-of-the-art performance on recommendation tasks.
Existing GCN models tend to perform recursive aggregation over all related nodes, which incurs a severe computational burden.
We propose a single GCN layer that aggregates information from the neighbors filtered by DA similarity and then generates the node representations.
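A hedged sketch of the single-layer idea (cosine similarity stands in for the paper's DA similarity; all names are illustrative):

```python
# One-shot aggregation over similarity-filtered neighbors, instead of deep
# recursive message passing.
import torch

def single_layer_gcn(emb, adj, topk=10):
    # emb: (N, d) initial embeddings; adj: (N, N) binary interaction graph
    sim = torch.nn.functional.cosine_similarity(
        emb.unsqueeze(1), emb.unsqueeze(0), dim=-1)     # pairwise (N, N)
    sim = sim.masked_fill(adj == 0, float('-inf'))      # restrict to edges
    idx = sim.topk(topk, dim=1).indices                 # most similar hood
    return emb[idx].mean(dim=1)                         # single aggregation

emb = torch.randn(50, 16)
adj = (torch.rand(50, 50) > 0.5).float()   # dense enough that topk is valid
reps = single_layer_gcn(emb, adj)
```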
arXiv Detail & Related papers (2020-06-07T14:38:47Z)
- Feedback Graph Convolutional Network for Skeleton-based Action Recognition [38.782491442635205]
We propose a novel network, named Feedback Graph Convolutional Network (FGCN).
This is the first work that introduces the feedback mechanism into GCNs and action recognition.
It achieves the state-of-the-art performance on three datasets.
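The feedback mechanism can be pictured, in heavily simplified form, as a second forward pass in which high-level output modulates the early features (a hypothetical sketch; FGCN's coarse-to-fine design differs in detail):

```python
# Minimal feedback loop: later iterations see high-level information fed
# back into the early stage.
import torch
import torch.nn as nn

class FeedbackNet(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.low = nn.Linear(dim, dim)      # early (low-level) stage
        self.high = nn.Linear(dim, dim)     # late (high-level) stage
        self.feedback = nn.Linear(dim, dim)

    def forward(self, x, iterations=2):
        fb = torch.zeros_like(x)
        for _ in range(iterations):         # pass 2+ uses fed-back output
            h = torch.relu(self.low(x) + self.feedback(fb))
            fb = self.high(h)
        return fb

out = FeedbackNet(32)(torch.randn(4, 32))
```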
arXiv Detail & Related papers (2020-03-17T07:20:47Z)