Generative Action Description Prompts for Skeleton-based Action
Recognition
- URL: http://arxiv.org/abs/2208.05318v2
- Date: Wed, 6 Sep 2023 02:29:01 GMT
- Title: Generative Action Description Prompts for Skeleton-based Action
Recognition
- Authors: Wangmeng Xiang, Chao Li, Yuxuan Zhou, Biao Wang, Lei Zhang
- Abstract summary: We propose a Generative Action-description Prompts (GAP) approach for skeleton-based action recognition.
We employ a pre-trained large-scale language model as the knowledge engine to automatically generate text descriptions of body-part movements for actions.
Our proposed GAP method achieves noticeable improvements over various baseline models without extra cost at inference.
- Score: 15.38417530693649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Skeleton-based action recognition has recently received considerable
attention. Current approaches to skeleton-based action recognition are
typically formulated as one-hot classification tasks and do not fully exploit
the semantic relations between actions. For example, "make victory sign" and
"thumb up" are two actions of hand gestures, whose major difference lies in the
movement of the hands. This information is absent from the categorical one-hot
encoding of action classes but can be unveiled from the action description.
Therefore, utilizing action description in training could potentially benefit
representation learning. In this work, we propose a Generative
Action-description Prompts (GAP) approach for skeleton-based action
recognition. More specifically, we employ a pre-trained large-scale language
model as the knowledge engine to automatically generate text descriptions for
body-part movements of actions, and propose a multi-modal training scheme that
uses the text encoder to generate feature vectors for different body parts and
to supervise the skeleton encoder for action representation learning.
Experiments show that our proposed GAP method achieves noticeable improvements
over various baseline models without extra computation cost at inference. GAP
achieves new state-of-the-art results on popular skeleton-based action recognition
benchmarks, including NTU RGB+D, NTU RGB+D 120 and NW-UCLA. The source code is
available at https://github.com/MartinXM/GAP.
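To make the multi-modal training scheme concrete, here is a minimal sketch of a part-level skeleton-text alignment objective, assuming PyTorch and a CLIP-style text encoder. It is an illustrative InfoNCE-style loss under stated assumptions rather than the authors' exact implementation (see the GAP repository for that); the names part_text_alignment_loss, skeleton_feats and text_feats are hypothetical.

```python
# Illustrative sketch only: an InfoNCE-style alignment between per-part
# skeleton features and text features of generated part descriptions.
# Names and the exact objective are assumptions, not the official GAP code.
import torch
import torch.nn.functional as F

def part_text_alignment_loss(skeleton_feats, text_feats, temperature=0.07):
    """Align (B, P, D) per-part skeleton features with the (B, P, D)
    text-encoder features of their generated part descriptions."""
    B, P, D = skeleton_feats.shape
    s = F.normalize(skeleton_feats.reshape(B * P, D), dim=-1)
    t = F.normalize(text_feats.reshape(B * P, D), dim=-1)
    logits = (s @ t.T) / temperature                # (B*P, B*P) similarities
    targets = torch.arange(B * P, device=logits.device)
    # symmetric loss: skeleton -> text and text -> skeleton
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```

During training, such a term would be added to the standard one-hot classification loss; the text branch is dropped at inference, which is why GAP adds no extra computation cost at test time.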
Related papers
- Spatio-Temporal Context Prompting for Zero-Shot Action Detection [13.22912547389941]
We propose a method which can effectively leverage the rich knowledge of visual-language models to perform Person-Context Interaction.
To address the challenge of recognizing distinct actions by multiple people at the same timestamp, we design the Interest Token Spotting mechanism.
Our method achieves superior results compared to previous approaches and can be further extended to multi-action videos.
arXiv Detail & Related papers (2024-08-28T17:59:05Z) - An Information Compensation Framework for Zero-Shot Skeleton-based Action Recognition [49.45660055499103]
Zero-shot human skeleton-based action recognition aims to construct a model that can recognize actions outside the categories seen during training.
Previous research has focused on aligning sequences' visual and semantic spatial distributions.
We introduce a new loss function sampling method to obtain a tight and robust representation.
arXiv Detail & Related papers (2024-06-02T06:53:01Z) - Early Action Recognition with Action Prototypes [62.826125870298306]
We propose a novel model that learns a prototypical representation of the full action for each class.
We decompose the video into short clips, where a visual encoder extracts features from each clip independently.
Later, a decoder aggregates features from all the clips in an online fashion for the final class prediction.
arXiv Detail & Related papers (2023-12-11T18:31:13Z) - Multi-Semantic Fusion Model for Generalized Zero-Shot Skeleton-Based
Action Recognition [32.291333054680855]
Generalized zero-shot skeleton-based action recognition (GZSSAR) is a new and challenging problem in the computer vision community.
We propose a multi-semantic fusion (MSF) model for improving the performance of GZSSAR.
arXiv Detail & Related papers (2023-09-18T09:00:25Z) - Bridge-Prompt: Towards Ordinal Action Understanding in Instructional
Videos [92.18898962396042]
We propose a prompt-based framework, Bridge-Prompt, to model the semantics across adjacent actions.
We reformulate the individual action labels as integrated text prompts for supervision, which bridge the gap between individual action semantics.
Br-Prompt achieves state-of-the-art on multiple benchmarks.
arXiv Detail & Related papers (2022-03-26T15:52:27Z) - ActionCLIP: A New Paradigm for Video Action Recognition [14.961103794667341]
We provide a new perspective on action recognition by attaching importance to the semantic information of label texts.
We propose a new paradigm based on this multimodal learning framework for action recognition, which we dub "pre-train, prompt and fine-tune".
arXiv Detail & Related papers (2021-09-17T11:21:34Z) - All About Knowledge Graphs for Actions [82.39684757372075]
We aim at a better understanding of knowledge graphs (KGs) that can be utilized for zero-shot and few-shot action recognition.
We study three different construction mechanisms for KGs: action embeddings, action-object embeddings, and visual embeddings.
We present extensive analysis of the impact of different KGs on different experimental setups.
arXiv Detail & Related papers (2020-08-28T01:44:01Z) - Augmented Skeleton Based Contrastive Action Learning with Momentum LSTM
for Unsupervised Action Recognition [16.22360992454675]
Action recognition via 3D skeleton data has become an important topic in recent years.
In this paper, we propose, for the first time, a contrastive action learning paradigm named AS-CAL.
Our approach typically improves over existing hand-crafted methods by 10-50% in top-1 accuracy.
arXiv Detail & Related papers (2020-08-01T06:37:57Z) - Intra- and Inter-Action Understanding via Temporal Action Parsing [118.32912239230272]
We construct a new dataset of sports videos with manual annotations of sub-actions, and conduct a study of temporal action parsing on top of it.
Our study shows that a sport activity usually consists of multiple sub-actions and that the awareness of such temporal structures is beneficial to action recognition.
We also investigate a number of temporal parsing methods, and thereon devise an improved method that is capable of mining sub-actions from training data without knowing their labels.
arXiv Detail & Related papers (2020-05-20T17:45:18Z) - FineGym: A Hierarchical Video Dataset for Fine-grained Action
Understanding [118.32912239230272]
FineGym is a new action recognition dataset built on top of gymnastic videos.
It provides temporal annotations at both action and sub-action levels with a three-level semantic hierarchy.
This new level of granularity presents significant challenges for action recognition.
arXiv Detail & Related papers (2020-04-14T17:55:21Z) - Learning Spatiotemporal Features via Video and Text Pair Discrimination [30.64670449131973]
The cross-modal pair (CPD) framework captures the correlation between a video and its associated text.
We train our CPD models on both standard video dataset (Kinetics-210k) and uncurated web video dataset (-300k) to demonstrate its effectiveness.
arXiv Detail & Related papers (2020-01-16T08:28:57Z)