Efficient Video Action Detection with Token Dropout and Context
Refinement
- URL: http://arxiv.org/abs/2304.08451v3
- Date: Mon, 28 Aug 2023 10:22:23 GMT
- Title: Efficient Video Action Detection with Token Dropout and Context
Refinement
- Authors: Lei Chen, Zhan Tong, Yibing Song, Gangshan Wu, Limin Wang
- Abstract summary: We propose an end-to-end framework for efficient video action detection (EVAD) based on vanilla vision transformers (ViTs).
First, in a video clip, we maintain all tokens from its keyframe, preserve tokens relevant to actor motions from other frames, and drop the rest.
Second, we refine scene context by leveraging the tokens that remain after dropout to better recognize actor identities.
- Score: 67.10895416008911
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Streaming video clips with large-scale video tokens impede vision
transformers (ViTs) for efficient recognition, especially in video action
detection where sufficient spatiotemporal representations are required for
precise actor identification. In this work, we propose an end-to-end framework
for efficient video action detection (EVAD) based on vanilla ViTs. Our EVAD
consists of two specialized designs for video action detection. First, we
propose a spatiotemporal token dropout from a keyframe-centric perspective. In
a video clip, we maintain all tokens from its keyframe, preserve tokens
relevant to actor motions from other frames, and drop out the remaining tokens
in this clip. Second, we refine scene context by leveraging remaining tokens
for better recognizing actor identities. The region of interest (RoI) in our
action detector is expanded into the temporal domain. The captured spatiotemporal
actor identity representations are refined via scene context in a decoder with
the attention mechanism. These two designs make our EVAD efficient while
maintaining accuracy, which is validated on three benchmark datasets (i.e.,
AVA, UCF101-24, JHMDB). Compared to the vanilla ViT backbone, our EVAD reduces
the overall GFLOPs by 43% and improves real-time inference speed by 40% with no
performance degradation. Moreover, even at similar computational costs, our
EVAD can improve the performance by 1.1 mAP with higher resolution inputs. Code
is available at https://github.com/MCG-NJU/EVAD.
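As a rough illustration of the keyframe-centric token dropout described above, the sketch below keeps every token of the keyframe and only the top-scoring tokens of the other frames. The tensor shapes, the similarity-based relevance score, and the keep ratio are assumptions made for this example; the paper's actual dropout is applied inside the ViT backbone with its own relevance criterion.

```python
import torch

def keyframe_token_dropout(tokens, keyframe_idx, keep_ratio=0.5):
    """Keep all keyframe tokens; keep only the top-scoring tokens elsewhere.

    tokens:       (B, T, N, C) patch tokens for a clip of T frames.
    keyframe_idx: index of the keyframe whose tokens are always kept.
    keep_ratio:   fraction of tokens preserved in every non-keyframe frame.
    """
    B, T, N, C = tokens.shape
    key_tokens = tokens[:, keyframe_idx]                    # (B, N, C), kept in full
    k = max(1, int(keep_ratio * N))

    kept = [key_tokens]
    for t in range(T):
        if t == keyframe_idx:
            continue
        frame = tokens[:, t]                                # (B, N, C)
        # Similarity to keyframe tokens stands in for "relevance to actor motion".
        scores = torch.einsum('bnc,bmc->bnm', frame, key_tokens).mean(-1)  # (B, N)
        idx = scores.topk(k, dim=1).indices                 # (B, k)
        kept.append(torch.gather(frame, 1, idx.unsqueeze(-1).expand(-1, -1, C)))

    return torch.cat(kept, dim=1)                           # (B, N + (T-1)*k, C)

# Example: 8-frame clip, 196 tokens per frame, keyframe in the middle.
clip = torch.randn(2, 8, 196, 384)
out = keyframe_token_dropout(clip, keyframe_idx=4, keep_ratio=0.5)
print(out.shape)   # torch.Size([2, 882, 384]); 196 + 7 * 98 tokens survive
```

In the full model, the surviving tokens would be passed through the remaining transformer blocks and to the decoder that refines actor identity features with scene context, as described in the abstract.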
Related papers
- SITAR: Semi-supervised Image Transformer for Action Recognition [20.609596080624662]
This paper addresses video action recognition in a semi-supervised setting by leveraging only a handful of labeled videos.
We capitalize on the vast pool of unlabeled samples and employ contrastive learning on the encoded super images.
Our method demonstrates superior performance compared to existing state-of-the-art approaches for semi-supervised action recognition.
arXiv Detail & Related papers (2024-09-04T17:49:54Z)
- Hourglass Tokenizer for Efficient Transformer-Based 3D Human Pose Estimation [73.31524865643709]
We present a plug-and-play pruning-and-recovering framework, called Hourglass Tokenizer (HoT), for efficient transformer-based 3D pose estimation from videos.
Our HoT begins by pruning pose tokens of redundant frames and ends by recovering full-length tokens, resulting in only a few pose tokens in the intermediate transformer blocks.
Our method can achieve both high efficiency and estimation accuracy compared to the original VPT models.
arXiv Detail & Related papers (2023-11-20T18:59:51Z)
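A minimal sketch of the prune-then-recover idea summarized in the Hourglass Tokenizer entry above, assuming uniform frame selection and linear interpolation for recovery; the paper's actual pruning and recovering modules are more involved.

```python
import torch
import torch.nn.functional as F

def prune_frames(pose_tokens, n_keep):
    """pose_tokens: (B, T, C), one pose token per frame; keep n_keep frames."""
    T = pose_tokens.shape[1]
    idx = torch.linspace(0, T - 1, n_keep).long()   # uniform frame selection
    return pose_tokens[:, idx], T

def recover_frames(pruned, full_len):
    """(B, t, C) -> (B, full_len, C) by linear interpolation along time."""
    x = pruned.transpose(1, 2)                      # (B, C, t)
    x = F.interpolate(x, size=full_len, mode='linear', align_corners=True)
    return x.transpose(1, 2)

# Example: a 243-frame pose sequence pruned to 27 tokens for the heavy
# transformer blocks, then recovered to full length for per-frame outputs.
seq = torch.randn(1, 243, 512)
pruned, T = prune_frames(seq, n_keep=27)
processed = pruned                                  # stand-in for the transformer blocks
recovered = recover_frames(processed, T)
print(pruned.shape, recovered.shape)                # (1, 27, 512) (1, 243, 512)
```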
- AiluRus: A Scalable ViT Framework for Dense Prediction [95.1313839257891]
Vision transformers (ViTs) have emerged as a prevalent architecture for vision tasks owing to their impressive performance.
We propose to apply adaptive resolution for different regions in the image according to their importance.
We evaluate our proposed method on three different datasets and observe promising performance.
arXiv Detail & Related papers (2023-11-02T12:48:43Z)
- How can objects help action recognition? [74.29564964727813]
We investigate how we can use knowledge of objects to design better video models.
First, we propose an object-guided token sampling strategy that enables us to retain a small fraction of the input tokens.
Second, we propose an object-aware attention module that enriches our feature representation with object information.
arXiv Detail & Related papers (2023-06-20T17:56:16Z)
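The object-guided token sampling mentioned in the entry above can be pictured with the following sketch: patch tokens whose centers fall inside detected object boxes are kept, together with a small random share of background tokens. The box source, the center-in-box rule, and the background ratio are illustrative assumptions rather than the paper's exact strategy.

```python
import torch

def object_guided_sampling(tokens, patch_centers, boxes, bg_keep_ratio=0.1):
    """
    tokens:        (N, C) patch tokens of one frame.
    patch_centers: (N, 2) (x, y) center of each patch in pixel coordinates.
    boxes:         (M, 4) object boxes as (x1, y1, x2, y2).
    """
    x, y = patch_centers[:, 0], patch_centers[:, 1]
    inside = ((x[:, None] >= boxes[:, 0]) & (x[:, None] <= boxes[:, 2]) &
              (y[:, None] >= boxes[:, 1]) & (y[:, None] <= boxes[:, 3])).any(dim=1)

    fg_idx = inside.nonzero(as_tuple=True)[0]                 # tokens inside a box
    bg_idx = (~inside).nonzero(as_tuple=True)[0]              # background tokens
    n_bg = max(1, int(bg_keep_ratio * bg_idx.numel()))
    bg_idx = bg_idx[torch.randperm(bg_idx.numel())[:n_bg]]    # random background subset

    keep = torch.cat([fg_idx, bg_idx]).sort().values
    return tokens[keep], keep

# Example: 196 patch tokens of a 224x224 frame with a 16x16 patch grid
# and one hypothetical person box.
ys, xs = torch.meshgrid(torch.arange(14), torch.arange(14), indexing='ij')
centers = torch.stack([xs, ys], dim=-1).reshape(-1, 2).float() * 16 + 8
tokens = torch.randn(196, 384)
boxes = torch.tensor([[32., 48., 120., 200.]])
kept, idx = object_guided_sampling(tokens, centers, boxes)
print(kept.shape)
```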
- DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network, dubbed DOAD, to improve efficiency for spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z)
- MAR: Masked Autoencoders for Efficient Action Recognition [46.10824456139004]
Vision Transformers (ViT) can complement the missing context of a video given only limited visible content.
MAR reduces redundancy by discarding a proportion of patches and operating on only part of the videos.
MAR consistently outperforms existing ViT models by a notable margin.
arXiv Detail & Related papers (2022-07-24T04:27:36Z)
- Efficient Video Transformers with Spatial-Temporal Token Selection [68.27784654734396]
We present STTS, a token selection framework that dynamically selects a few informative tokens in both temporal and spatial dimensions conditioned on input video samples.
Our framework achieves similar results while requiring 20% less computation.
arXiv Detail & Related papers (2021-11-23T00:35:58Z)
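In the spirit of the STTS summary above, the sketch below ranks frames with a lightweight scorer, keeps the top ones, and then keeps the top-scoring tokens within each kept frame. The linear scorers, the hard top-k, and the keep counts are assumptions for illustration; the paper's own selection modules are designed to be trained end-to-end.

```python
import torch
import torch.nn as nn

class TokenSelector(nn.Module):
    def __init__(self, dim, n_frames_keep=4, n_tokens_keep=64):
        super().__init__()
        self.frame_scorer = nn.Linear(dim, 1)   # scores each frame from its mean token
        self.token_scorer = nn.Linear(dim, 1)   # scores each token within a frame
        self.n_frames_keep = n_frames_keep
        self.n_tokens_keep = n_tokens_keep

    def forward(self, x):
        """x: (B, T, N, C) video tokens -> (B, n_frames_keep, n_tokens_keep, C)."""
        B, T, N, C = x.shape
        # Temporal selection: keep the highest-scoring frames.
        f_scores = self.frame_scorer(x.mean(dim=2)).squeeze(-1)           # (B, T)
        f_idx = f_scores.topk(self.n_frames_keep, dim=1).indices          # (B, Tk)
        x = torch.gather(x, 1, f_idx[:, :, None, None].expand(-1, -1, N, C))
        # Spatial selection: keep the highest-scoring tokens per kept frame.
        t_scores = self.token_scorer(x).squeeze(-1)                       # (B, Tk, N)
        t_idx = t_scores.topk(self.n_tokens_keep, dim=2).indices          # (B, Tk, Nk)
        return torch.gather(x, 2, t_idx[..., None].expand(-1, -1, -1, C))

sel = TokenSelector(dim=384)
out = sel(torch.randn(2, 8, 196, 384))
print(out.shape)   # torch.Size([2, 4, 64, 384])
```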
This list is automatically generated from the titles and abstracts of the papers on this site.