CycleACR: Cycle Modeling of Actor-Context Relations for Video Action Detection
- URL: http://arxiv.org/abs/2303.16118v1
- Date: Tue, 28 Mar 2023 16:40:47 GMT
- Title: CycleACR: Cycle Modeling of Actor-Context Relations for Video Action Detection
- Authors: Lei Chen, Zhan Tong, Yibing Song, Gangshan Wu, Limin Wang
- Abstract summary: We propose to select actor-related scene context, rather than directly leveraging the raw video scene, to improve relation modeling.
We develop a Cycle Actor-Context Relation network (CycleACR) where there is a symmetric graph that models the actor and context relations in a bidirectional form.
Compared to existing designs that focus on C2A-E, our CycleACR introduces A2C-R for more effective relation modeling.
- Score: 67.90338302559672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The relation modeling between actors and scene context advances video action
detection where the correlation of multiple actors makes their action
recognition challenging. Existing studies model each actor and scene relation
to improve action recognition. However, the scene variations and background
interference limit the effectiveness of this relation modeling. In this paper,
we propose to select actor-related scene context, rather than directly leveraging
the raw video scene, to improve relation modeling. We develop a Cycle
Actor-Context Relation network (CycleACR) where there is a symmetric graph that
models the actor and context relations in a bidirectional form. Our CycleACR
consists of the Actor-to-Context Reorganization (A2C-R) that collects actor
features for context feature reorganizations, and the Context-to-Actor
Enhancement (C2A-E) that dynamically utilizes reorganized context features for
actor feature enhancement. Compared to existing designs that focus on C2A-E,
our CycleACR introduces A2C-R for more effective relation modeling. This
modeling advances our CycleACR to achieve state-of-the-art performance on two
popular action detection datasets (i.e., AVA and UCF101-24). We also provide
ablation studies and visualizations to show how our cycle actor-context
relation modeling improves video action detection. Code is available at
https://github.com/MCG-NJU/CycleACR.
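The abstract's cycle design maps naturally onto two cross-attention steps. Below is a minimal PyTorch sketch of that reading: the module names echo the paper's A2C-R/C2A-E terminology, but the internals (multi-head cross-attention, residual connections, dimensions) are assumptions rather than the authors' released implementation, which lives in the repository above.

```python
import torch
import torch.nn as nn

class CycleACRSketch(nn.Module):
    """Hypothetical sketch of the CycleACR cycle: A2C-R reorganizes context
    features under the guidance of actor features, then C2A-E enhances actor
    features with the reorganized context. Internals are assumptions."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # A2C-R: context tokens attend to actor tokens (query=context)
        self.a2c_r = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # C2A-E: actor tokens attend to the reorganized context (query=actors)
        self.c2a_e = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_ctx = nn.LayerNorm(dim)
        self.norm_actor = nn.LayerNorm(dim)

    def forward(self, actors: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # actors:  (B, N_actors, dim)  RoI-pooled actor features
        # context: (B, N_ctx, dim)     flattened spatio-temporal scene features
        # A2C-R: reorganize context so it becomes actor-relevant
        reorganized, _ = self.a2c_r(query=context, key=actors, value=actors)
        reorganized = self.norm_ctx(context + reorganized)
        # C2A-E: enhance actors with the reorganized, actor-relevant context
        enhanced, _ = self.c2a_e(query=actors, key=reorganized, value=reorganized)
        return self.norm_actor(actors + enhanced)

# Toy usage with random features
actors = torch.randn(2, 5, 256)    # 5 actor proposals
context = torch.randn(2, 49, 256)  # 7x7 scene feature map, flattened
print(CycleACRSketch()(actors, context).shape)  # torch.Size([2, 5, 256])
```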
Related papers
- JARViS: Detecting Actions in Video Using Unified Actor-Scene Context Relation Modeling [8.463489896549161]
Two-stage Video Action Detection (VAD) is a formidable task that involves the localization and classification of actions within the spatial and temporal dimensions of a video clip.
We propose a two-stage VAD framework called Joint Actor-scene context Relation modeling (JARViS).
JARViS consolidates cross-modal action semantics distributed globally across spatial and temporal dimensions using Transformer attention.
arXiv Detail & Related papers (2024-08-07T08:08:08Z)
- MRSN: Multi-Relation Support Network for Video Action Detection [15.82531313330869]
Action detection is a challenging video understanding task that requires modeling relations.
We propose a novel network called Multi-Relation Support Network (MRSN).
Our experiments demonstrate that modeling relations separately and performing relation-level interactions can achieve state-of-the-art results.
arXiv Detail & Related papers (2023-04-24T10:15:31Z)
- Graph Convolutional Module for Temporal Action Localization in Videos [142.5947904572949]
We claim that the relations between action units play an important role in action localization.
A more powerful action detector should not only capture the local content of each action unit but also allow a wider field of view on the context related to it.
We propose a general graph convolutional module (GCM) that can be easily plugged into existing action localization methods.
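As a rough illustration of how such a plug-in module could operate, here is a generic PyTorch sketch: proposal features are aggregated over a graph whose edges connect temporally overlapping proposals. The IoU threshold, dimensions, and normalization are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def temporal_iou_adjacency(segments: torch.Tensor, thresh: float = 0.1) -> torch.Tensor:
    """Build a proposal graph: connect proposals whose temporal IoU exceeds a
    threshold (an illustrative rule). segments: (N, 2) of (start, end)."""
    start = torch.max(segments[:, None, 0], segments[None, :, 0])
    end = torch.min(segments[:, None, 1], segments[None, :, 1])
    inter = (end - start).clamp(min=0)
    lengths = segments[:, 1] - segments[:, 0]
    union = lengths[:, None] + lengths[None, :] - inter
    adj = (inter / union.clamp(min=1e-6) > thresh).float()
    return adj / adj.sum(dim=1, keepdim=True)  # row-normalize (diagonal is 1)

class GraphConvModule(nn.Module):
    """One graph-convolution layer over proposal features: aggregate
    neighbors, then transform. A generic sketch of the plug-in idea."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # feats: (N, dim) proposal features; adj: (N, N) normalized adjacency
        return feats + self.act(self.fc(adj @ feats))  # residual update

segments = torch.tensor([[0.0, 3.0], [2.0, 5.0], [10.0, 12.0]])
feats = torch.randn(3, 256)
out = GraphConvModule()(feats, temporal_iou_adjacency(segments))
print(out.shape)  # torch.Size([3, 256])
```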
arXiv Detail & Related papers (2021-12-01T06:36:59Z)
- Spot What Matters: Learning Context Using Graph Convolutional Networks for Weakly-Supervised Action Detection [0.0]
We introduce an architecture based on self-attention and Graph Convolutional Networks to improve human action detection in video.
Our model aids explainability by visualizing the learned context as an attention map, even for actions and objects unseen during training.
Experimental results show that our contextualized approach outperforms a baseline action detection approach by more than 2 points in Video-mAP.
arXiv Detail & Related papers (2021-07-28T21:37:18Z)
- Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message passing graph neural network that explicitly models spatio-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z)
- Efficient Spatialtemporal Context Modeling for Action Recognition [42.30158166919919]
We propose a recurrent 3D criss-cross attention (RCCA-3D) module to model dense long-range contextual information in videos for action recognition.
We model the relationship between points on the same line along the horizontal, vertical, and depth directions at each time step, which forms a 3D criss-cross structure.
Compared with the non-local method, the proposed RCCA-3D module reduces the number of parameters and FLOPs by 25% and 11%, respectively, for video context modeling.
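To make the criss-cross idea concrete, here is a simplified PyTorch sketch of axis-restricted attention: each position attends only to positions sharing its line along one of the three axes, and the three axis passes are summed. The real RCCA-3D module adds learned projections and a recurrent second pass that this sketch omits.

```python
import torch

def axis_attention(x: torch.Tensor, axis: int) -> torch.Tensor:
    """Self-attention restricted to one axis of a channels-last video tensor
    x: (B, T, H, W, C). Each position attends only to positions on its own
    line along `axis` (1=time/depth, 2=vertical, 3=horizontal)."""
    x_moved = x.movedim(axis, -2)          # (..., L, C): put the line length L next to C
    lead = x_moved.shape[:-2]
    L, C = x_moved.shape[-2], x_moved.shape[-1]
    seq = x_moved.reshape(-1, L, C)        # treat every line as a short sequence
    attn = torch.softmax(seq @ seq.transpose(1, 2) / C ** 0.5, dim=-1)
    out = attn @ seq                       # (num_lines, L, C)
    return out.reshape(*lead, L, C).movedim(-2, axis)

def criss_cross_3d(x: torch.Tensor) -> torch.Tensor:
    """Sum of attention along the depth (time), vertical, and horizontal
    lines -- a simplified stand-in for the RCCA-3D idea, not the paper's
    exact module (no projections or recurrence here)."""
    return x + axis_attention(x, 1) + axis_attention(x, 2) + axis_attention(x, 3)

x = torch.randn(2, 4, 7, 7, 64)            # (B, T, H, W, C) video features
print(criss_cross_3d(x).shape)             # torch.Size([2, 4, 7, 7, 64])
```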
arXiv Detail & Related papers (2021-03-20T14:48:12Z)
- Learning Asynchronous and Sparse Human-Object Interaction in Videos [56.73059840294019]
Asynchronous-Sparse Interaction Graph Networks (ASSIGN) is able to automatically detect the structure of interaction events associated with entities in a video scene.
ASSIGN is tested on human-object interaction recognition and shows superior performance in segmenting and labeling of human sub-activities and object affordances from raw videos.
arXiv Detail & Related papers (2021-03-03T23:43:55Z)
- Context-Aware RCNN: A Baseline for Action Detection in Videos [66.16989365280938]
We first empirically find that recognition accuracy is highly correlated with the size of an actor's bounding box.
We revisit RCNN for actor-centric action recognition via cropping and resizing image patches around actors.
We found that expanding actor bounding boxes slightly and fusing the context features can further boost the performance.
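A minimal sketch of that recipe, assuming standard torchvision RoIAlign: enlarge each actor box around its center by a scale factor (the 1.3 below is an illustrative value, not the paper's setting), then crop and resize the patch from the feature map.

```python
import torch
import torchvision.ops as ops

def expand_boxes(boxes: torch.Tensor, scale: float, h: int, w: int) -> torch.Tensor:
    """Enlarge actor boxes around their centers to pull in surrounding scene
    context, then clamp to the image. boxes: (N, 4) as (x1, y1, x2, y2)."""
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    bw = (boxes[:, 2] - boxes[:, 0]) * scale
    bh = (boxes[:, 3] - boxes[:, 1]) * scale
    out = torch.stack([cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2], dim=1)
    out[:, 0::2] = out[:, 0::2].clamp(0, w)
    out[:, 1::2] = out[:, 1::2].clamp(0, h)
    return out

# Crop-and-resize an actor-centric patch from a feature map with RoIAlign
feats = torch.randn(1, 256, 64, 64)                 # (B, C, H, W) backbone features
boxes = torch.tensor([[10.0, 12.0, 30.0, 40.0]])    # one actor box in feature coords
rois = torch.cat([torch.zeros(1, 1), expand_boxes(boxes, 1.3, 64, 64)], dim=1)
patch = ops.roi_align(feats, rois, output_size=(7, 7), spatial_scale=1.0)
print(patch.shape)  # torch.Size([1, 256, 7, 7])
```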
arXiv Detail & Related papers (2020-07-20T03:11:48Z)
- Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization [47.61419011906561]
ACAR-Net builds upon a novel High-order Relation Reasoning Operator to enable indirect relation reasoning for spatio-temporal action localization.
Our method ranks first in the AVA-Kinetics action localization task of the ActivityNet Challenge 2020.
arXiv Detail & Related papers (2020-06-14T18:51:49Z)