Video Is Graph: Structured Graph Module for Video Action Recognition
- URL: http://arxiv.org/abs/2110.05904v1
- Date: Tue, 12 Oct 2021 11:27:29 GMT
- Title: Video Is Graph: Structured Graph Module for Video Action Recognition
- Authors: Rong-Chang Li, Tianyang Xu, Xiao-Jun Wu, and Josef Kittler
- Abstract summary: We transform a video sequence into a graph to obtain direct long-term dependencies among temporal frames.
In particular, SGM divides the neighbors of each node into several temporal regions so as to extract global structural information.
The reported performance and analysis demonstrate that SGM can achieve outstanding precision with lower computational complexity.
- Score: 34.918667614077805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of action recognition, video clips are always treated as ordered
frames for subsequent processing. To achieve spatio-temporal perception,
existing approaches propose to embed adjacent temporal interaction in the
convolutional layer. The global semantic information can therefore be obtained
by stacking multiple local layers hierarchically. However, such global temporal
accumulation can only reflect the high-level semantics in deep layers,
neglecting the potential low-level holistic clues in shallow layers. In this
paper, we first propose to transform a video sequence into a graph to obtain
direct long-term dependencies among temporal frames. To preserve sequential
information during transformation, we devise a structured graph module (SGM),
achieving fine-grained temporal interactions throughout the entire network. In
particular, SGM divides the neighbors of each node into several temporal
regions so as to extract global structural information with diverse sequential
flows. Extensive experiments are performed on standard benchmark datasets,
i.e., Something-Something V1 & V2, Diving48, Kinetics-400, UCF101, and HMDB51.
The reported performance and analysis demonstrate that SGM can achieve
outstanding precision with lower computational complexity.
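The core idea described above, partitioning each frame-node's neighbours into temporal regions and aggregating each region separately so that sequential order is preserved, can be illustrated roughly as follows. This is a toy NumPy sketch under assumed details, not the authors' SGM implementation: here each node's neighbours are split into just three regions (past, self, future), each mean-pooled and mixed with its own weight matrix.

```python
import numpy as np

def structured_graph_aggregate(frames, w_past, w_self, w_future):
    """Toy structured-graph aggregation over T frame features.

    frames: (T, C) array of per-frame features.
    Each node (frame) treats all earlier frames, itself, and all later
    frames as three temporal regions, mean-pools each region, and mixes
    the three summaries with region-specific weight matrices.
    """
    T, C = frames.shape
    out = np.zeros_like(frames)
    for t in range(T):
        past = frames[:t].mean(axis=0) if t > 0 else np.zeros(C)
        future = frames[t + 1:].mean(axis=0) if t < T - 1 else np.zeros(C)
        # Region-wise linear maps preserve the sequential ordering
        # (past vs. future) that a plain fully-connected graph would lose.
        out[t] = past @ w_past + frames[t] @ w_self + future @ w_future
    return out

rng = np.random.default_rng(0)
T, C = 8, 16
x = structured_graph_aggregate(
    rng.standard_normal((T, C)),
    rng.standard_normal((C, C)),
    rng.standard_normal((C, C)),
    rng.standard_normal((C, C)),
)
print(x.shape)  # (8, 16)
```

Because every node aggregates over all frames directly, long-term dependencies are one hop away rather than accumulated through stacked local layers, which is the motivation stated in the abstract.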
Related papers
- Temporal-aware Hierarchical Mask Classification for Video Semantic Segmentation [62.275143240798236]
Video semantic segmentation datasets have limited categories per video.
Less than 10% of queries could be matched to receive meaningful gradient updates during VSS training.
Our method achieves state-of-the-art performance on the latest challenging VSS benchmark VSPW without bells and whistles.
arXiv Detail & Related papers (2023-09-14T20:31:06Z)
- Hierarchical Spatio-Temporal Representation Learning for Gait Recognition [6.877671230651998]
Gait recognition is a biometric technique that identifies individuals by their unique walking styles.
We propose a hierarchical spatio-temporal representation learning framework for extracting gait features from coarse to fine.
Our method outperforms the state-of-the-art while maintaining a reasonable balance between model accuracy and complexity.
arXiv Detail & Related papers (2023-07-19T09:30:00Z)
- Temporal Aggregation and Propagation Graph Neural Networks for Dynamic Representation [67.26422477327179]
Temporal graphs exhibit dynamic interactions between nodes over continuous time.
We propose a novel method of temporal graph convolution with the whole neighborhood.
Our proposed TAP-GNN outperforms existing temporal graph methods by a large margin in terms of both predictive performance and online inference latency.
arXiv Detail & Related papers (2023-04-15T08:17:18Z)
- Video-SwinUNet: Spatio-temporal Deep Learning Framework for VFSS Instance Segmentation [10.789826145990016]
This paper presents a deep learning framework for medical video segmentation.
Our framework explicitly extracts features from neighbouring frames across the temporal dimension.
It incorporates them with a temporal feature blender, which then tokenises the high-level temporal feature to form a strong global feature encoded via a Swin Transformer.
arXiv Detail & Related papers (2023-02-22T12:09:39Z)
- Self-Supervised Temporal Graph learning with Temporal and Structural Intensity Alignment [53.72873672076391]
Temporal graph learning aims to generate high-quality representations for graph-based tasks with dynamic information.
We propose a self-supervised method called S2T for temporal graph learning, which extracts both temporal and structural information.
S2T achieves at most 10.13% performance improvement compared with the state-of-the-art competitors on several datasets.
arXiv Detail & Related papers (2023-02-15T06:36:04Z)
- Multi-Scale Spatial Temporal Graph Convolutional Network for Skeleton-Based Action Recognition [13.15374205970988]
We present a multi-scale spatial graph convolution (MS-GC) module and a multi-scale temporal graph convolution (MT-GC) module.
The MS-GC and MT-GC modules decompose the corresponding local graph convolution into a set of sub-graph convolutions, forming a hierarchical residual architecture.
We propose a multi-scale spatial temporal graph convolutional network (MST-GCN), which stacks multiple blocks to learn effective motion representations for action recognition.
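The decomposition of a local graph convolution into chained sub-graph convolutions with a hierarchical residual can be sketched roughly as follows. This is a toy NumPy illustration, not the paper's MS-GC module; the channel-group split and the Res2Net-style residual chaining are assumptions about the general technique.

```python
import numpy as np

def multiscale_graph_conv(x, adj, num_groups=4):
    """Toy hierarchical-residual graph convolution.

    x: (N, C) node features, adj: (N, N) normalized adjacency.
    Channels are split into groups; each group after the first receives
    the previous group's output before its own graph convolution, so
    later groups effectively see progressively larger neighbourhoods
    (adj, adj^2, ...) without explicitly forming higher powers of adj.
    """
    groups = np.split(x, num_groups, axis=1)  # requires C % num_groups == 0
    outs = [groups[0]]  # first group passes through untouched
    for g in groups[1:]:
        outs.append(adj @ (g + outs[-1]))
    return np.concatenate(outs, axis=1)

# Ring-graph example: 6 nodes, 8 channels, 4 channel groups.
N, C = 6, 8
adj = 0.5 * np.roll(np.eye(N), 1, axis=1) + 0.5 * np.roll(np.eye(N), -1, axis=1)
x = np.arange(N * C, dtype=float).reshape(N, C)
y = multiscale_graph_conv(x, adj)
print(y.shape)  # (6, 8)
```

The residual chaining is what makes the module multi-scale: a single layer mixes information over several hop distances, instead of needing a deeper stack.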
arXiv Detail & Related papers (2022-06-27T03:17:33Z)
- Exploring Intra- and Inter-Video Relation for Surgical Semantic Scene Segmentation [58.74791043631219]
We propose a novel framework STswinCL that explores the complementary intra- and inter-video relations to boost segmentation performance.
We extensively validate our approach on two public surgical video benchmarks, including EndoVis18 Challenge and CaDIS dataset.
Experimental results demonstrate the promising performance of our method, which consistently exceeds previous state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-29T05:52:23Z)
- Improving Video Instance Segmentation via Temporal Pyramid Routing [61.10753640148878]
Video Instance Segmentation (VIS) is a new and inherently multi-task problem, which aims to detect, segment and track each instance in a video sequence.
We propose a Temporal Pyramid Routing (TPR) strategy to conditionally align and conduct pixel-level aggregation from a feature pyramid pair of two adjacent frames.
Our approach is a plug-and-play module and can be easily applied to existing instance segmentation methods.
arXiv Detail & Related papers (2021-07-28T03:57:12Z)
- Reconstructive Sequence-Graph Network for Video Summarization [107.0328985865372]
Exploiting the inner-shot and inter-shot dependencies is essential for key-shot based video summarization.
We propose a Reconstructive Sequence-Graph Network (RSGN) to encode the frames and shots as sequence and graph hierarchically.
A reconstructor is developed to reward the summary generator, so that the generator can be optimized in an unsupervised manner.
arXiv Detail & Related papers (2021-05-10T01:47:55Z)
- Learnable Dynamic Temporal Pooling for Time Series Classification [22.931314501371805]
We present a dynamic temporal pooling (DTP) technique that reduces the temporal size of hidden representations by aggregating the features at the segment-level.
For the partition of a whole series into multiple segments, we utilize dynamic time warping (DTW) to align each time point in a temporal order with the prototypical features of the segments.
The DTP layer combined with a fully-connected layer helps to extract further discriminative features considering their temporal position within an input time series.
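The segment-level pooling via a monotonic DTW-style alignment might be sketched as follows. This is a toy NumPy illustration under assumed details, not the paper's DTP layer; it assumes at least as many time steps as segments, and the prototype features would normally be learned rather than given.

```python
import numpy as np

def dynamic_temporal_pooling(series, prototypes):
    """Toy DTP: align each time step to one of K ordered segment
    prototypes via a DTW-style dynamic program, then mean-pool the
    time steps assigned to each segment.

    series: (T, C), prototypes: (K, C), with T >= K.
    The alignment is monotonic: segment indices never decrease over time.
    """
    T, _ = series.shape
    K = prototypes.shape[0]
    # Squared-distance table between every time step and every prototype.
    dist = ((series[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (T, K)
    cost = np.full((T, K), np.inf)
    cost[0, 0] = dist[0, 0]  # paths must start at the first segment
    for t in range(1, T):
        for k in range(K):
            prev = cost[t - 1, k]
            if k > 0:
                prev = min(prev, cost[t - 1, k - 1])
            cost[t, k] = dist[t, k] + prev
    # Backtrack the optimal monotonic segment assignment.
    assign = np.empty(T, dtype=int)
    k = K - 1
    assign[-1] = k
    for t in range(T - 2, -1, -1):
        if k > 0 and cost[t, k - 1] <= cost[t, k]:
            k -= 1
        assign[t] = k
    pooled = np.stack([series[assign == j].mean(axis=0) for j in range(K)])
    return pooled, assign

rng = np.random.default_rng(1)
pooled, assign = dynamic_temporal_pooling(
    rng.standard_normal((10, 4)), rng.standard_normal((3, 4))
)
print(pooled.shape)  # (3, 4)
```

Because segment boundaries follow the DTW alignment rather than a fixed grid, the pooled features keep their relative temporal position, which is what lets the following fully-connected layer exploit where in the series each segment occurred.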
arXiv Detail & Related papers (2021-04-02T08:58:44Z)
- Hierarchical Attention Network for Action Segmentation [45.19890687786009]
The temporal segmentation of events is an essential task and a precursor for the automatic recognition of human actions in the video.
We propose a complete end-to-end supervised learning approach that can better learn relationships between actions over time.
We evaluate our system on challenging public benchmark datasets, including the MERL Shopping, 50 Salads, and Georgia Tech Egocentric datasets.
arXiv Detail & Related papers (2020-05-07T02:39:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.