CTM: Collaborative Temporal Modeling for Action Recognition
- URL: http://arxiv.org/abs/2002.03152v1
- Date: Sat, 8 Feb 2020 12:14:02 GMT
- Title: CTM: Collaborative Temporal Modeling for Action Recognition
- Authors: Qian Liu, Tao Wang, Jie Liu, Yang Guan, Qi Bu, Longfei Yang
- Abstract summary: We propose a Collaborative Temporal Modeling (CTM) block to learn temporal information for action recognition.
CTM includes two collaborative paths: a spatial-aware temporal modeling path, and a spatial-unaware temporal modeling path.
Experiments on several popular action recognition datasets demonstrate that CTM blocks bring performance improvements to 2D CNN baselines.
- Score: 11.467061749436356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of digital multimedia, video understanding has
become an important field. For action recognition, temporal dimension plays an
important role, and this is quite different from image recognition. In order to
learn powerful video features, we propose a Collaborative Temporal Modeling
(CTM) block (Figure 1) to learn temporal information for action recognition.
Besides a parameter-free identity shortcut, CTM, as a separate temporal modeling
block, includes two collaborative paths: a spatial-aware temporal modeling path,
built with our proposed Temporal-Channel Convolution Module (TCCM) that uses
unshared parameters for each spatial position (H*W), and a spatial-unaware
temporal modeling path. CTM blocks can be seamlessly inserted into many popular
networks to form CTM Networks, bringing the capability of learning temporal
information to 2D CNN backbone networks, which capture only spatial information.
Experiments on several popular action recognition datasets demonstrate that CTM
blocks bring performance improvements to 2D CNN baselines, and our method
achieves competitive results against state-of-the-art methods. Code will be made
publicly available.
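
The abstract does not spell out how the two paths are implemented, but it gives enough to sketch the overall shape of such a block. Below is a minimal PyTorch sketch, not the authors' code: the spatial-aware path is approximated with a grouped temporal-channel Conv1d (one parameter group per spatial position, so weights are unshared across H*W), the spatial-unaware path with a temporal convolution after global spatial pooling, and the two paths are fused with the identity shortcut by simple addition. Kernel sizes, normalization, and the fusion rule are assumptions, not details from the paper.

```python
# Hedged sketch of a CTM-style block. Assumptions (not from the paper):
# input is a 5D tensor (N, C, T, H, W), both paths use a temporal kernel of 3,
# and path outputs are fused with the identity shortcut by simple addition.
import torch
import torch.nn as nn


class TCCMPath(nn.Module):
    """Spatial-aware temporal modeling: a temporal-channel convolution whose
    weights are NOT shared across the H*W spatial positions (grouped Conv1d,
    one group per position)."""

    def __init__(self, channels: int, height: int, width: int, t_kernel: int = 3):
        super().__init__()
        positions = height * width
        self.conv = nn.Conv1d(
            in_channels=channels * positions,
            out_channels=channels * positions,
            kernel_size=t_kernel,
            padding=t_kernel // 2,
            groups=positions,  # a separate (C x C x t_kernel) kernel per position
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, t, h, w = x.shape
        # (N, C, T, H, W) -> (N, H*W*C, T) so each spatial position is one group
        y = x.permute(0, 3, 4, 1, 2).reshape(n, h * w * c, t)
        y = self.conv(y)
        return y.reshape(n, h, w, c, t).permute(0, 3, 4, 1, 2)


class SpatialUnawarePath(nn.Module):
    """Spatial-unaware temporal modeling: pool out the spatial dims, run a
    shared temporal convolution, and broadcast back over H and W."""

    def __init__(self, channels: int, t_kernel: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, t_kernel, padding=t_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, t, h, w = x.shape
        y = x.mean(dim=(3, 4))          # (N, C, T): global spatial average pool
        y = self.conv(y)                # temporal modeling shared across space
        return y.view(n, c, t, 1, 1).expand(n, c, t, h, w)


class CTMBlock(nn.Module):
    """Parameter-free identity shortcut plus the two collaborative paths."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        self.spatial_aware = TCCMPath(channels, height, width)
        self.spatial_unaware = SpatialUnawarePath(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.spatial_aware(x) + self.spatial_unaware(x)


if __name__ == "__main__":
    x = torch.randn(2, 64, 8, 7, 7)     # (N, C, T, H, W)
    print(CTMBlock(64, 7, 7)(x).shape)  # torch.Size([2, 64, 8, 7, 7])
```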
Related papers
- Deeply-Coupled Convolution-Transformer with Spatial-temporal
Complementary Learning for Video-based Person Re-identification [91.56939957189505]
We propose a novel spatial-temporal complementary learning framework named Deeply-Coupled Convolution-Transformer (DCCT) for high-performance video-based person Re-ID.
Our framework could attain better performance than most state-of-the-art methods.
arXiv Detail & Related papers (2023-04-27T12:16:44Z)
- Revisiting Temporal Modeling for CLIP-based Image-to-Video Knowledge Transferring [82.84513669453744]
Image-text pretrained models, e.g., CLIP, have shown impressive general multi-modal knowledge learned from large-scale image-text data pairs.
We revisit temporal modeling in the context of image-to-video knowledge transferring.
We present a simple and effective temporal modeling mechanism extending the CLIP model to diverse video tasks.
arXiv Detail & Related papers (2023-01-26T14:12:02Z)
- Skeleton-based Action Recognition via Temporal-Channel Aggregation [5.620303498964992]
We propose Temporal-Channel Aggregation Graph Convolutional Networks (TCA-GCN) to learn spatial and temporal topologies.
In addition, we extract multi-scale skeletal temporal features and fuse them with prior skeletal knowledge using an attention mechanism.
arXiv Detail & Related papers (2022-05-31T16:28:30Z)
- Slow-Fast Visual Tempo Learning for Video-based Action Recognition [78.3820439082979]
Action visual tempo characterizes the dynamics and the temporal scale of an action.
Previous methods capture the visual tempo either by sampling raw videos with multiple rates, or by hierarchically sampling backbone features.
We propose a Temporal Correlation Module (TCM) to extract action visual tempo from low-level backbone features at a single layer.
arXiv Detail & Related papers (2022-02-24T14:20:04Z)
- Spatiotemporal Inconsistency Learning for DeepFake Video Detection [51.747219106855624]
We present a novel temporal modeling paradigm in the TIM by exploiting the temporal difference over adjacent frames along both horizontal and vertical directions.
The ISM simultaneously utilizes the spatial information from the SIM and the temporal information from the TIM to establish a more comprehensive spatial-temporal representation.
arXiv Detail & Related papers (2021-09-04T13:05:37Z)
- Spatial-Temporal Correlation and Topology Learning for Person Re-Identification in Videos [78.45050529204701]
We propose a novel framework to pursue discriminative and robust representation by modeling cross-scale spatial-temporal correlation.
CTL utilizes a CNN backbone and a key-points estimator to extract semantic local features from the human body.
It explores a context-reinforced topology to construct multi-scale graphs by considering both global contextual information and the physical connections of the human body.
arXiv Detail & Related papers (2021-04-15T14:32:12Z)
- Learning Comprehensive Motion Representation for Action Recognition [124.65403098534266]
2D CNN-based methods are efficient but may yield redundant features due to applying the same 2D convolution kernel to each frame.
Recent efforts attempt to capture motion information by establishing inter-frame connections while still suffering from a limited temporal receptive field or high latency.
We propose a Channel-wise Motion Enhancement (CME) module to adaptively emphasize the channels related to dynamic information with a channel-wise gate vector.
We also propose a Spatial-wise Motion Enhancement (SME) module to focus on the regions with the critical target in motion, according to the point-to-point similarity between adjacent feature maps.
(A generic sketch of the channel-wise gating idea appears after this list.)
arXiv Detail & Related papers (2021-03-23T03:06:26Z)
- Comparison of Spatiotemporal Networks for Learning Video Related Tasks [0.0]
Many methods for learning from sequences involve temporally processing 2D CNN features from the individual frames or directly utilizing 3D convolutions within high-performing 2D CNN architectures.
This work constructs an MNIST-based video dataset with parameters controlling relevant facets of common video-related tasks: classification, ordering, and speed estimation.
Models trained on this dataset are shown to differ in key ways depending on the task and their use of 2D convolutions, 3D convolutions, or convolutional LSTMs.
arXiv Detail & Related papers (2020-09-15T19:57:50Z)
- STH: Spatio-Temporal Hybrid Convolution for Efficient Action Recognition [39.58542259261567]
We present a novel Spatio-Temporal Hybrid Network (STH) which simultaneously encodes spatial and temporal video information with a small parameter cost.
Such a design enables efficient spatio-temporal modeling and maintains a small model scale.
STH enjoys performance superiority over 3D CNNs while maintaining an even smaller parameter cost than 2D CNNs.
arXiv Detail & Related papers (2020-03-18T04:46:30Z)
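
As noted in the Learning Comprehensive Motion Representation entry above, channel-wise gating driven by a motion signal is a recurring idea in these works. Below is a minimal, generic sketch of that idea, not the CME module from the paper: the input layout (N, T, C, H, W), the squeeze-and-excitation-style gate, and the use of a simple adjacent-frame difference as the motion signal are all illustrative assumptions.

```python
# Generic sketch of channel-wise gating driven by frame differences -- an
# illustration of the idea behind motion-enhancement modules such as CME,
# not the authors' implementation. Shapes and layer choices are assumptions.
import torch
import torch.nn as nn


class ChannelMotionGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, T, C, H, W)
        diff = x[:, 1:] - x[:, :-1]                  # motion between adjacent frames
        diff = torch.cat([diff, diff[:, -1:]], 1)    # pad back to T frames
        gate = self.fc(diff.mean(dim=(3, 4)))        # (N, T, C) channel-wise gate
        return x * gate.unsqueeze(-1).unsqueeze(-1)  # emphasize motion-related channels


if __name__ == "__main__":
    feats = torch.randn(2, 8, 64, 14, 14)        # (N, T, C, H, W)
    print(ChannelMotionGate(64)(feats).shape)    # torch.Size([2, 8, 64, 14, 14])
```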
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.