Flow Dynamics Correction for Action Recognition
- URL: http://arxiv.org/abs/2310.10059v2
- Date: Sat, 16 Dec 2023 04:37:10 GMT
- Title: Flow Dynamics Correction for Action Recognition
- Authors: Lei Wang and Piotr Koniusz
- Abstract summary: We show that existing action recognition models that rely on optical flow achieve improved performance with our corrected optical flow.
We integrate our corrected flow dynamics into popular models through a simple hallucination step that selects only the best-performing optical flow features.
- Score: 43.95003560364798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various research studies indicate that action recognition performance
depends heavily on the types of motions being extracted and on how accurately
human actions are represented. In this paper, we investigate different optical
flows and the features extracted from them that capture both short-term and
long-term motion dynamics. We perform power normalization on the magnitude
component of optical flow for flow dynamics correction, boosting subtle motions
and dampening sudden ones. We show that existing action recognition models that
rely on optical flow achieve improved performance with our corrected optical
flow. To further improve performance, we integrate our corrected flow dynamics
into popular models through a simple hallucination step that selects only the
best-performing optical flow features, and we show that 'translating' the CNN
feature maps into these optical flow features at different scales of motion
leads to new state-of-the-art performance on several benchmarks, including
HMDB-51, YUP++, fine-grained action recognition on MPII Cooking Activities, and
large-scale Charades.
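A minimal sketch of the magnitude correction described above, assuming a dense flow field stored as an (H, W, 2) NumPy array; the function name, the max-based rescaling, and the default exponent are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def correct_flow_dynamics(flow: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Power-normalize the magnitude component of a dense optical flow field.

    flow  : array of shape (H, W, 2) holding per-pixel (dx, dy) displacements.
    gamma : illustrative exponent; values below 1 boost subtle motions, while
            values above 1 dampen sudden ones.
    """
    dx, dy = flow[..., 0], flow[..., 1]
    magnitude = np.sqrt(dx ** 2 + dy ** 2)
    angle = np.arctan2(dy, dx)

    # Normalize magnitudes to [0, 1] so the exponent acts on relative motion
    # strength, then rescale back to the original range.
    max_mag = magnitude.max() + 1e-8
    corrected_mag = max_mag * (magnitude / max_mag) ** gamma

    # Recompose the flow field from the corrected magnitude and the original,
    # untouched flow direction.
    return np.stack([corrected_mag * np.cos(angle),
                     corrected_mag * np.sin(angle)], axis=-1)
```

The corrected flow can then replace the raw flow in any flow-based recognition model; the hallucination step mentioned in the abstract additionally 'translates' CNN feature maps into the best-performing of these corrected-flow features.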
Related papers
- Generalizable Implicit Motion Modeling for Video Frame Interpolation [51.966062283735596]
Motion is critical in flow-based Video Frame Interpolation (VFI).
We introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI.
Our GIMM can be easily integrated with existing flow-based VFI works by supplying accurately modeled motion.
arXiv Detail & Related papers (2024-07-11T17:13:15Z) - Vision-Informed Flow Image Super-Resolution with Quaternion Spatial Modeling and Dynamic Flow Convolution [49.45309818782329]
Flow image super-resolution (FISR) aims at recovering high-resolution turbulent velocity fields from low-resolution flow images.
Existing FISR methods mainly process flow images as if they were natural images.
We propose the first flow visual property-informed FISR algorithm.
arXiv Detail & Related papers (2024-01-29T06:48:16Z) - Self-Supervised Motion Magnification by Backpropagating Through Optical Flow [16.80592879244362]
This paper presents a self-supervised method for magnifying subtle motions in video.
We manipulate the video such that its new optical flow is scaled by the desired amount.
We propose a loss function that estimates the optical flow of the generated video and penalizes how far it deviates from the given magnification factor; a hedged sketch of such a loss appears after this list.
arXiv Detail & Related papers (2023-11-28T18:59:51Z) - EM-driven unsupervised learning for efficient motion segmentation [3.5232234532568376]
This paper presents a CNN-based fully unsupervised method for motion segmentation from optical flow.
We leverage the Expectation-Maximization (EM) framework to design the loss function and the training procedure of our motion segmentation neural network.
Our method outperforms comparable unsupervised methods and is very efficient.
arXiv Detail & Related papers (2022-01-06T14:35:45Z) - Sensor-Guided Optical Flow [53.295332513139925]
This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on known or unseen domains.
We show how these can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms.
arXiv Detail & Related papers (2021-09-30T17:59:57Z) - PCA Event-Based Optical Flow for Visual Odometry [0.0]
We present a Principal Component Analysis approach to the problem of event-based optical flow estimation.
We show that the best variant of our proposed method, dedicated to the real-time context of visual odometry, is about two times faster compared to state-of-the-art implementations.
arXiv Detail & Related papers (2021-05-08T18:30:44Z) - Unsupervised Motion Representation Enhanced Network for Action Recognition [4.42249337449125]
Motion representation between consecutive frames has proven to greatly benefit video understanding.
The TV-L1 method, an effective optical flow solver, is time-consuming and requires substantial storage to cache the extracted optical flow.
We propose UF-TSN, a novel end-to-end action recognition approach enhanced with an embedded lightweight unsupervised optical flow estimator.
arXiv Detail & Related papers (2021-03-05T04:14:32Z) - Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z) - PAN: Towards Fast Action Recognition via Learning Persistence of Appearance [60.75488333935592]
Most state-of-the-art methods heavily rely on dense optical flow as motion representation.
In this paper, we shed light on fast action recognition by lifting the reliance on optical flow.
We design a novel motion cue called Persistence of Appearance (PA).
In contrast to optical flow, our PA focuses more on distilling the motion information at boundaries.
arXiv Detail & Related papers (2020-08-08T07:09:54Z)
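For the self-supervised motion magnification entry above, a hedged sketch of what the described flow-based penalty could look like, assuming the original and generated flows are available as (H, W, 2) arrays and using a mean absolute deviation; the function name and the exact form of the penalty are assumptions, not the paper's definition.

```python
import numpy as np

def magnification_flow_loss(flow_orig: np.ndarray,
                            flow_gen: np.ndarray,
                            alpha: float) -> float:
    """Penalize the generated video's flow for deviating from the original
    flow scaled by the target magnification factor alpha (illustrative only).
    Both arrays are expected to have shape (H, W, 2).
    """
    return float(np.abs(flow_gen - alpha * flow_orig).mean())
```

In the paper the flow of the generated video is estimated by an optical flow network so the penalty can be backpropagated through it; here both flows are treated as precomputed inputs for simplicity.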