ReynoldsFlow: Exquisite Flow Estimation via Reynolds Transport Theorem
- URL: http://arxiv.org/abs/2503.04500v2
- Date: Sun, 09 Mar 2025 17:47:41 GMT
- Title: ReynoldsFlow: Exquisite Flow Estimation via Reynolds Transport Theorem
- Authors: Yu-Hsi Chen, Chin-Tien Wu
- Abstract summary: Reynolds flow is a training-free flow estimation method inspired by the Reynolds transport theorem. We introduce an RGB-encoded representation of Reynolds flow designed to improve flow visualization and feature enhancement for neural networks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Optical flow is a fundamental technique for motion estimation, widely applied in video stabilization, interpolation, and object tracking. Traditional optical flow estimation methods rely on restrictive assumptions such as brightness constancy and slow motion. Recent deep learning-based flow estimation methods require extensive training on large domain-specific datasets, making them computationally demanding. Moreover, advances in artificial intelligence (AI) have enabled deep learning models to exploit optical flow as an important feature for object tracking and motion analysis. Since optical flow is commonly encoded in HSV for visualization, its conversion to RGB for neural network processing is nonlinear and may introduce perceptual distortions. These transformations amplify sensitivity to estimation errors, potentially degrading the predictive accuracy of the networks. To address these challenges, which affect the performance of downstream network models, we propose Reynolds flow, a novel training-free flow estimation method inspired by the Reynolds transport theorem, offering a principled approach to modeling complex motion dynamics. In addition to the conventional HSV-based visualization of Reynolds flow, we also introduce an RGB-encoded representation designed to improve flow visualization and feature enhancement for neural networks. We evaluated the effectiveness of Reynolds flow in video-based tasks. Experimental results on three benchmarks (tiny object detection on UAVDB, infrared object detection on Anti-UAV, and pose estimation on GolfDB) demonstrate that networks trained with RGB-encoded Reynolds flow achieve state-of-the-art performance, exhibiting improved robustness and efficiency across all tasks.
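The HSV encoding the abstract refers to is conventionally built by mapping flow direction to hue and flow magnitude to saturation or value. A minimal per-pixel sketch of that standard visualization scheme (using only the Python standard library; this illustrates the common convention, not the paper's Reynolds-flow encoding) might look like:

```python
import colorsys
import math

def flow_to_rgb(u, v, max_mag):
    """Standard HSV-style flow visualization for one pixel:
    direction -> hue, clipped magnitude -> saturation, value fixed at 1.
    Returns (r, g, b) floats in [0, 1]."""
    mag = math.hypot(u, v)                 # flow magnitude
    ang = math.atan2(v, u)                 # flow direction in (-pi, pi]
    hue = (ang / (2.0 * math.pi)) % 1.0    # wrap direction into [0, 1)
    sat = min(mag / max_mag, 1.0)          # saturate at max_mag
    return colorsys.hsv_to_rgb(hue, sat, 1.0)

# Pure rightward flow maps to hue 0 (red); zero flow maps to white.
print(flow_to_rgb(1.0, 0.0, 1.0))  # (1.0, 0.0, 0.0)
print(flow_to_rgb(0.0, 0.0, 1.0))  # (1.0, 1.0, 1.0)
```

The piecewise hue-to-RGB mapping inside `hsv_to_rgb` is precisely the nonlinearity the abstract identifies as a source of perceptual distortion when such visualizations are fed to neural networks.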
Related papers
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
- Physics-Guided Neural Networks for Intraventricular Vector Flow Mapping [1.498019339784467]
We propose novel alternatives to the traditional iVFM optimization scheme by utilizing physics-informed neural networks (PINNs) and a physics-guided nnU-Net-based supervised approach.
Both approaches demonstrate comparable reconstruction performance to the original iVFM algorithm.
The study also suggests potential applications of PINNs in ultrafast color Doppler imaging and the incorporation of fluid dynamics equations to derive biomarkers for cardiovascular diseases based on blood flow.
arXiv Detail & Related papers (2024-03-19T17:35:17Z)
- Vision-Informed Flow Image Super-Resolution with Quaternion Spatial Modeling and Dynamic Flow Convolution [49.45309818782329]
Flow image super-resolution (FISR) aims at recovering high-resolution turbulent velocity fields from low-resolution flow images.
Existing FISR methods mainly process the flow images in natural image patterns.
We propose the first flow visual property-informed FISR algorithm.
arXiv Detail & Related papers (2024-01-29T06:48:16Z)
- Forward Flow for Novel View Synthesis of Dynamic Scenes [97.97012116793964]
We propose a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping.
Our method outperforms existing methods in both novel view rendering and motion modeling.
arXiv Detail & Related papers (2023-09-29T16:51:06Z)
- GAFlow: Incorporating Gaussian Attention into Optical Flow [62.646389181507764]
We incorporate Gaussian Attention (GA) into optical flow models to accentuate local properties during representation learning.
We introduce a novel Gaussian-Constrained Layer (GCL) which can be easily plugged into existing Transformer blocks.
For reliable motion analysis, we provide a new Gaussian-Guided Attention Module (GGAM).
arXiv Detail & Related papers (2023-09-28T07:46:01Z)
- Towards Anytime Optical Flow Estimation with Event Cameras [35.685866753715416]
Event cameras are capable of responding to log-brightness changes in microseconds.
Existing datasets collected via event cameras provide limited frame rate optical flow ground truth.
We propose EVA-Flow, an EVent-based Anytime Flow estimation network to produce high-frame-rate event optical flow.
arXiv Detail & Related papers (2023-07-11T06:15:12Z)
- TransFlow: Transformer as Flow Learner [22.727953339383344]
We propose TransFlow, a pure transformer architecture for optical flow estimation.
It provides more accurate correlation and trustworthy matching in flow estimation.
It recovers more compromised information in flow estimation through long-range temporal association in dynamic scenes.
arXiv Detail & Related papers (2023-04-23T03:11:23Z)
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
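The correlation-and-softmax global matching described above can be sketched in NumPy. This is an illustrative toy version (assuming L2-normalized per-pixel feature vectors and flattened pixel coordinates), not GMFlow's actual implementation:

```python
import numpy as np

def global_match_flow(feat1, feat2, coords):
    """Global feature matching via correlation + softmax, in the spirit
    of GMFlow: each source feature is compared against *all* target
    features, and flow is the softmax-weighted expected target position
    minus the source position.
    feat1, feat2: (N, C) per-pixel feature vectors, N = H * W
    coords:       (N, 2) pixel coordinates
    """
    corr = feat1 @ feat2.T                   # (N, N) correlation volume
    corr -= corr.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(corr)
    w /= w.sum(axis=1, keepdims=True)        # softmax over all targets
    matched = w @ coords                     # expected target position
    return matched - coords                  # flow = displacement
```

With identical source and target features the matching is (near-)exact self-matching, so the recovered flow is zero; permuting the target features recovers the corresponding coordinate displacements.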
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
- Sensor-Guided Optical Flow [53.295332513139925]
This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on known or unseen domains.
We show how these can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms.
arXiv Detail & Related papers (2021-09-30T17:59:57Z)
- Towards high-accuracy deep learning inference of compressible turbulent flows over aerofoils [26.432914066756897]
The present study investigates the accurate inference of Navier-Stokes solutions for compressible flow over aerofoils in two dimensions with a deep neural network.
Our approach yields networks that learn to generate precise flow fields for varying body-fitted, structured grids.
The proposed deep learning method significantly speeds up the predictions of flow fields and shows promise for enabling fast aerodynamic designs.
arXiv Detail & Related papers (2021-09-05T23:23:39Z)
- Unsupervised Motion Representation Enhanced Network for Action Recognition [4.42249337449125]
Motion representation between consecutive frames has proven greatly beneficial to video understanding.
The TV-L1 method, an effective optical flow solver, is time-consuming and storage-intensive due to caching the extracted optical flow.
We propose UF-TSN, a novel end-to-end action recognition approach enhanced with an embedded lightweight unsupervised optical flow estimator.
arXiv Detail & Related papers (2021-03-05T04:14:32Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image is of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- How Do Neural Networks Estimate Optical Flow? A Neuropsychology-Inspired Study [0.0]
In this article, we investigate how deep neural networks estimate optical flow.
For our investigation, we focus on FlowNetS, as it is the prototype of an encoder-decoder neural network for optical flow estimation.
We use a filter identification method that has played a major role in uncovering the motion filters present in animal brains in neuropsychological research.
arXiv Detail & Related papers (2020-04-20T14:08:28Z)
- Volterra Neural Networks (VNNs) [24.12314339259243]
We propose a Volterra filter-inspired network architecture to reduce the complexity of convolutional neural networks.
We show an efficient parallel implementation of this Volterra Neural Network (VNN) along with its remarkable performance.
The proposed approach is evaluated on the UCF-101 and HMDB-51 datasets for action recognition, and is shown to outperform state-of-the-art CNN approaches.
arXiv Detail & Related papers (2019-10-21T19:22:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.