Optical Flow for Autonomous Driving: Applications, Challenges and
Improvements
- URL: http://arxiv.org/abs/2301.04422v1
- Date: Wed, 11 Jan 2023 12:01:42 GMT
- Title: Optical Flow for Autonomous Driving: Applications, Challenges and
Improvements
- Authors: Shihao Shen, Louis Kerofsky and Senthil Yogamani
- Abstract summary: We propose and evaluate training strategies to improve a learning-based optical flow algorithm.
While trained with synthetic data, the model demonstrates a strong ability to generalize to real-world fisheye data.
We propose a novel, generic semi-supervised framework that significantly boosts the performance of existing methods in low light.
- Score: 0.9023847175654602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical flow estimation is a well-studied topic for automated driving
applications. Many outstanding optical flow estimation methods have been
proposed, but they become erroneous when tested in challenging scenarios that
are commonly encountered. Despite the increasing use of fisheye cameras for
near-field sensing in automated driving, there is very limited literature on
optical flow estimation with strong lens distortion. Thus we propose and
evaluate training strategies to improve a learning-based optical flow algorithm
by leveraging the only existing fisheye dataset with optical flow ground truth.
While trained with synthetic data, the model demonstrates a strong ability
to generalize to real-world fisheye data. The other challenge neglected by
existing state-of-the-art algorithms is low light. We propose a novel, generic
semi-supervised framework that significantly boosts the performance of existing
methods in such conditions. To the best of our knowledge, this is the first
approach that explicitly handles optical flow estimation in low light.
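The abstract describes the low-light framework only at a high level. Purely as an illustrative sketch (the function name, the nearest-neighbor warp, and the loss weighting below are our own assumptions, not the authors' method), a semi-supervised objective of this kind can pair a supervised endpoint error on labeled synthetic data with a self-supervised photometric term on unlabeled low-light image pairs:

```python
import numpy as np

def combined_semi_supervised_loss(pred_flow, gt_flow, img1, img2, lam=0.5):
    """Hypothetical combined objective: supervised endpoint error (EPE)
    plus a self-supervised photometric consistency term."""
    # Supervised term: mean endpoint error against synthetic ground truth.
    epe = np.mean(np.linalg.norm(pred_flow - gt_flow, axis=-1))

    # Self-supervised term: warp img2 toward img1 with the predicted flow
    # (nearest-neighbor sampling for brevity) and penalize the residual.
    h, w = img1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xw = np.clip(np.round(xs + pred_flow[..., 0]).astype(int), 0, w - 1)
    yw = np.clip(np.round(ys + pred_flow[..., 1]).astype(int), 0, h - 1)
    photometric = np.mean(np.abs(img1 - img2[yw, xw]))

    return epe + lam * photometric
```

On unlabeled low-light pairs only the photometric term contributes, which is what lets unlabeled night-time data participate in training alongside labeled synthetic data.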
Related papers
- Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation [34.529280562470746]
We introduce a novel self-supervised loss combining the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories.
Their effectiveness is demonstrated in two scenarios: in dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model by 29%.
arXiv Detail & Related papers (2024-07-15T15:18:28Z)
- ADFactory: An Effective Framework for Generalizing Optical Flow with NeRF [0.4532517021515834]
We introduce a novel optical flow training framework: the automatic data factory (ADF).
ADF only requires RGB images as input to effectively train the optical flow network on the target data domain.
We use NeRF technology to reconstruct scenes from photo groups collected by a monocular camera.
We screen the generated labels from multiple aspects, such as optical flow matching accuracy, radiance field confidence, and depth consistency.
arXiv Detail & Related papers (2023-11-07T05:21:45Z)
- Skin the sheep not only once: Reusing Various Depth Datasets to Drive the Learning of Optical Flow [25.23550076996421]
We propose to leverage the geometric connection between optical flow estimation and stereo matching.
We turn the monocular depth datasets into stereo ones via virtual disparity.
We also introduce virtual camera motion into stereo data to produce additional flows along the vertical direction.
arXiv Detail & Related papers (2023-10-03T06:56:07Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light-source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Sensor-Guided Optical Flow [53.295332513139925]
This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on known or unseen domains.
We show how these can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms.
arXiv Detail & Related papers (2021-09-30T17:59:57Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- PCA Event-Based Optical Flow for Visual Odometry [0.0]
We present a Principal Component Analysis approach to the problem of event-based optical flow estimation.
We show that the best variant of our proposed method, dedicated to the real-time context of visual odometry, is about two times faster compared to state-of-the-art implementations.
arXiv Detail & Related papers (2021-05-08T18:30:44Z)
- Learning optical flow from still images [53.295332513139925]
We introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture.
We virtually move the camera in the reconstructed environment with known motion vectors and rotation angles.
When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data.
arXiv Detail & Related papers (2021-04-08T17:59:58Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image may be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Back to Event Basics: Self-Supervised Learning of Image Reconstruction for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
arXiv Detail & Related papers (2020-09-17T13:30:05Z)
- Joint Unsupervised Learning of Optical Flow and Egomotion with Bi-Level Optimization [59.9673626329892]
We exploit the global relationship between optical flow and camera motion using epipolar geometry.
We use implicit differentiation to enable back-propagation through the lower-level geometric optimization layer independent of its implementation.
arXiv Detail & Related papers (2020-02-26T22:28:00Z)
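The final entry couples optical flow with egomotion through epipolar geometry: in calibrated normalized coordinates, a correspondence x2 = x1 + flow that agrees with camera rotation R and translation t satisfies x2^T E x1 = 0, where E = [t]x R is the essential matrix. A minimal numerical sketch of this constraint (function names are our own, not from the paper):

```python
import numpy as np

def essential_matrix(R, t):
    """E = [t]x R couples camera rotation R and translation t."""
    # Skew-symmetric matrix [t]x such that [t]x v = t cross v.
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    return tx @ R

def epipolar_residual(x1, x2, E):
    """x1, x2: homogeneous normalized coordinates of a flow correspondence
    (x2 = x1 + flow). The residual x2^T E x1 vanishes exactly when the
    correspondence is consistent with the egomotion encoded in E."""
    return float(x2 @ E @ x1)
```

A flow field that violates this residual for many pixels is inconsistent with the estimated camera motion, which is the signal such joint formulations exploit.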
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.