PCA Event-Based Optical Flow for Visual Odometry
- URL: http://arxiv.org/abs/2105.03760v1
- Date: Sat, 8 May 2021 18:30:44 GMT
- Title: PCA Event-Based Optical Flow for Visual Odometry
- Authors: Mahmoud Z. Khairallah, Fabien Bonardi, David Roussel and Samia
Bouchafa
- Abstract summary: We present a Principal Component Analysis approach to the problem of event-based optical flow estimation.
We show that the best variant of our proposed method, dedicated to the real-time context of visual odometry, is about two times faster compared to state-of-the-art implementations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the advent of neuromorphic vision sensors such as event-based cameras, a
paradigm shift is required for most computer vision algorithms. Among these
algorithms, optical flow estimation is a prime candidate for this process
considering that it is linked to a neuromorphic vision approach. Usage of
optical flow is widespread in robotics applications due to its richness and
accuracy. We present a Principal Component Analysis (PCA) approach to the
problem of event-based optical flow estimation. In this approach, we examine
different regularization methods which efficiently enhance the estimation of
the optical flow. We show that the best variant of our proposed method,
dedicated to the real-time context of visual odometry, is about two times
faster compared to state-of-the-art implementations while significantly
improving optical flow accuracy.
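The PCA formulation in the abstract can be illustrated with a minimal sketch: events fired by a moving edge lie approximately on a plane in (x, y, t) space, and the eigenvector of the event covariance matrix with the smallest eigenvalue approximates that plane's normal, from which a normal-flow estimate follows. The function name, windowing, and normal-flow formula below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pca_plane_flow(events):
    """Estimate normal flow from events in one small spatiotemporal window.

    events: (N, 3) array of (x, y, t) coordinates. Assumes the events
    approximately lie on a plane (a locally moving edge).
    """
    centered = events - events.mean(axis=0)
    cov = centered.T @ centered / len(events)
    # eigh returns eigenvalues in ascending order; the eigenvector with the
    # smallest eigenvalue approximates the plane normal (n_x, n_y, n_t).
    _, eigvecs = np.linalg.eigh(cov)
    n = eigvecs[:, 0]
    if abs(n[2]) < 1e-9:
        return None  # plane parallel to the time axis: flow is undefined
    # Spatial gradient of the local time surface: (dt/dx, dt/dy).
    g = -n[:2] / n[2]
    # Normal flow, in pixels per unit time.
    return g / (g @ g + 1e-12)
```

In practice such a fit would be run per incoming event over its spatiotemporal neighborhood; the regularization variants the paper examines would act on this local estimation step.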
Related papers
- Robust Optical Flow Computation: A Higher-Order Differential Approach [0.0]
This research proposes an innovative algorithm for optical flow computation, utilizing the higher precision of a second-order Taylor series approximation.
The algorithm demonstrates strong performance on optical flow benchmarks such as KITTI and Middlebury.
arXiv Detail & Related papers (2024-10-12T15:20:11Z)
- Rethink Predicting the Optical Flow with the Kinetics Perspective [1.7901503554839604]
Optical flow estimation is one of the fundamental tasks in low-level computer vision.
From the appearance perspective, optical flow can be viewed as the correlation between pixels in consecutive frames.
Motivated by this, we propose a method combining appearance and kinetics information.
arXiv Detail & Related papers (2024-05-21T05:47:42Z)
- Vision-Informed Flow Image Super-Resolution with Quaternion Spatial Modeling and Dynamic Flow Convolution [49.45309818782329]
Flow image super-resolution (FISR) aims at recovering high-resolution turbulent velocity fields from low-resolution flow images.
Existing FISR methods mainly process the flow images in natural image patterns.
We propose the first flow visual property-informed FISR algorithm.
arXiv Detail & Related papers (2024-01-29T06:48:16Z)
- Skin the sheep not only once: Reusing Various Depth Datasets to Drive the Learning of Optical Flow [25.23550076996421]
We propose to leverage the geometric connection between optical flow estimation and stereo matching.
We turn the monocular depth datasets into stereo ones via virtual disparity.
We also introduce virtual camera motion into stereo data to produce additional flows along the vertical direction.
arXiv Detail & Related papers (2023-10-03T06:56:07Z)
- Sensor-Guided Optical Flow [53.295332513139925]
This paper proposes a framework to guide an optical flow network with external cues to achieve superior accuracy on known or unseen domains.
We show how these can be obtained by combining depth measurements from active sensors with geometry and hand-crafted optical flow algorithms.
arXiv Detail & Related papers (2021-09-30T17:59:57Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- Self-Supervised Approach for Facial Movement Based Optical Flow [8.19666118455293]
We generate optical flow ground truth for face images using facial key-points.
We train the FlowNetS architecture to test its performance on the generated dataset.
The optical flow obtained using this work has promising applications in facial expression analysis.
arXiv Detail & Related papers (2021-05-04T02:38:11Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- STaRFlow: A SpatioTemporal Recurrent Cell for Lightweight Multi-Frame Optical Flow Estimation [64.99259320624148]
We present a new lightweight CNN-based algorithm for multi-frame optical flow estimation.
The resulting STaRFlow algorithm gives state-of-the-art performance on MPI Sintel and Kitti2015.
arXiv Detail & Related papers (2020-07-10T17:01:34Z)
- Optical Flow Estimation in the Deep Learning Age [27.477810324117016]
We review the developments from early work to the current state of CNNs for optical flow estimation.
We discuss some of their technical details and compare them to recapitulate which technical contribution led to the most significant accuracy improvements.
We provide an overview of the various optical flow approaches introduced in the deep learning age.
arXiv Detail & Related papers (2020-04-06T17:45:43Z)
- Joint Unsupervised Learning of Optical Flow and Egomotion with Bi-Level Optimization [59.9673626329892]
We exploit the global relationship between optical flow and camera motion using epipolar geometry.
We use implicit differentiation to enable back-propagation through the lower-level geometric optimization layer independent of its implementation.
arXiv Detail & Related papers (2020-02-26T22:28:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.