PanoFlow: Learning Optical Flow for Panoramic Images
- URL: http://arxiv.org/abs/2202.13388v1
- Date: Sun, 27 Feb 2022 16:03:38 GMT
- Title: PanoFlow: Learning Optical Flow for Panoramic Images
- Authors: Hao Shi, Yifan Zhou, Kailun Yang, Yaozu Ye, Xiaoting Yin, Zhe Yin, Shi Meng, Kaiwei Wang
- Abstract summary: We put forward a novel network framework, PanoFlow, to learn optical flow for panoramic images.
We propose a Flow Distortion Augmentation (FDA) method and a Cyclic Flow Estimation (CFE) method.
Our proposed approach reduces the End-Point-Error (EPE) on the established Flow360 dataset by 26%.
- Score: 8.009873804948919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical flow estimation is a basic task in self-driving and robotics
systems, enabling temporal interpretation of the traffic scene. Autonomous vehicles
clearly benefit from the ultra-wide Field of View (FoV) offered by 360-degree
panoramic sensors. However, due to the unique imaging process of panoramic
images, models designed for pinhole images do not directly generalize
satisfactorily to 360-degree panoramic images. In this paper, we put forward a
novel network framework, PanoFlow, to learn optical flow for panoramic images.
To overcome the distortions introduced by equirectangular projection in
panoramic transformation, we design a Flow Distortion Augmentation (FDA)
method. We further propose a Cyclic Flow Estimation (CFE) method by leveraging
the cyclicity of spherical images to infer 360-degree optical flow, converting
large displacements into relatively small ones (a toy sketch of this idea follows
the abstract). PanoFlow is applicable to any existing flow estimation method and
benefits from the progress
of narrow-FoV flow estimation. In addition, we create and release a synthetic
panoramic dataset Flow360 based on CARLA to facilitate training and
quantitative analysis. PanoFlow achieves state-of-the-art performance. Our
proposed approach reduces the End-Point-Error (EPE) on the established Flow360
dataset by 26%. On the public OmniFlowNet dataset, PanoFlow achieves an EPE of
3.34 pixels, a 53.1% error reduction from the best published result (7.12
pixels). We also validate our method via an outdoor collection vehicle,
indicating strong potential and robustness for real-world navigation
applications. Code and dataset are publicly available at
https://github.com/MasterHow/PanoFlow.
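To make the CFE idea concrete, here is a minimal NumPy sketch based only on the abstract's description: estimate flow once on the original equirectangular pair and once on a pair cyclically rolled by half the image width, then keep, per pixel, the candidate with the smaller displacement. `estimate_flow` stands in for any narrow-FoV flow network, and the minimum-magnitude fusion is an illustrative assumption rather than the paper's exact scheme; the End-Point-Error (EPE) metric quoted in the results is included for completeness.

```python
import numpy as np

def wrap_horizontal(u, width):
    # Wrap horizontal displacements into [-width/2, width/2): on a 360-degree
    # equirectangular image, moving (width - 10) pixels right equals moving 10 left.
    return (u + width / 2.0) % width - width / 2.0

def cyclic_flow_estimation(img1, img2, estimate_flow):
    # img1, img2: (H, W, 3) equirectangular frames covering 360 degrees.
    # estimate_flow: any narrow-FoV estimator returning an (H, W, 2) flow field.
    H, W = img1.shape[:2]
    shift = W // 2

    # Pass 1: flow on the original frames.
    flow_a = estimate_flow(img1, img2)

    # Pass 2: flow on frames cyclically rolled by half the width, rolled back
    # afterwards. A large wrap-around motion in pass 1 becomes a small one here.
    flow_b = np.roll(
        estimate_flow(np.roll(img1, shift, axis=1), np.roll(img2, shift, axis=1)),
        -shift, axis=1)

    # Both passes describe the same motion up to horizontal wrap-around; keep,
    # per pixel, the candidate with the smaller displacement magnitude.
    flow_a[..., 0] = wrap_horizontal(flow_a[..., 0], W)
    flow_b[..., 0] = wrap_horizontal(flow_b[..., 0], W)
    take_b = np.linalg.norm(flow_b, axis=-1) < np.linalg.norm(flow_a, axis=-1)
    return np.where(take_b[..., None], flow_b, flow_a)

def end_point_error(flow_pred, flow_gt):
    # Average End-Point-Error (EPE): mean Euclidean distance between predicted
    # and ground-truth flow vectors, the metric reported in the abstract.
    return float(np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1)))
```

With a perfect estimator the two passes agree; in practice the minimum-magnitude choice simply prefers whichever pass saw the motion without crossing the image seam.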
Related papers
- FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner [70.90505084288057]
Flow-based models tend to produce a straighter sampling trajectory during the sampling process.
We introduce several techniques including a pseudo corrector and sample-aware compilation to further reduce inference time.
FlowTurbo reaches an FID of 2.12 on ImageNet at 100 ms/img and an FID of 3.93 at 38 ms/img.
arXiv Detail & Related papers (2024-09-26T17:59:51Z)
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
- VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation [61.660040308290796]
VideoFlow is a novel optical flow estimation framework for videos.
We first propose a TRi-frame Optical Flow (TROF) module that estimates bi-directional optical flows for the center frame in a three-frame manner.
With iterative flow estimation refinement, the information fused in individual TROFs can be propagated into the whole sequence via a MOtion Propagation (MOP) module.
arXiv Detail & Related papers (2023-03-15T03:14:30Z)
- BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation [76.66876888943385]
Event cameras provide high temporal precision, low data rates, and high dynamic range visual perception.
We present a novel simulator, BlinkSim, for the fast generation of large-scale data for event-based optical flow.
arXiv Detail & Related papers (2023-03-14T09:03:54Z)
- RealFlow: EM-based Realistic Optical Flow Dataset Generation from Videos [28.995525297929348]
RealFlow is a framework that can create large-scale optical flow datasets directly from unlabeled realistic videos.
We first estimate optical flow between a pair of video frames, and then synthesize a new image from this pair based on the predicted flow (a toy warping sketch follows below).
Our approach achieves state-of-the-art performance on two standard benchmarks compared with both supervised and unsupervised optical flow methods.
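The synthesis step above is easy to picture with a toy forward-warping routine. The following is a minimal NumPy stand-in under simplifying assumptions (nearest-pixel splatting, no occlusion or hole handling), not RealFlow's actual EM-based pipeline: every pixel of the source frame is pushed along its predicted flow vector, so the resulting (source, synthesized) pair has that flow as ground truth by construction.

```python
import numpy as np

def forward_splat(img, flow):
    # Push each pixel of img (H, W, 3) along its flow vector (H, W, 2).
    # Unfilled target pixels remain zero (holes); when several source pixels
    # land on the same target, one of them wins arbitrarily.
    H, W = img.shape[:2]
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.rint(xs + flow[..., 0]).astype(int)
    yt = np.rint(ys + flow[..., 1]).astype(int)
    ok = (xt >= 0) & (xt < W) & (yt >= 0) & (yt < H)
    out[yt[ok], xt[ok]] = img[ok]
    return out
```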
arXiv Detail & Related papers (2022-07-22T13:33:03Z)
- SCFlow: Optical Flow Estimation for Spiking Camera [50.770803466875364]
The spiking camera has enormous potential in real applications, especially for motion estimation in high-speed scenes.
Optical flow estimation has achieved remarkable success in image-based and event-based vision, but existing methods cannot be directly applied to the spike streams produced by a spiking camera.
This paper presents SCFlow, a novel deep learning pipeline for optical flow estimation for the spiking camera.
arXiv Detail & Related papers (2021-10-08T06:16:45Z)
- Dense Optical Flow from Event Cameras [55.79329250951028]
We propose to incorporate feature correlation and sequential processing into dense optical flow estimation from event cameras.
Our proposed approach computes dense optical flow and reduces the end-point error by 23% on MVSEC.
arXiv Detail & Related papers (2021-08-24T07:39:08Z)
- OmniFlow: Human Omnidirectional Optical Flow [0.0]
Our paper presents OmniFlow: a new synthetic omnidirectional human optical flow dataset.
Based on a rendering engine, we create a naturalistic 3D indoor environment with textured rooms, characters, actions, objects, illumination and motion blur.
The simulation outputs rendered images of household activities together with the corresponding forward and backward optical flow.
arXiv Detail & Related papers (2021-04-16T08:25:20Z)