MPI-Flow: Learning Realistic Optical Flow with Multiplane Images
- URL: http://arxiv.org/abs/2309.06714v1
- Date: Wed, 13 Sep 2023 04:31:00 GMT
- Title: MPI-Flow: Learning Realistic Optical Flow with Multiplane Images
- Authors: Yingping Liang, Jiaming Liu, Debing Zhang, Ying Fu
- Abstract summary: We investigate generating realistic optical flow datasets from real-world images.
To generate highly realistic new images, we construct a layered depth representation, known as multiplane images (MPI), from single-view images.
To ensure the realism of motion, we present an independent object motion module that can separate the camera and dynamic object motion in MPI.
- Score: 18.310665144874775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The accuracy of learning-based optical flow estimation models heavily relies
on the realism of the training datasets. Current approaches for generating such
datasets either employ synthetic data or generate images with limited realism.
However, the domain gap between these data and real-world scenes constrains the
generalization of trained models to real-world applications. To address this
issue, we investigate generating realistic optical flow datasets from
real-world images. Firstly, to generate highly realistic new images, we
construct a layered depth representation, known as multiplane images (MPI),
from single-view images. This allows us to generate novel view images that are
highly realistic. To generate optical flow maps that correspond accurately to
the new image, we calculate the optical flows of each plane using the camera
matrix and plane depths. We then project these layered optical flows into the
output optical flow map with volume rendering. Secondly, to ensure the realism
of motion, we present an independent object motion module that can separate the
camera and dynamic object motion in MPI. This module addresses the deficiency
in MPI-based single-view methods, where optical flow is generated only by
camera motion and does not account for any object movement. We additionally
devise a depth-aware inpainting module to merge new images with dynamic objects
and address unnatural motion occlusions. We show the superior performance of
our method through extensive experiments on real-world datasets. Moreover, our
approach achieves state-of-the-art performance in both unsupervised and
supervised training of learning-based models. The code will be made publicly
available at: \url{https://github.com/Sharpiless/MPI-Flow}.
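To make the abstract's two-step flow construction concrete, here is a minimal NumPy sketch (our own illustration, not the released code): each fronto-parallel MPI plane at depth d moves under a camera motion X' = R X + t by the plane homography H = K (R + t n^T / d) K^{-1}, and the resulting per-plane flows are alpha-composited front-to-back, mirroring how MPI colors are volume-rendered.

```python
# Hedged sketch (ours, not the authors' code): per-plane flow from the
# camera matrix and plane depth, then volume-rendering-style compositing.
import numpy as np

def plane_flow(K, R, t, d, h, w):
    """Flow induced on the fronto-parallel plane z = d by a camera motion
    that maps source-frame points as X' = R X + t.

    For points on the plane, n^T X = d with n = (0, 0, 1), so
    X' = (R + t n^T / d) X and pixels move by H = K (R + t n^T / d) K^{-1}.
    """
    n = np.array([0.0, 0.0, 1.0])
    H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)
    warped = pix @ H.T
    warped = warped[..., :2] / warped[..., 2:3]   # perspective divide
    return warped - pix[..., :2]                  # (h, w, 2) flow

def composite_flows(flows, alphas):
    """Front-to-back alpha compositing of per-plane flows.

    `flows` and `alphas` are ordered near to far, as in MPI rendering."""
    out = np.zeros_like(flows[0])
    transmittance = np.ones_like(alphas[0])
    for flow, alpha in zip(flows, alphas):
        out += (transmittance * alpha)[..., None] * flow
        transmittance = transmittance * (1.0 - alpha)
    return out
```

Because the same per-plane alphas also composite the plane colors, the rendered novel view and the rendered flow map stay geometrically consistent.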
Related papers
- Improving Unsupervised Video Object Segmentation via Fake Flow Generation [20.89278343723177]
We propose a novel data generation method that simulates fake optical flows from single images.
Inspired by the observation that optical flow maps are highly dependent on depth maps, we generate fake optical flows by refining and augmenting the estimated depth maps of each image.
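As a hedged toy illustration of that idea (not the paper's method): flow magnitude under camera translation scales with inverse depth, so a plausible "fake" flow can be synthesized from an estimated depth map alone. The translation direction and magnitude below are arbitrary illustrative choices.

```python
# Toy sketch (ours): a depth-dependent fake flow field from an estimated
# depth map; closer pixels move more, as under a virtual camera translation.
import numpy as np

def fake_flow_from_depth(depth, max_mag=20.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    tx, ty = rng.uniform(-1.0, 1.0, size=2)   # arbitrary translation direction
    inv = 1.0 / np.clip(depth, 1e-3, None)    # parallax ~ inverse depth
    inv = (inv - inv.min()) / (inv.max() - inv.min() + 1e-8)
    return np.stack([tx * max_mag * inv, ty * max_mag * inv], axis=-1)
```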
arXiv Detail & Related papers (2024-07-16T13:32:50Z)
- SynFog: A Photo-realistic Synthetic Fog Dataset based on End-to-end Imaging Simulation for Advancing Real-World Defogging in Autonomous Driving [48.27575423606407]
We introduce an end-to-end simulation pipeline designed to generate photo-realistic foggy images.
We present a new synthetic fog dataset named SynFog, which features both sky light and active lighting conditions.
Experimental results demonstrate that models trained on SynFog exhibit superior performance in visual perception and detection accuracy.
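The summary does not spell out the imaging model; as a point of reference, the classical atmospheric scattering model below is the usual starting point for depth-dependent fog synthesis (SynFog's end-to-end pipeline, with sky light and active lighting, is more elaborate). Parameter names are ours.

```python
# Classical fog model I = J * t + A * (1 - t), t = exp(-beta * depth).
# A hedged baseline sketch, not SynFog's actual simulator.
import numpy as np

def add_fog(clear_rgb, depth, beta=0.05, airlight=(0.9, 0.9, 0.92)):
    """clear_rgb: (h, w, 3) floats in [0, 1]; depth: (h, w) in meters."""
    t = np.exp(-beta * depth)[..., None]   # transmittance falls with depth
    A = np.asarray(airlight)               # global atmospheric light
    return clear_rgb * t + A * (1.0 - t)
```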
arXiv Detail & Related papers (2024-03-25T18:32:41Z)
- RealFlow: EM-based Realistic Optical Flow Dataset Generation from Videos [28.995525297929348]
RealFlow is a framework that can create large-scale optical flow datasets directly from unlabeled realistic videos.
We first estimate optical flow between a pair of video frames, and then synthesize a new image from this pair based on the predicted flow.
Our approach achieves state-of-the-art performance on two standard benchmarks compared with both supervised and unsupervised optical flow methods.
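A hedged sketch of the synthesis step described above: forward-splat the first frame along the predicted flow to hallucinate a new frame, so that (frame, new frame, flow) forms a labeled training triplet. This nearest-pixel splatting is our simplification; RealFlow's renderer handles collisions and holes more carefully.

```python
# Simplified forward warp (ours, not RealFlow's code): move each pixel of
# `img` along `flow` and average where multiple pixels land together.
import numpy as np

def forward_splat(img, flow):
    """img: (h, w, 3) floats; flow: (h, w, 2) in pixels."""
    h, w = img.shape[:2]
    out = np.zeros(img.shape, dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[0:h, 0:w]
    xt = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    np.add.at(out, (yt, xt), img)            # accumulate moved pixels
    np.add.at(weight, (yt, xt), 1.0)
    mask = weight > 0
    out[mask] /= weight[mask][..., None]     # average where splats collide
    return out
```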
arXiv Detail & Related papers (2022-07-22T13:33:03Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIB-R++, a hybrid differentiable renderer that supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIB-R++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Learning optical flow from still images [53.295332513139925]
We introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture.
We virtually move the camera in the reconstructed environment with known motion vectors and rotation angles.
When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data.
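A minimal sketch, under our own naming, of this depth-based labeling: back-project each pixel with an estimated depth map, apply the known virtual camera motion, and re-project; the pixel displacement field is the ground-truth flow.

```python
# Hedged sketch (ours): ground-truth flow from a single image's depth map
# and a known virtual camera motion X' = R X + t.
import numpy as np

def flow_from_virtual_motion(depth, K, R, t):
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T           # back-project to unit-depth rays
    pts = rays * depth[..., None]             # 3D points in the source camera
    pts_new = pts @ R.T + t                   # apply the known virtual motion
    proj = pts_new @ K.T
    proj = proj[..., :2] / proj[..., 2:3]     # perspective divide
    return proj - pix[..., :2]                # (h, w, 2) flow label
```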
arXiv Detail & Related papers (2021-04-08T17:59:58Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image may be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)