Close-up View synthesis by Interpolating Optical Flow
- URL: http://arxiv.org/abs/2307.05913v1
- Date: Wed, 12 Jul 2023 04:40:00 GMT
- Title: Close-up View synthesis by Interpolating Optical Flow
- Authors: Xinyi Bai, Ze Wang, Lu Yang, Hong Cheng
- Abstract summary: Virtual-viewpoint synthesis is regarded as a new technique in virtual navigation, yet it is not well supported because depth information is missing and camera parameters are unknown.
We develop a bidirectional optical flow method that obtains any virtual viewpoint by proportional interpolation of optical flow.
By carefully exploiting the optical-flow values, we achieve clear, high-fidelity magnified results through lens stretching in any corner of the image.
- Score: 17.800430382213428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual-viewpoint synthesis is regarded as a new technique in virtual
navigation, yet it is not well supported because depth information is
missing and camera parameters are unknown. In this paper, a method for
achieving close-up virtual views is proposed; it uses only optical flow to
build parallax effects and realize a pseudo-3D projection without a depth
sensor. We develop a bidirectional optical flow method that obtains any
virtual viewpoint by proportional interpolation of optical flow. Moreover,
by carefully exploiting the optical-flow values, we achieve clear,
high-fidelity magnified results through lens stretching in any corner of
the image, overcoming the visual distortion and image blur that viewpoint
magnification and transition cause in the Google Street View system.
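The abstract gives only the high-level idea, so the following is a minimal NumPy sketch of what "proportional interpolation of optical flow" could look like when synthesizing an intermediate viewpoint: the bidirectional flows are scaled by the interpolation fraction and used to backward-warp the two real views. It assumes precomputed flows from an off-the-shelf estimator; all function and variable names are illustrative, not the paper's.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(img, flow):
    """Bilinearly sample `img` (H x W x C) at x + flow(x) for every pixel x."""
    h, w = flow.shape[:2]
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + flow[..., 1], xs + flow[..., 0]])  # row, col sample positions
    return np.stack(
        [map_coordinates(img[..., c], coords, order=1, mode="nearest")
         for c in range(img.shape[-1])], axis=-1)

def synthesize_virtual_view(img0, img1, flow_01, flow_10, t):
    """Blend two real views into a virtual view at fraction t in [0, 1].

    The virtual-to-source flows are approximated by linearly scaling the
    bidirectional flows (the 'proportional interpolation' idea); the exact
    formulation and the lens-stretching step in the paper may differ.
    """
    flow_t0 = t * flow_10          # virtual view -> view 0 (approximation)
    flow_t1 = (1.0 - t) * flow_01  # virtual view -> view 1 (approximation)
    return (1.0 - t) * backward_warp(img0, flow_t0) + t * backward_warp(img1, flow_t1)
```

With t = 0.5 this approximates a viewpoint halfway between the two captures; the paper's close-up magnification via lens stretching is an additional step not reproduced here.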
Related papers
- Stereo-Depth Fusion through Virtual Pattern Projection [37.519762078762575]
This paper presents a novel general-purpose stereo and depth data fusion paradigm.
It mimics the active stereo principle by replacing the unreliable physical pattern projector with a depth sensor.
It works by projecting virtual patterns consistent with the scene geometry onto the left and right images acquired by a conventional stereo camera.
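As a rough illustration of the virtual-pattern idea summarized above (not the paper's implementation), the sketch below paints a sparse random pattern onto a rectified stereo pair at positions made mutually consistent by a depth map: a patterned left pixel at column x reappears in the right image at column x - d, where d is the disparity implied by the depth measurement. The function name, density parameter, and pattern statistics are all assumptions.

```python
import numpy as np

def project_virtual_pattern(left, right, depth, focal_px, baseline_m,
                            density=0.05, rng=None):
    """Paint a sparse random pattern onto a rectified stereo pair so the
    added texture is geometrically consistent with the measured depth."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = depth.shape
    out_l = left.astype(np.float32).copy()
    out_r = right.astype(np.float32).copy()

    # pick a sparse set of pixels with valid depth
    mask = (depth > 0) & (rng.random((h, w)) < density)
    ys, xs = np.nonzero(mask)
    disp = focal_px * baseline_m / depth[ys, xs]   # disparity implied by depth
    xr = np.round(xs - disp).astype(int)           # corresponding right column
    keep = (xr >= 0) & (xr < w)
    ys, xs, xr = ys[keep], xs[keep], xr[keep]

    # identical random intensity at matching left/right positions
    values = rng.uniform(0.0, 255.0, size=len(ys))
    out_l[ys, xs] = values[:, None] if out_l.ndim == 3 else values
    out_r[ys, xr] = values[:, None] if out_r.ndim == 3 else values
    return out_l, out_r
```

A conventional stereo matcher run on the augmented pair then sees artificial texture even in originally textureless regions.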
arXiv Detail & Related papers (2024-06-06T17:59:58Z)
- UFD-PRiME: Unsupervised Joint Learning of Optical Flow and Stereo Depth through Pixel-Level Rigid Motion Estimation [4.445751695675388]
Both optical flow and stereo disparities are image matches and can therefore benefit from joint training.
We design a first network that estimates flow and disparity jointly and is trained without supervision.
A second network, trained with optical flow from the first as pseudo-labels, takes disparities from the first network, estimates 3D rigid motion at every pixel, and reconstructs optical flow again.
arXiv Detail & Related papers (2023-10-07T07:08:25Z)
- Skin the sheep not only once: Reusing Various Depth Datasets to Drive the Learning of Optical Flow [25.23550076996421]
We propose to leverage the geometric connection between optical flow estimation and stereo matching.
We turn the monocular depth datasets into stereo ones via virtual disparity.
We also introduce virtual camera motion into stereo data to produce additional flows along the vertical direction.
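A compact NumPy sketch of the "virtual disparity" idea as this summary describes it: scale monocular (possibly relative) depth into a disparity map and forward-warp the image to fabricate a second stereo view. The paper's actual scaling, hole handling, and vertical-motion augmentation are not reproduced; names are illustrative.

```python
import numpy as np

def depth_to_virtual_disparity(depth, max_disp=64.0):
    """Map a (possibly relative) monocular depth map to a virtual disparity
    map by rescaling inverse depth into a plausible pixel range."""
    inv = 1.0 / np.clip(depth, 1e-6, None)
    return max_disp * (inv - inv.min()) / (inv.max() - inv.min() + 1e-6)

def forward_warp_right_view(left, disparity):
    """Splat the left image to a virtual right view: each left pixel (x, y)
    lands at (x - d, y). Unfilled (disoccluded) pixels remain zero."""
    h, w = disparity.shape
    right = np.zeros_like(left, dtype=np.float32)
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    xr = np.round(xs - disparity).astype(int)
    valid = (xr >= 0) & (xr < w)
    # splat far-to-near so closer pixels (larger disparity) win overlaps
    order = np.argsort(disparity[valid])
    ys_v, xs_v, xr_v = ys[valid][order], xs[valid][order], xr[valid][order]
    right[ys_v, xr_v] = left[ys_v, xs_v]
    return right
```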
arXiv Detail & Related papers (2023-10-03T06:56:07Z)
- Optimization-Based Eye Tracking using Deflectometric Information [14.010352335803873]
State-of-the-art eye tracking methods are either reflection-based and track reflections of sparse point light sources, or image-based and exploit 2D features of the acquired eye image.
We develop a differentiable pipeline based on PyTorch3D that simulates a virtual eye under screen illumination.
In general, our method does not require a specific pattern rendering and can work with ordinary video frames of the main VR/AR/MR screen itself.
arXiv Detail & Related papers (2023-03-09T02:41:13Z)
- Dimensions of Motion: Learning to Predict a Subspace of Optical Flow from a Single Image [50.9686256513627]
We introduce the problem of predicting, from a single video frame, a low-dimensional subspace of optical flow which includes the actual instantaneous optical flow.
We show how several natural scene assumptions allow us to identify an appropriate flow subspace via a set of basis flow fields parameterized by disparity.
This provides a new approach to learning these tasks in an unsupervised fashion using monocular input video without requiring camera intrinsics or poses.
arXiv Detail & Related papers (2021-12-02T18:52:54Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Learning optical flow from still images [53.295332513139925]
We introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture.
We virtually move the camera in the reconstructed environment with known motion vectors and rotation angles.
When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data.
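The core geometric step this summary alludes to, turning per-pixel depth plus a known virtual camera motion into dense ground-truth flow, could be sketched as follows in NumPy (the paper's full pipeline additionally handles occlusions and image inpainting; names are illustrative):

```python
import numpy as np

def flow_from_depth_and_motion(depth, K, R, t):
    """Optical flow induced by a known virtual camera motion (R, t).

    Back-projects every pixel using its depth, applies the rigid motion,
    re-projects with the same intrinsics K, and returns the per-pixel
    displacement field.
    """
    h, w = depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pix                    # back-projected viewing rays
    pts = rays * depth.reshape(1, -1)                # 3D points in the camera frame
    moved = R @ pts + t.reshape(3, 1)                # apply the virtual motion
    proj = K @ moved
    uv = proj[:2] / proj[2:3]                        # re-projected pixel positions
    return (uv - pix[:2]).T.reshape(h, w, 2)         # displacement per pixel
```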
arXiv Detail & Related papers (2021-04-08T17:59:58Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image may be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z)
- Joint Unsupervised Learning of Optical Flow and Egomotion with Bi-Level Optimization [59.9673626329892]
We exploit the global relationship between optical flow and camera motion using epipolar geometry.
We use implicit differentiation to enable back-propagation through the lower-level geometric optimization layer independent of its implementation.
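To make the epipolar relationship between flow and egomotion concrete, here is a small NumPy sketch that evaluates the algebraic epipolar residual of a flow field against a hypothesized camera motion. It is only a generic consistency check, not the paper's bi-level optimization; function and argument names are assumptions.

```python
import numpy as np

def epipolar_flow_residual(pts, flow, R, t, K):
    """Algebraic epipolar residual linking an optical flow field to egomotion.

    With correspondences x2 = x1 + flow and the essential matrix E = [t]_x R,
    a static scene observed under motion (R, t) satisfies x2n^T E x1n ~ 0
    in normalized camera coordinates.
    """
    tx = np.array([[0.0, -t[2], t[1]],
                   [t[2], 0.0, -t[0]],
                   [-t[1], t[0], 0.0]])
    E = tx @ R
    Kinv = np.linalg.inv(K)

    def normalize(p):  # pixel coords (N, 2) -> normalized homogeneous (N, 3)
        ph = np.concatenate([p, np.ones((len(p), 1))], axis=1)
        return (Kinv @ ph.T).T

    x1 = normalize(pts)
    x2 = normalize(pts + flow)
    return np.einsum("ni,ij,nj->n", x2, E, x1)  # one residual per correspondence
```

Residuals near zero mean the flow at those pixels is consistent with a static scene under the hypothesized motion; large residuals flag independently moving objects or flow errors.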
arXiv Detail & Related papers (2020-02-26T22:28:00Z)