Light Field View Synthesis via Aperture Disparity and Warping Confidence Map
- URL: http://arxiv.org/abs/2009.02978v2
- Date: Sat, 3 Apr 2021 05:00:06 GMT
- Title: Light Field View Synthesis via Aperture Disparity and Warping Confidence Map
- Authors: Nan Meng, Kai Li, Jianzhuang Liu, Edmund Y. Lam
- Abstract summary: This paper presents a learning-based approach to synthesize the view from an arbitrary camera position given a sparse set of images.
A key challenge in this novel view synthesis arises during reconstruction, where the views from different input images may be inconsistent due to obstructions in the light path.
- Score: 47.046276641506786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a learning-based approach to synthesize the view from an arbitrary camera position given a sparse set of images. A key challenge in this novel view synthesis arises during reconstruction, where the views from different input images may be inconsistent due to obstructions in the light path. We overcome this by jointly modeling the epipolar property and occlusion in the design of a convolutional neural network. We start by defining and computing the aperture disparity map, which approximates the parallax and measures the pixel-wise shift between two views. While this relates to free-space rendering and can fail near object boundaries, we further develop a warping confidence map to address pixel occlusion in these challenging regions. The proposed method is evaluated on diverse real-world and synthetic light field scenes, and it shows better performance than several state-of-the-art techniques.
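The abstract describes two components: an aperture disparity map that gives the pixel-wise shift between two views, and a warping confidence map that downweights unreliable (occluded) pixels when fusing the warped views. The NumPy sketch below illustrates only this warp-then-blend step under simplifying assumptions (purely horizontal disparity, nearest-neighbor sampling, precomputed confidence maps); the function names and the confidence-weighted average are illustrative, not the paper's actual network.

```python
import numpy as np

def warp_with_disparity(src, disparity, shift):
    """Warp a source view toward a target viewpoint (illustrative sketch).

    src:       (H, W, 3) source view.
    disparity: (H, W) pixel-wise shift between two views (the "aperture
               disparity" map; assumed purely horizontal here).
    shift:     scalar baseline fraction toward the target camera position.
    """
    h, w, _ = src.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Backward warp: sample the source at positions displaced by the scaled
    # disparity (nearest-neighbor for brevity).
    xs_src = np.clip(np.round(xs - shift * disparity).astype(int), 0, w - 1)
    return src[ys, xs_src]

def blend_with_confidence(warp_a, warp_b, conf_a, conf_b, eps=1e-6):
    """Fuse two warped views, weighting each pixel by its warping confidence.

    conf_a, conf_b: (H, W) maps in [0, 1]; low values mark pixels whose warp
    is unreliable, e.g. near occlusion boundaries.
    """
    wa, wb = conf_a[..., None], conf_b[..., None]
    return (wa * warp_a + wb * warp_b) / (wa + wb + eps)
```

In the paper both the disparity and confidence maps are predicted by the network; here they would have to be supplied, so the sketch only conveys how the two maps interact during synthesis.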
Related papers
- CMC: Few-shot Novel View Synthesis via Cross-view Multiplane Consistency [18.101763989542828]
We propose a simple yet effective method that explicitly builds depth-aware consistency across input views.
Our key insight is that by forcing the same spatial points to be sampled repeatedly in different input views, we are able to strengthen the interactions between views.
Although simple, extensive experiments demonstrate that our proposed method can achieve better synthesis quality than state-of-the-art methods.
arXiv Detail & Related papers (2024-02-26T09:04:04Z)
- Neural Spline Fields for Burst Image Fusion and Layer Separation [40.9442467471977]
We propose a versatile intermediate representation: a two-layer alpha-composited image plus flow model constructed with neural spline fields.
Our method is able to jointly fuse a burst image capture into one high-resolution reconstruction and decompose it into transmission and obstruction layers.
We find that, with no post-processing steps or learned priors, our generalizable model is able to outperform existing dedicated single-image and multi-view obstruction removal approaches.
arXiv Detail & Related papers (2023-12-21T18:54:19Z)
- Fine Dense Alignment of Image Bursts through Camera Pose and Depth Estimation [45.11207941777178]
This paper introduces a novel approach to the fine alignment of images in a burst captured by a handheld camera.
The proposed algorithm establishes dense correspondences by optimizing both the camera motion and surface depth and orientation at every pixel.
arXiv Detail & Related papers (2023-12-08T17:22:04Z)
- Layered Rendering Diffusion Model for Zero-Shot Guided Image Synthesis [60.260724486834164]
This paper introduces innovative solutions to enhance spatial controllability in diffusion models reliant on text queries.
We present two key innovations: Vision Guidance and the Layered Rendering Diffusion framework.
We apply our method to three practical applications: bounding box-to-image, semantic mask-to-image and image editing.
arXiv Detail & Related papers (2023-11-30T10:36:19Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- Content-aware Warping for View Synthesis [110.54435867693203]
We propose content-aware warping, which adaptively learns the weights for pixels in a relatively large neighborhood from their contextual information via a lightweight neural network (a rough sketch of this neighborhood-weighted warping appears after this list).
Based on this learnable warping module, we propose a new end-to-end learning-based framework for novel view synthesis from two source views.
Experimental results on structured light field datasets with wide baselines and unstructured multi-view datasets show that the proposed method significantly outperforms state-of-the-art methods both quantitatively and visually.
arXiv Detail & Related papers (2022-01-22T11:35:05Z)
- Bridging the Visual Gap: Wide-Range Image Blending [16.464837892640812]
We introduce an effective deep-learning model to realize wide-range image blending.
We experimentally demonstrate that our proposed method is able to produce visually appealing results.
arXiv Detail & Related papers (2021-03-28T15:07:45Z)
- Semantic View Synthesis [56.47999473206778]
We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
arXiv Detail & Related papers (2020-08-24T17:59:46Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
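The "Content-aware Warping for View Synthesis" entry above replaces single-pixel sampling with a weighted combination over a neighborhood around the warped location. The NumPy sketch below shows that idea with precomputed weights; in the actual method the weights are predicted by a lightweight network from contextual information, and the names, neighborhood size, and nearest-neighbor disparity shift here are illustrative assumptions.

```python
import numpy as np

def content_aware_warp(src, disparity, weights, k=3):
    """Neighborhood-weighted warping (illustrative sketch).

    src:       (H, W, C) source view.
    disparity: (H, W) horizontal shift toward the target view.
    weights:   (H, W, k*k) per-pixel neighborhood weights, assumed to sum
               to 1 along the last axis (learned by a network in the paper).
    """
    h, w, c = src.shape
    r = k // 2
    padded = np.pad(src, ((r, r), (r, r), (0, 0)), mode="edge")
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    xs_src = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    out = np.zeros((h, w, c), dtype=np.float64)
    idx = 0
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            # Accumulate each neighbor of the shifted source position,
            # scaled by its learned (here: supplied) weight.
            out += weights[..., idx:idx + 1] * padded[ys + dy + r, xs_src + dx + r]
            idx += 1
    return out
```

With all weight placed on the central sample, this reduces to the plain disparity warp sketched after the abstract above; spreading weight over the neighborhood lets the fusion tolerate small disparity errors near object boundaries.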
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.