One Shot 3D Photography
- URL: http://arxiv.org/abs/2008.12298v2
- Date: Tue, 1 Sep 2020 14:52:55 GMT
- Title: One Shot 3D Photography
- Authors: Johannes Kopf, Kevin Matzen, Suhib Alsisan, Ocean Quigley, Francis Ge,
Yangming Chong, Josh Patterson, Jan-Michael Frahm, Shu Wu, Matthew Yu,
Peizhao Zhang, Zijian He, Peter Vajda, Ayush Saraf, Michael Cohen
- Abstract summary: We present an end-to-end system for creating and viewing 3D photos.
Our 3D photos are captured in a single shot and processed directly on a mobile device.
- Score: 40.83662583097118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D photography is a new medium that allows viewers to more fully experience a
captured moment. In this work, we refer to a 3D photo as one that displays
parallax induced by moving the viewpoint (as opposed to a stereo pair with a
fixed viewpoint). 3D photos are static in time, like traditional photos, but
are displayed with interactive parallax on mobile or desktop screens, as well
as on Virtual Reality devices, where viewing also includes stereo. We
present an end-to-end system for creating and viewing 3D photos, and the
algorithmic and design choices therein. Our 3D photos are captured in a single
shot and processed directly on a mobile device. The method starts by estimating
depth from the 2D input image using a new monocular depth estimation network
that is optimized for mobile devices. It performs competitively with the
state of the art, but has lower latency and peak memory consumption and uses an
order of magnitude fewer parameters. The resulting depth is lifted to a layered
depth image, and new geometry is synthesized in parallax regions. We synthesize
color texture and structures in the parallax regions as well, using an
inpainting network, also optimized for mobile devices, on the LDI directly.
Finally, we convert the result into a mesh-based representation that can be
efficiently transmitted and rendered even on low-end devices and over poor
network connections. Altogether, the processing takes just a few seconds on a
mobile device, and the result can be instantly viewed and shared. We perform
extensive quantitative evaluation to validate our system and compare its new
components against the current state-of-the-art.
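To make the pipeline described in the abstract more concrete, the following is a minimal, hypothetical sketch of its stages (monocular depth estimation, lifting to a layered depth image, LDI inpainting, and conversion to a mesh). All function names and the simplified LDI layout are illustrative assumptions, not the paper's actual on-device implementation.

```python
# Hypothetical sketch of the 3D photo pipeline summarized in the abstract.
# Function names and the LDI layout are illustrative only; the paper's
# mobile-optimized networks are not reproduced here.
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class LDIPixel:
    """One sample along a camera ray: color plus depth."""
    color: np.ndarray  # RGB, shape (3,)
    depth: float


@dataclass
class LayeredDepthImage:
    """Pixel grid where each location holds an ordered list of depth samples."""
    height: int
    width: int
    samples: List[List[List[LDIPixel]]]


def estimate_depth(image: np.ndarray) -> np.ndarray:
    """Stand-in for the mobile monocular depth network (constant depth here)."""
    return np.ones(image.shape[:2], dtype=np.float32)


def lift_to_ldi(image: np.ndarray, depth: np.ndarray) -> LayeredDepthImage:
    """Lift the RGB-D input into an LDI with one sample per pixel."""
    h, w = depth.shape
    samples = [[[LDIPixel(image[y, x].astype(np.float32), float(depth[y, x]))]
                for x in range(w)] for y in range(h)]
    return LayeredDepthImage(h, w, samples)


def inpaint_ldi(ldi: LayeredDepthImage) -> LayeredDepthImage:
    """Stand-in for LDI inpainting: a real model would add hidden-surface
    color/depth samples behind depth discontinuities (parallax regions)."""
    return ldi  # no-op in this sketch


def ldi_to_mesh(ldi: LayeredDepthImage) -> List[tuple]:
    """Stand-in for meshing: collect LDI samples as vertices; the real system
    builds a textured triangle mesh that is cheap to transmit and render."""
    return [(x, y, px.depth)
            for y, row in enumerate(ldi.samples)
            for x, col in enumerate(row)
            for px in col]


if __name__ == "__main__":
    rgb = np.zeros((4, 4, 3), dtype=np.uint8)  # tiny dummy photo
    mesh = ldi_to_mesh(inpaint_ldi(lift_to_ldi(rgb, estimate_depth(rgb))))
    print(f"{len(mesh)} vertices from a {rgb.shape[1]}x{rgb.shape[0]} image")
```

In the actual system each stand-in above corresponds to a mobile-optimized network or geometry-processing step; the sketch only shows how data flows between the stages.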
Related papers
- BIP3D: Bridging 2D Images and 3D Perception for Embodied Intelligence [11.91274849875519]
We introduce a novel image-centric 3D perception model, BIP3D, to overcome the limitations of point-centric methods.
We leverage pre-trained 2D vision foundation models to enhance semantic understanding, and introduce a spatial enhancer module to improve spatial understanding.
In our experiments, BIP3D outperforms current state-of-the-art results on the EmbodiedScan benchmark, achieving improvements of 5.69% in the 3D detection task and 15.25% in the 3D visual grounding task.
arXiv Detail & Related papers (2024-11-22T11:35:42Z)
- SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views [36.02533658048349]
We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for sparse-view images.
SpaRP distills knowledge from 2D diffusion models and finetunes them to implicitly deduce the 3D spatial relationships between the sparse views.
It requires only about 20 seconds to produce a textured mesh and camera poses for the input views.
arXiv Detail & Related papers (2024-08-19T17:53:10Z)
- The More You See in 2D, the More You Perceive in 3D [32.578628729549145]
SAP3D is a system for 3D reconstruction and novel view synthesis from an arbitrary number of unposed images.
We show that as the number of input images increases, the performance of our approach improves.
arXiv Detail & Related papers (2024-04-04T17:59:40Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- 3D Moments from Near-Duplicate Photos [67.15199743223332]
3D Moments is a new computational photography effect.
We produce a video that smoothly interpolates the scene motion from the first photo to the second.
Our system produces photorealistic space-time videos with motion parallax and scene dynamics.
arXiv Detail & Related papers (2022-05-12T17:56:18Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
arXiv Detail & Related papers (2020-04-09T17:59:06Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2d detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.