One Ring to Rule Them All: a simple solution to multi-view
3D-Reconstruction of shapes with unknown BRDF via a small Recurrent ResNet
- URL: http://arxiv.org/abs/2104.05014v1
- Date: Sun, 11 Apr 2021 13:39:31 GMT
- Title: One Ring to Rule Them All: a simple solution to multi-view
3D-Reconstruction of shapes with unknown BRDF via a small Recurrent ResNet
- Authors: Ziang Cheng, Hongdong Li, Richard Hartley, Yinqiang Zheng, Imari Sato
- Abstract summary: This paper proposes a simple method which solves an open problem of multi-view 3D-Reconstruction for objects with unknown surface materials.
The object can have arbitrary (e.g. non-Lambertian), spatially-varying (or everywhere different) surface reflectances (svBRDF)
Our method naturally enables novel-view-synthesis, relighting, material retouching, and shape exchange without additional coding effort.
- Score: 96.11203962525443
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper proposes a simple method which solves an open problem of
multi-view 3D-Reconstruction for objects with unknown and generic surface
materials, imaged by a freely moving camera and a freely moving point light
source. The object can have arbitrary (e.g. non-Lambertian), spatially-varying
(or everywhere different) surface reflectances (svBRDF). Our solution consists
of two small-sized neural networks (dubbed the 'Shape-Net' and 'BRDF-Net'), each
having about 1,000 neurons, used to parameterize the unknown shape and unknown
svBRDF, respectively. Key to our method is a special network design (namely, a
ResNet with a global feedback or 'ring' connection), which has a provable
guarantee for finding a valid diffeomorphic shape parameterization. Although the
underlying problem is highly non-convex and hence impractical to solve with
traditional optimization techniques, our method converges reliably to
high-quality solutions, even without initialization. Extensive experiments
demonstrate the superiority of our method, and it naturally enables a wide
range of special-effect applications including novel-view-synthesis,
relighting, material retouching, and shape exchange without additional coding
effort. We encourage the reader to view our demo video for better
visualizations.
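The abstract's central architectural idea, a tiny ResNet whose output is fed back to its input through a global "ring" connection and iterated toward a fixed point, can be sketched as follows. This is a minimal numpy illustration under assumed details (block sizes, the damping factor, and the fixed iteration count are all hypothetical); it does not reproduce the paper's actual Shape-Net or its diffeomorphism guarantee:

```python
import numpy as np

rng = np.random.default_rng(0)

def residual_block(x, W1, W2):
    """One ResNet block: x + MLP(x), with a tanh nonlinearity."""
    return x + W2 @ np.tanh(W1 @ x)

def ring_resnet(u, weights, n_feedback=20):
    """Tiny ResNet with a global feedback ('ring') connection.

    u       -- 2D surface parameter (e.g. a UV coordinate), shape (2,)
    weights -- list of (W1, W2) pairs, one per residual block
    The network output is fed back as the next input (the 'ring'),
    iterated with damping toward a fixed point.
    """
    x = np.concatenate([u, np.zeros(1)])   # lift the UV coordinate to 3D
    for _ in range(n_feedback):
        y = x
        for W1, W2 in weights:
            y = residual_block(y, W1, W2)
        x = 0.5 * x + 0.5 * y              # damped global feedback
    return x                                # predicted 3D surface point

# Toy instance: 4 residual blocks of 3 -> 40 -> 3 MLPs (hypothetical sizes)
weights = [(0.1 * rng.standard_normal((40, 3)),
            0.1 * rng.standard_normal((3, 40))) for _ in range(4)]
p = ring_resnet(np.array([0.3, 0.7]), weights)
```

The damped update is one simple way to make the feedback loop converge; the paper's provable guarantee rests on a specific network design that this sketch does not attempt to capture.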
Related papers
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object
Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- $PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction [97.06927852165464]
Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.
We propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process.
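The conditional denoising diffusion process mentioned above can be illustrated with a generic reverse (DDPM) update applied to a point cloud. This is a hedged sketch: `ddpm_step`, the noise schedule, and the array shapes are illustrative assumptions, and the actual $PC^2$ model additionally conditions its noise predictor on camera projections of the input image:

```python
import numpy as np

def ddpm_step(x_t, eps_pred, t, betas, rng):
    """One reverse DDPM step for a noisy point cloud x_t of shape (N, 3).

    eps_pred -- noise predicted by the (conditioned) denoising network
    betas    -- noise schedule, shape (T,)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    # Posterior mean of x_{t-1} given x_t and the predicted noise
    coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t == 0:
        return mean                          # final step: no added noise
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)

# Toy usage: one denoising step on 64 random points (hypothetical sizes)
rng = np.random.default_rng(1)
betas = np.linspace(1e-4, 0.02, 100)
x_t = rng.standard_normal((64, 3))           # stand-in noisy point cloud
x_prev = ddpm_step(x_t, rng.standard_normal((64, 3)), 50, betas, rng)
```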
arXiv Detail & Related papers (2023-02-21T13:37:07Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
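The volume-rendering step described above (densities and colors predicted along each ray, then composited) can be sketched with the standard NeRF-style quadrature rule. `volume_render` and its toy inputs are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite color along one ray by standard volume-rendering quadrature.

    sigmas -- densities at N samples along the ray, shape (N,)
    colors -- RGB at those samples, shape (N, 3)
    deltas -- spacing between consecutive samples, shape (N,)
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)       # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)      # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])   # shift so T_1 = 1
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# A dense red sample in front of a green one: the composite is mostly red.
rgb = volume_render(np.array([50.0, 50.0]),
                    np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                    np.array([0.1, 0.1]))
```

In the paper the per-sample densities and colors come from an MLP conditioned on the learned 3D representation; here they are hard-coded to keep the quadrature itself visible.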
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves state-of-the-art face reconstruction results.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Point-Based Neural Rendering with Per-View Optimization [5.306819482496464]
We introduce a general approach that is initialized with MVS, but allows further optimization of scene properties in the space of input views.
A key element of our approach is our new differentiable point-based pipeline.
We use these elements together in our neural splatting algorithm, which outperforms all previous methods in both quality and speed in almost all scenes we tested.
arXiv Detail & Related papers (2021-09-06T11:19:31Z)
- Multi-view 3D Reconstruction of a Texture-less Smooth Surface of Unknown Generic Reflectance [86.05191217004415]
Multi-view reconstruction of texture-less objects with unknown surface reflectance is a challenging task.
This paper proposes a simple and robust solution to this problem based on a co-light scanner.
arXiv Detail & Related papers (2021-05-25T01:28:54Z)
- Recurrent Multi-view Alignment Network for Unsupervised Surface Registration [79.72086524370819]
Learning non-rigid registration in an end-to-end manner is challenging due to the inherent high degrees of freedom and the lack of labeled training data.
We propose to represent the non-rigid transformation with a point-wise combination of several rigid transformations.
We also introduce a differentiable loss function that measures the 3D shape similarity on the projected multi-view 2D depth images.
arXiv Detail & Related papers (2020-11-24T14:22:42Z)
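The point-wise combination of rigid transformations described in the entry above can be sketched as a per-point weighted blend of K rotations and translations. `blend_rigid` and its inputs are illustrative assumptions rather than the paper's exact formulation (which learns the weights end-to-end):

```python
import numpy as np

def blend_rigid(points, rotations, translations, weights):
    """Deform each point by a convex combination of K rigid transforms.

    points       -- (N, 3) input point cloud
    rotations    -- (K, 3, 3) rotation matrices
    translations -- (K, 3) translation vectors
    weights      -- (N, K) per-point blend weights, rows summing to 1
                    (softmax-normalized in a learned setting)
    """
    # (N, K, 3): each point under each of the K rigid transforms
    per_transform = np.einsum('kij,nj->nki', rotations, points) + translations
    # Blend the K candidates per point with that point's weights
    return np.einsum('nk,nki->ni', weights, per_transform)

# Toy usage: identity vs. a 90-degree rotation about the z-axis
pts = np.array([[1.0, 0.0, 0.0]])
rotations = np.stack([np.eye(3),
                      np.array([[0.0, -1.0, 0.0],
                                [1.0,  0.0, 0.0],
                                [0.0,  0.0, 1.0]])])
translations = np.zeros((2, 3))
out_id = blend_rigid(pts, rotations, translations, np.array([[1.0, 0.0]]))
out_rot = blend_rigid(pts, rotations, translations, np.array([[0.0, 1.0]]))
```

Blending the transformed points (rather than the transformation parameters) keeps the deformation differentiable and easy to supervise with the projected multi-view depth loss the paper describes.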
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.