SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary
Image collections
- URL: http://arxiv.org/abs/2205.15768v1
- Date: Tue, 31 May 2022 13:16:48 GMT
- Title: SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary
Image collections
- Authors: Mark Boss, Andreas Engelhardt, Abhishek Kar, Yuanzhen Li, Deqing Sun,
Jonathan T. Barron, Hendrik P. A. Lensch, Varun Jampani
- Abstract summary: Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics.
We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination.
Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR.
- Score: 49.3480550339732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inverse rendering of an object under entirely unknown capture conditions is a
fundamental challenge in computer vision and graphics. Neural approaches such
as NeRF have achieved photorealistic results on novel view synthesis, but they
require known camera poses. Solving this problem with unknown camera poses is
highly challenging as it requires joint optimization over shape, radiance, and
pose. This problem is exacerbated when the input images are captured in the
wild with varying backgrounds and illuminations. Standard pose estimation
techniques fail in such image collections in the wild due to very few estimated
correspondences across images. Furthermore, NeRF cannot relight a scene under
any illumination, as it operates on radiance (the product of reflectance and
illumination). We propose a joint optimization framework to estimate the shape,
BRDF, and per-image camera pose and illumination. Our method works on
in-the-wild online image collections of an object and produces relightable 3D
assets for several use-cases such as AR/VR. To our knowledge, our method is the
first to tackle this severely unconstrained task with minimal user interaction.
Project page: https://markboss.me/publication/2022-samurai/ Video:
https://youtu.be/LlYuGDjXp-8
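The factorization noted in the abstract can be made concrete with a schematic formulation (not the paper's exact notation or loss): NeRF models outgoing radiance, which already entangles reflectance and illumination, whereas the proposed inverse-rendering setup separates the two and jointly optimizes shape, BRDF, and per-image camera pose and illumination against the input images:

    \[
    L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i
    \]
    \[
    \min_{\theta_{\mathrm{shape}},\ \theta_{\mathrm{BRDF}},\ \{P_j\},\ \{E_j\}} \; \sum_j \big\| \mathcal{R}(\theta_{\mathrm{shape}}, \theta_{\mathrm{BRDF}}, P_j, E_j) - I_j \big\|^2
    \]

Here f_r is the BRDF, L_i the incident illumination, \mathcal{R} a differentiable renderer, and P_j, E_j the camera pose and environment illumination of input image I_j; all symbols are illustrative placeholders rather than the notation used in the paper.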
Related papers
- IllumiNeRF: 3D Relighting Without Inverse Rendering [25.642960820693947]
We show how to relight each input image using an image diffusion model conditioned on target environment lighting and estimated object geometry.
We reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting.
We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks.
arXiv Detail & Related papers (2024-06-10T17:59:59Z)
- GaNI: Global and Near Field Illumination Aware Neural Inverse Rendering [21.584362527926654]
GaNI can reconstruct geometry, albedo, and roughness parameters from images of a scene captured with co-located light and camera.
Existing inverse rendering techniques with a co-located light and camera focus only on single objects.
arXiv Detail & Related papers (2024-03-22T23:47:19Z)
- SHINOBI: Shape and Illumination using Neural Object Decomposition via BRDF Optimization In-the-wild [76.21063993398451]
Inverse rendering of an object based on unconstrained image collections is a long-standing challenge in computer vision and graphics.
We show that an implicit shape representation based on a multi-resolution hash encoding enables faster and more robust shape reconstruction (a minimal sketch of such an encoding appears below this list).
Our method is class-agnostic and works on in-the-wild image collections of objects to produce relightable 3D assets.
arXiv Detail & Related papers (2024-01-18T18:01:19Z)
- NoPe-NeRF: Optimising Neural Radiance Field with No Pose Prior [22.579857008706206]
Training a Neural Radiance Field (NeRF) without pre-computed camera poses is challenging.
Recent advances in this direction demonstrate the possibility of jointly optimising a NeRF and camera poses in forward-facing scenes.
We tackle this challenging problem by incorporating undistorted monocular depth priors.
These priors are generated by correcting scale and shift parameters during training, with which we are then able to constrain the relative poses between consecutive frames (see the depth-alignment sketch below this list).
arXiv Detail & Related papers (2022-12-14T18:16:41Z)
- SPARF: Neural Radiance Fields from Sparse and Noisy Poses [58.528358231885846]
We introduce Sparse Pose Adjusting Radiance Field (SPARF) to address the challenge of novel-view synthesis from sparse input views with noisy camera poses.
Our approach exploits multi-view geometry constraints in order to jointly learn the NeRF and refine the camera poses.
arXiv Detail & Related papers (2022-11-21T18:57:47Z)
- Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition [50.94535765549819]
Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics.
We propose a novel reflectance decomposition network that can estimate shape, BRDF, and per-image illumination.
Our decompositions can result in considerably better BRDF and light estimates enabling more accurate novel view-synthesis and relighting.
arXiv Detail & Related papers (2021-10-27T12:17:47Z)
- PhotoApp: Photorealistic Appearance Editing of Head Portraits [97.23638022484153]
We present an approach for high-quality intuitive editing of the camera viewpoint and scene illumination in a portrait image.
Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages.
We design a supervised learning problem in the latent space of StyleGAN.
This combines the best of supervised learning and generative adversarial modeling.
arXiv Detail & Related papers (2021-03-13T08:59:49Z)
- NeRD: Neural Reflectance Decomposition from Image Collections [50.945357655498185]
NeRD achieves the decomposition of a scene into shape, reflectance, and illumination by introducing physically-based rendering to neural radiance fields.
Even challenging non-Lambertian reflectances, complex geometry, and unknown illumination can be decomposed to high-quality models.
arXiv Detail & Related papers (2020-12-07T18:45:57Z)
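The SHINOBI entry above points to a multi-resolution hash encoding as the implicit shape representation. The sketch below shows, under stated assumptions, the simplest form of such an encoding (Instant-NGP style nearest-corner lookup; the trilinear blending and MLP head used in practice are omitted). All class and parameter names are hypothetical, for illustration only, and are not code from SHINOBI or SAMURAI.

    # Minimal multi-resolution hash encoding sketch (illustrative only).
    import numpy as np

    class HashEncoding:
        def __init__(self, n_levels=8, table_size=2**14, n_features=2,
                     base_res=16, growth=1.5, seed=0):
            rng = np.random.default_rng(seed)
            # One small learnable feature table per resolution level.
            self.tables = [rng.normal(0.0, 1e-4, (table_size, n_features))
                           for _ in range(n_levels)]
            self.resolutions = [int(base_res * growth ** l) for l in range(n_levels)]
            self.table_size = table_size
            self.primes = np.array([1, 2654435761, 805459861], dtype=np.uint64)

        def _hash(self, grid_idx):
            # Spatial hash: XOR of coordinate-wise products with large primes.
            h = grid_idx.astype(np.uint64) * self.primes
            return (h[:, 0] ^ h[:, 1] ^ h[:, 2]) % self.table_size

        def encode(self, x):
            # x: (N, 3) points in [0, 1]^3 -> concatenated per-level features.
            feats = []
            for table, res in zip(self.tables, self.resolutions):
                idx = np.floor(x * res).astype(np.int64)  # integer grid corner
                feats.append(table[self._hash(idx)])
            return np.concatenate(feats, axis=-1)

    enc = HashEncoding()
    print(enc.encode(np.random.rand(4, 3)).shape)  # (4, n_levels * n_features) = (4, 16)

Coarse levels capture smooth global shape while fine levels add detail, which is part of why such hash-grid representations tend to reconstruct shape faster and more robustly, as the SHINOBI summary notes.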
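The NoPe-NeRF entry mentions monocular depth priors whose scale and shift parameters are corrected during training. A common way to realize such a correction is a per-image least-squares affine fit of the monocular depth against a reference depth (for example, depth rendered by the radiance field). The sketch below illustrates that fit; the function name and the choice of reference are assumptions for illustration, not the paper's exact procedure.

    # Least-squares scale/shift alignment of a monocular depth map (illustrative).
    import numpy as np

    def align_depth(mono_depth, ref_depth, mask=None):
        """Return (s, t) minimizing || s * mono_depth + t - ref_depth ||^2."""
        d, r = mono_depth.ravel(), ref_depth.ravel()
        if mask is not None:
            keep = mask.ravel()
            d, r = d[keep], r[keep]
        A = np.stack([d, np.ones_like(d)], axis=1)      # design matrix [depth, 1]
        (s, t), *_ = np.linalg.lstsq(A, r, rcond=None)
        return s, t

    # Toy check: recover a known scale and shift from noisy data.
    mono = np.random.rand(64, 64) + 0.5
    ref = 2.0 * mono + 0.3 + 0.01 * np.random.randn(64, 64)
    s, t = align_depth(mono, ref)
    print(round(float(s), 2), round(float(t), 2))       # approx. 2.0 and 0.3

Once s and t are known, the corrected depth s * mono_depth + t can supervise the geometry and constrain the relative poses between consecutive frames, as the summary above describes.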
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.