MaterialGAN: Reflectance Capture using a Generative SVBRDF Model
- URL: http://arxiv.org/abs/2010.00114v1
- Date: Wed, 30 Sep 2020 21:33:00 GMT
- Title: MaterialGAN: Reflectance Capture using a Generative SVBRDF Model
- Authors: Yu Guo, Cameron Smith, Miloš Hašan, Kalyan Sunkavalli and
Shuang Zhao
- Abstract summary: We present MaterialGAN, a deep generative convolutional network based on StyleGAN2.
We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework.
We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone.
- Score: 33.578080406338266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the problem of reconstructing spatially-varying BRDFs from a small
set of image measurements. This is a fundamentally under-constrained problem,
and previous work has relied on using various regularization priors or on
capturing many images to produce plausible results. In this work, we present
MaterialGAN, a deep generative convolutional network based on StyleGAN2,
trained to synthesize realistic SVBRDF parameter maps. We show that MaterialGAN
can be used as a powerful material prior in an inverse rendering framework: we
optimize in its latent representation to generate material maps that match the
appearance of the captured images when rendered. We demonstrate this framework
on the task of reconstructing SVBRDFs from images captured under flash
illumination using a hand-held mobile phone. Our method succeeds in producing
plausible material maps that accurately reproduce the target images, and
outperforms previous state-of-the-art material capture methods in evaluations
on both synthetic and real data. Furthermore, our GAN-based latent space allows
for high-level semantic material editing operations such as generating material
variations and material morphing.
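The latent-space optimization loop the abstract describes can be sketched as follows. This is a toy, hedged illustration only: the paper optimizes StyleGAN2 latent codes through a differentiable renderer, while here the "generator plus renderer" is collapsed into a single hypothetical linear map `A` so the loop stays self-contained and the gradient is analytic.

```python
import numpy as np

# Toy sketch of MaterialGAN-style latent optimization. All names here are
# hypothetical stand-ins: the real method optimizes StyleGAN2 latents
# through a differentiable renderer with perceptual loss terms.
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 4))      # stand-in: latent code -> rendered pixels

def render(z):
    return A @ z                  # "generator + renderer" in one linear map

z_true = rng.normal(size=4)
target = render(z_true)           # the "captured photographs" to match

# Gradient descent on the rendering loss ||render(z) - target||^2,
# optimizing only the latent code z (the generator stays fixed).
z = np.zeros(4)
lr = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # step size from spectral norm
for _ in range(3000):
    grad = 2.0 * A.T @ (render(z) - target)
    z -= lr * grad

loss = float(np.mean((render(z) - target) ** 2))
```

Because the generator constrains `z` to a low-dimensional latent space, the optimization recovers a code whose rendering matches the target; the same structure is what makes the GAN a useful prior in the under-constrained real problem.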
Related papers
- MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors [67.74705555889336]
We introduce MaterialFusion, an enhanced conventional 3D inverse rendering pipeline that incorporates a 2D prior on texture and material properties.
We present StableMaterial, a 2D diffusion model prior that refines multi-lit data to estimate the most likely albedo and material from given input appearances.
We validate MaterialFusion's relighting performance on 4 datasets of synthetic and real objects under diverse illumination conditions.
arXiv Detail & Related papers (2024-09-23T17:59:06Z) - MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets [63.284244910964475]
We propose a 3D asset material generation framework to infer underlying material from the 2D semantic prior.
Based on such a prior model, we devise a mechanism to parse material in 3D space.
arXiv Detail & Related papers (2024-04-22T07:00:17Z) - IntrinsicAnything: Learning Diffusion Priors for Inverse Rendering Under Unknown Illumination [37.96484120807323]
This paper aims to recover object materials from posed images captured under an unknown static lighting condition.
We learn the material prior with a generative model for regularizing the optimization process.
Experiments on real-world and synthetic datasets demonstrate that our approach achieves state-of-the-art performance on material recovery.
arXiv Detail & Related papers (2024-04-17T17:45:08Z) - Intrinsic Image Diffusion for Indoor Single-view Material Estimation [55.276815106443976]
We present Intrinsic Image Diffusion, a generative model for appearance decomposition of indoor scenes.
Given a single input view, we sample multiple possible material explanations represented as albedo, roughness, and metallic maps.
Our method produces significantly sharper, more consistent, and more detailed materials, outperforming state-of-the-art methods by 1.5 dB in PSNR and by a 45% better FID score on albedo prediction.
arXiv Detail & Related papers (2023-12-19T15:56:19Z) - Material Palette: Extraction of Materials from a Single Image [19.410479434979493]
We propose a method to extract physically-based rendering (PBR) materials from a single real-world image.
We map regions of the image to material concepts using a diffusion model, which allows the sampling of texture images resembling each material in the scene.
Second, we benefit from a separate network to decompose the generated textures into Spatially Varying BRDFs.
arXiv Detail & Related papers (2023-11-28T18:59:58Z) - In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to regularize the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
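The two-stage idea above (an encoder proposes a starting code, then optimization refines it while staying in-domain) can be sketched numerically. This is a hypothetical linear stand-in, not the paper's method: `A` plays the pre-trained generator and `E` the domain-guided encoder.

```python
import numpy as np

# Toy sketch of encoder-initialized GAN inversion. `A` and `E` are
# hypothetical linear stand-ins for the pre-trained generator and the
# domain-guided encoder; the real method uses deep networks.
rng = np.random.default_rng(1)
A = rng.normal(size=(16, 4))          # "generator": latent code -> image
z_true = rng.normal(size=4)
x = A @ z_true                        # image to invert

# Stage 1: the encoder gives a coarse, in-domain starting point
# (modeled as an imperfect, noisy inverse of the generator).
E = np.linalg.pinv(A) + 0.05 * rng.normal(size=(4, 16))
z = E @ x
z0 = z.copy()
err_init = float(np.mean((A @ z - x) ** 2))

# Stage 2: refine by minimizing ||A z - x||^2 + lam * ||z - z0||^2,
# trading reconstruction quality against staying near the encoder's code
# (a stand-in for the paper's domain regularization).
lam = 1e-3
lr = 1.0 / (2.0 * (np.linalg.norm(A, 2) ** 2 + lam))
for _ in range(1000):
    grad = 2.0 * A.T @ (A @ z - x) + 2.0 * lam * (z - z0)
    z -= lr * grad

err_final = float(np.mean((A @ z - x) ** 2))
```

The regularization weight `lam` is the knob behind the reconstruction-vs-editability trade-off the summary mentions: larger values keep the code closer to the encoder's in-domain estimate at the cost of reconstruction fidelity.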
arXiv Detail & Related papers (2023-09-25T08:42:06Z) - Diffuse Map Guiding Unsupervised Generative Adversarial Network for
SVBRDF Estimation [0.21756081703276003]
This paper presents a diffuse-map-guided material estimation method based on a Generative Adversarial Network (GAN).
This method can predict plausible SVBRDF maps with global features using only a few pictures taken with a mobile phone.
arXiv Detail & Related papers (2022-05-24T10:32:27Z) - Ground material classification for UAV-based photogrammetric 3D data:
A 2D-3D Hybrid Approach [1.3359609092684614]
In recent years, photogrammetry has been widely used in many areas to create 3D virtual data representing the physical environment.
These cutting-edge technologies have caught the US Army and Navy's attention for the purpose of rapid 3D battlefield reconstruction, virtual training, and simulations.
arXiv Detail & Related papers (2021-09-24T22:29:26Z) - Controllable Person Image Synthesis with Spatially-Adaptive Warped
Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
arXiv Detail & Related papers (2021-05-31T07:07:44Z) - Two-shot Spatially-varying BRDF and Shape Estimation [89.29020624201708]
We propose a novel deep learning architecture with a stage-wise estimation of shape and SVBRDF.
We create a large-scale synthetic training dataset with domain-randomized geometry and realistic materials.
Experiments on both synthetic and real-world datasets show that our network trained on a synthetic dataset can generalize well to real-world images.
arXiv Detail & Related papers (2020-04-01T12:56:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.