Diffuse Map Guiding Unsupervised Generative Adversarial Network for
SVBRDF Estimation
- URL: http://arxiv.org/abs/2205.11951v2
- Date: Wed, 25 May 2022 11:58:26 GMT
- Title: Diffuse Map Guiding Unsupervised Generative Adversarial Network for
SVBRDF Estimation
- Authors: Zhiyao Luo, Hongnan Chen
- Abstract summary: This paper presents a diffuse map guiding material estimation
method based on a Generative Adversarial Network (GAN). The method can predict
plausible SVBRDF maps with global features using only a few pictures taken with
a mobile phone.
- Score: 0.21756081703276003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing real-world materials has always been a difficult
problem in computer graphics, and reconstructing them accurately is critical
for realistic rendering. Traditionally, materials in computer graphics are
authored by an artist, mapped onto a geometric model by coordinate
transformation, and finally rendered with a rendering engine to obtain a
realistic appearance. For opaque objects, the industry commonly models
materials with physically based bidirectional reflectance distribution
function (BRDF) models; the most widely used are the Cook-Torrance BRDF and
the Disney BRDF. In this paper, we use the Cook-Torrance model to reconstruct
materials. The SVBRDF material parameters are the Normal, Diffuse, Specular
and Roughness maps. This paper presents a diffuse map guiding material
estimation method based on a Generative Adversarial Network (GAN). The method
can predict plausible SVBRDF maps with global features using only a few
pictures taken with a mobile phone. The main contributions of this paper are:
1) we preprocess a small number of input pictures to produce a large number of
non-repeating pictures for training, which reduces over-fitting; 2) we use a
novel method to directly obtain a guessed diffuse map with global
characteristics, which provides more prior information for the training
process; 3) we improve the network architecture of the generator so that it
generates fine details in the normal maps and is less likely to produce
over-flat normal maps. The method can obtain prior knowledge without training
on a dataset, which greatly reduces the difficulty of material reconstruction
and saves the considerable time otherwise spent generating and calibrating
datasets.
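The abstract names the Cook-Torrance microfacet model as the rendering model
behind the four SVBRDF maps. As a reference point, here is a minimal NumPy
sketch of evaluating a Cook-Torrance BRDF at one shading point; the GGX
distribution, Schlick Fresnel and Smith geometry terms are assumptions made
for illustration, since the paper does not spell out its exact variant.

```python
import numpy as np

def cook_torrance(n, l, v, diffuse, specular, roughness):
    """Evaluate a Cook-Torrance BRDF at one shading point.

    n, l, v  : unit vectors (surface normal, light dir, view dir)
    diffuse  : RGB value sampled from the Diffuse map
    specular : RGB value sampled from the Specular map
    roughness: scalar sampled from the Roughness map
    Term choices (GGX D, Schlick F, Smith G) are common defaults,
    not necessarily the paper's exact formulation.
    """
    h = (l + v) / np.linalg.norm(l + v)            # half vector
    ndl = max(float(np.dot(n, l)), 1e-6)
    ndv = max(float(np.dot(n, v)), 1e-6)
    ndh = max(float(np.dot(n, h)), 0.0)
    vdh = max(float(np.dot(v, h)), 0.0)

    alpha = max(roughness, 1e-3) ** 2              # perceptual -> slope space
    a2 = alpha * alpha
    d = a2 / (np.pi * (ndh * ndh * (a2 - 1.0) + 1.0) ** 2)   # GGX NDF
    f = specular + (1.0 - specular) * (1.0 - vdh) ** 5       # Schlick Fresnel
    k = alpha / 2.0                                          # Smith-Schlick G
    g = (ndl / (ndl * (1.0 - k) + k)) * (ndv / (ndv * (1.0 - k) + k))

    spec = d * f * g / (4.0 * ndl * ndv)
    return diffuse / np.pi + spec                  # Lambertian + specular lobe
```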
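Contribution 1) turns a few phone photos into many non-repeating training
crops. Below is a minimal sketch of that style of augmentation, assuming
random crops, 90-degree rotations and flips; the paper's exact transform set
is not stated.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(photo: np.ndarray, n_samples: int, crop: int = 256) -> list:
    """Cut n_samples randomized patches from one HxWx3 uint8 photo.

    Random crop position, 90-degree rotation and horizontal flip make
    exact repeats unlikely, which helps reduce over-fitting when only a
    handful of input pictures are available.
    """
    h, w = photo.shape[:2]
    patches = []
    for _ in range(n_samples):
        y = int(rng.integers(0, h - crop + 1))
        x = int(rng.integers(0, w - crop + 1))
        patch = photo[y:y + crop, x:x + crop]
        patch = np.rot90(patch, k=int(rng.integers(0, 4)))  # random rotation
        if rng.random() < 0.5:
            patch = patch[:, ::-1]                          # horizontal flip
        patches.append(patch.copy())
    return patches
```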
Related papers
- GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement [51.97726804507328]
We propose a novel approach for 3D mesh reconstruction from multi-view images.
Our method takes inspiration from large reconstruction models that use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images.
arXiv Detail & Related papers (2024-06-09T05:19:24Z) - Data-Independent Operator: A Training-Free Artifact Representation
Extractor for Generalizable Deepfake Detection [105.9932053078449]
In this work, we show that, on the contrary, a small and training-free filter is sufficient to capture more general artifact representations.
Because it is unbiased toward both the training and test sources, we define it as the Data-Independent Operator (DIO) and achieve appealing improvements on unseen sources.
Our detector achieves a remarkable improvement of 13.3%, establishing a new state-of-the-art performance.
arXiv Detail & Related papers (2024-03-11T15:22:28Z) - Not All Image Regions Matter: Masked Vector Quantization for
Autoregressive Image Generation [78.13793505707952]
Existing autoregressive models follow the two-stage generation paradigm that first learns a codebook in the latent space for image reconstruction and then completes the image generation autoregressively based on the learned codebook.
We propose a novel two-stage framework, consisting of a Masked Quantization VAE (MQ-VAE) and a Stackformer, to relieve the model from modeling redundancy.
arXiv Detail & Related papers (2023-05-23T02:15:53Z) - TMO: Textured Mesh Acquisition of Objects with a Mobile Device by using
Differentiable Rendering [54.35405028643051]
We present a new pipeline for acquiring a textured mesh in the wild with a single smartphone.
Our method first introduces an RGBD-aided structure from motion, which can yield filtered depth maps.
We adopt a neural implicit surface reconstruction method, which yields a high-quality mesh.
arXiv Detail & Related papers (2023-03-27T10:07:52Z) - Shape, Pose, and Appearance from a Single Image via Bootstrapped
Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z) - Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction, to study the model generalization on unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z) - PS-NeRF: Neural Inverse Rendering for Multi-view Photometric Stereo [22.42916940712357]
We present a neural inverse rendering method for MVPS based on implicit representation.
Our method achieves far more accurate shape reconstruction than existing MVPS and neural rendering methods.
arXiv Detail & Related papers (2022-07-23T03:55:18Z) - SVBRDF Recovery From a Single Image With Highlights using a Pretrained
Generative Adversarial Network [25.14140648820334]
In this paper, we use an unsupervised generative adversarial network (GAN) to recover SVBRDF maps with a single image as input.
For efficiency, we train the network in two stages: reusing a trained model to initialize the SVBRDFs and fine-tuning it based on the input image.
Our method generates high-quality SVBRDF maps from a single input photograph and provides more vivid rendering results than previous work.
arXiv Detail & Related papers (2021-10-29T10:39:06Z) - Ground material classification for UAV-based photogrammetric 3D data:
A 2D-3D Hybrid Approach [1.3359609092684614]
In recent years, photogrammetry has been widely used in many areas to create 3D virtual data representing the physical environment.
These cutting-edge technologies have caught the US Army and Navy's attention for the purpose of rapid 3D battlefield reconstruction, virtual training, and simulations.
arXiv Detail & Related papers (2021-09-24T22:29:26Z) - Shape From Tracing: Towards Reconstructing 3D Object Geometry and SVBRDF
Material from Images via Differentiable Path Tracing [16.975014467319443]
Differentiable path tracing is an appealing framework as it can reproduce complex appearance effects.
We show how to use differentiable ray tracing to refine an initial coarse mesh and per-mesh-facet material representation.
We also show how to refine initial reconstructions of real-world objects in unconstrained environments.
arXiv Detail & Related papers (2020-12-06T18:55:35Z) - MaterialGAN: Reflectance Capture using a Generative SVBRDF Model [33.578080406338266]
We present MaterialGAN, a deep generative convolutional network based on StyleGAN2.
We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework.
We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone.
arXiv Detail & Related papers (2020-09-30T21:33:00Z)