PhotoMat: A Material Generator Learned from Single Flash Photos
- URL: http://arxiv.org/abs/2305.12296v2
- Date: Tue, 23 May 2023 17:26:27 GMT
- Title: PhotoMat: A Material Generator Learned from Single Flash Photos
- Authors: Xilong Zhou, Miloš Hašan, Valentin Deschaintre, Paul Guerrero, Yannick Hold-Geoffroy, Kalyan Sunkavalli, Nima Khademi Kalantari
- Abstract summary: Previous generative models for materials have been trained exclusively on synthetic data.
We propose PhotoMat: the first material generator trained exclusively on real photos of material samples captured using a cell phone camera with flash.
We show that our generated materials have better visual quality than previous material generators trained on synthetic data.
- Score: 37.42765147463852
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Authoring high-quality digital materials is key to realism in 3D rendering.
Previous generative models for materials have been trained exclusively on
synthetic data; such data is limited in availability and has a visual gap to
real materials. We circumvent this limitation by proposing PhotoMat: the first
material generator trained exclusively on real photos of material samples
captured using a cell phone camera with flash. Supervision on individual
material maps is not available in this setting. Instead, we train a generator
for a neural material representation that is rendered with a learned relighting
module to create arbitrarily lit RGB images; these are compared against real
photos using a discriminator. We then train a material maps estimator to decode
material reflectance properties from the neural material representation. We
train PhotoMat with a new dataset of 12,000 material photos captured with
handheld phone cameras under flash lighting. We demonstrate that our generated
materials have better visual quality than previous material generators trained
on synthetic data. Moreover, we can fit analytical material models to closely
match these generated neural materials, thus allowing for further editing and
use in 3D rendering.
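The abstract outlines an adversarial training scheme: a generator produces a neural material representation, a learned relighting module renders it under a sampled flash position, and a discriminator scores the rendering against real flash photos. Below is a minimal PyTorch sketch of one such training step; the module interfaces, the light-sampling scheme, and the non-saturating GAN loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical modules standing in for the paper's components:
#   generator: latent z -> neural material representation (feature grid)
#   relighter: (neural material, light position) -> RGB rendering
#   disc:      RGB image -> real/fake logit
# Their architectures are not specified by the abstract; any reasonable
# choices work for the purpose of this sketch.

def train_step(generator, relighter, disc, real_photos,
               g_opt, d_opt, latent_dim=512, device="cuda"):
    """One adversarial step: render generated materials under random
    flash positions and compare against real flash photos."""
    batch = real_photos.size(0)

    # Sample latents and generate neural material feature grids.
    z = torch.randn(batch, latent_dim, device=device)
    neural_material = generator(z)            # (B, C, H, W) feature grid

    # Sample a random flash position per image; the real captures are
    # handheld phone photos, so flash positions naturally vary.
    light_pos = torch.rand(batch, 3, device=device) * 2 - 1

    fake_rgb = relighter(neural_material, light_pos)  # (B, 3, H, W)

    # --- Discriminator update (non-saturating GAN loss) ---
    d_opt.zero_grad()
    d_real = disc(real_photos)
    d_fake = disc(fake_rgb.detach())
    d_loss = (F.softplus(-d_real) + F.softplus(d_fake)).mean()
    d_loss.backward()
    d_opt.step()

    # --- Generator (and relighting module) update ---
    g_opt.zero_grad()
    g_loss = F.softplus(-disc(fake_rgb)).mean()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

As the abstract notes, a separate material maps estimator would then be trained to decode reflectance maps from the learned neural material representation.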
Related papers
- MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors [67.74705555889336]
We introduce MaterialFusion, an enhanced conventional 3D inverse rendering pipeline that incorporates a 2D prior on texture and material properties.
We present StableMaterial, a 2D diffusion model prior that refines multi-lit data to estimate the most likely albedo and material from given input appearances.
We validate MaterialFusion's relighting performance on 4 datasets of synthetic and real objects under diverse illumination conditions.
arXiv Detail & Related papers (2024-09-23T17:59:06Z)
- OpenMaterial: A Comprehensive Dataset of Complex Materials for 3D Reconstruction [54.706361479680055]
We introduce the OpenMaterial dataset, comprising 1001 objects made of 295 distinct materials.
OpenMaterial provides comprehensive annotations, including 3D shape, material type, camera pose, depth, and object mask.
It stands as the first large-scale dataset enabling quantitative evaluations of existing algorithms on objects with diverse and challenging materials.
arXiv Detail & Related papers (2024-06-13T07:46:17Z)
- MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets [63.284244910964475]
We propose a 3D asset material generation framework to infer the underlying material from a 2D semantic prior.
Based on this prior model, we devise a mechanism to parse materials in 3D space.
arXiv Detail & Related papers (2024-04-22T07:00:17Z)
- ZeST: Zero-Shot Material Transfer from a Single Image [59.714441587735614]
ZeST is a method for zero-shot material transfer to an object in the input image given a material exemplar image.
We show the application of ZeST to perform multiple edits and robust material assignment under different illuminations.
arXiv Detail & Related papers (2024-04-09T16:15:03Z)
- Alchemist: Parametric Control of Material Properties with Diffusion Models [51.63031820280475]
Our method capitalizes on the generative prior of text-to-image models known for photorealism.
We show the potential application of our model to material-edited NeRFs.
arXiv Detail & Related papers (2023-12-05T18:58:26Z)
- Material Palette: Extraction of Materials from a Single Image [19.410479434979493]
We propose a method to extract physically-based rendering (PBR) materials from a single real-world image.
First, we map regions of the image to material concepts using a diffusion model, which allows sampling texture images resembling each material in the scene.
Second, a separate network decomposes the generated textures into Spatially Varying BRDFs.
arXiv Detail & Related papers (2023-11-28T18:59:58Z)
- MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR [29.96046140529936]
We propose Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR (MATLABER).
We train this auto-encoder with large-scale real-world BRDF collections and ensure the smoothness of its latent space.
Our approach demonstrates superiority over existing methods in generating realistic and coherent object materials.
arXiv Detail & Related papers (2023-08-18T03:40:38Z)
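The MATLABER summary above describes an auto-encoder over real-world BRDF data with a smooth latent space. A common way to enforce latent smoothness is a VAE-style KL regularizer; the sketch below illustrates that idea on per-point BRDF parameter vectors. The dimensions, architecture, and regularizer weight are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class LatentBRDFAutoEncoder(nn.Module):
    """Toy latent BRDF auto-encoder: compresses a BRDF parameter vector
    (e.g., albedo, roughness, specular) into a smooth low-dim latent.
    All dimensions and the KL regularizer are illustrative assumptions."""
    def __init__(self, brdf_dim=7, latent_dim=4, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(brdf_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),  # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, brdf_dim), nn.Sigmoid(),  # params in [0, 1]
        )

    def forward(self, brdf):
        mu, logvar = self.encoder(brdf).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.decoder(z), mu, logvar

def loss_fn(model, brdf):
    recon, mu, logvar = model(brdf)
    rec = (recon - brdf).pow(2).mean()
    # The KL term keeps the latent space smooth and well-covered,
    # which is what makes it usable as a material prior downstream.
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    return rec + 1e-3 * kl
```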
- One-shot recognition of any material anywhere using contrastive learning with physics-based rendering [0.0]
We present MatSim: a synthetic dataset, a benchmark, and a method for computer-vision-based recognition of similarities and transitions between materials and textures.
The visual recognition of materials is essential to everything from examining food while cooking to inspecting agriculture, chemistry, and industrial products.
arXiv Detail & Related papers (2022-12-01T16:49:53Z)
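MatSim above casts material recognition as similarity in a contrastively learned embedding space, so one-shot recognition reduces to nearest-neighbor matching against a single exemplar embedding. A minimal cosine-similarity matcher is sketched below; the encoder is assumed to be some image embedding network trained contrastively (e.g., with an InfoNCE loss) and is not taken from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def one_shot_match(encoder, exemplar, queries):
    """Score query image crops by cosine similarity to a single
    exemplar crop of the target material (nearest neighbor in the
    learned embedding space)."""
    e = F.normalize(encoder(exemplar.unsqueeze(0)), dim=-1)  # (1, D)
    q = F.normalize(encoder(queries), dim=-1)                # (N, D)
    return (q @ e.t()).squeeze(1)  # (N,), higher = more similar

# Usage sketch (names are hypothetical):
#   encoder = my_contrastive_net   # embedding network trained on MatSim
#   scores = one_shot_match(encoder, exemplar_crop, candidate_crops)
#   best = scores.argmax()         # most likely same-material crop
```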
- Diffuse Map Guiding Unsupervised Generative Adversarial Network for SVBRDF Estimation [0.21756081703276003]
This paper presents a diffuse-map-guided material estimation method based on a Generative Adversarial Network (GAN).
This method can predict plausible SVBRDF maps with global features using only a few pictures taken with a mobile phone.
arXiv Detail & Related papers (2022-05-24T10:32:27Z)
- MaterialGAN: Reflectance Capture using a Generative SVBRDF Model [33.578080406338266]
We present MaterialGAN, a deep generative convolutional network based on StyleGAN2.
We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework.
We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone.
arXiv Detail & Related papers (2020-09-30T21:33:00Z)
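Using MaterialGAN as a material prior in inverse rendering typically means optimizing its latent code so that the generated SVBRDF maps, rendered under the capture lighting, reproduce the observed flash photos. The loop below is a schematic of that idea; the generator and renderer interfaces (and the plain L2 loss) are placeholders, not the paper's exact formulation.

```python
import torch

def fit_material(generator, render_fn, captures, light_poses,
                 steps=500, lr=0.02, latent_dim=512, device="cuda"):
    """Inverse rendering with a generative material prior: optimize a
    latent code so the generator's SVBRDF maps, rendered under the
    capture lighting, match the observed flash photos."""
    w = torch.randn(1, latent_dim, device=device, requires_grad=True)
    opt = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        # Generator decodes the latent into SVBRDF maps; here assumed to
        # be a single tensor stacking albedo/normal/roughness/specular.
        maps = generator(w)
        # Differentiable rendering under each capture's flash position;
        # `render_fn` stands in for a microfacet BRDF renderer.
        loss = sum((render_fn(maps, lp) - img).pow(2).mean()
                   for img, lp in zip(captures, light_poses))
        loss.backward()
        opt.step()
    return generator(w).detach()
```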
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.