Material Palette: Extraction of Materials from a Single Image
- URL: http://arxiv.org/abs/2311.17060v1
- Date: Tue, 28 Nov 2023 18:59:58 GMT
- Title: Material Palette: Extraction of Materials from a Single Image
- Authors: Ivan Lopes and Fabio Pizzati and Raoul de Charette
- Score: 19.410479434979493
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a method to extract physically-based rendering
(PBR) materials from a single real-world image. We do so in two steps: first,
we map regions of the image to material concepts using a diffusion model, which
allows the sampling of texture images resembling each material in the scene.
Second, we benefit from a separate network to decompose the generated textures
into Spatially Varying BRDFs (SVBRDFs), providing us with materials ready to be
used in rendering applications. Our approach builds on existing synthetic
material libraries with SVBRDF ground truth, but also exploits a
diffusion-generated RGB texture dataset to allow generalization to new samples
using unsupervised domain adaptation (UDA). Our contributions are thoroughly
evaluated on synthetic and real-world datasets. We further demonstrate the
applicability of our method for editing 3D scenes with materials estimated from
real photographs. The code and models will be made open-source. Project page:
https://astra-vision.github.io/MaterialPalette/
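The abstract describes a two-step pipeline: regions are mapped to material concepts from which textures are sampled, then a second network decomposes each texture into SVBRDF maps. The sketch below is a hypothetical, runnable stand-in for that data flow only: `sample_texture` crops-and-tiles a masked region in place of the paper's diffusion-based concept sampling, and `decompose_svbrdf` emits trivial placeholder maps in place of the paper's UDA-trained decomposition network.

```python
import numpy as np

def sample_texture(image, mask, size=64):
    """Step 1 stand-in: return a square texture for a masked region.
    (The paper instead inverts the region into a diffusion 'material
    concept' and samples new texture images resembling it.)"""
    ys, xs = np.nonzero(mask)
    crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    # Tile the crop up to the requested texture size.
    reps = (-(-size // crop.shape[0]), -(-size // crop.shape[1]), 1)
    return np.tile(crop, reps)[:size, :size]

def decompose_svbrdf(texture):
    """Step 2 stand-in: split a texture into SVBRDF maps.
    (The paper uses a network trained on synthetic SVBRDF data plus
    diffusion-generated RGB textures via unsupervised domain adaptation.)"""
    albedo = texture.astype(np.float32) / 255.0
    roughness = albedo.mean(axis=-1, keepdims=True)        # placeholder roughness
    normals = np.zeros_like(albedo)
    normals[..., 2] = 1.0                                  # flat placeholder normals
    return {"albedo": albedo, "roughness": roughness, "normals": normals}

def material_palette(image, masks):
    """Full pipeline sketch: one SVBRDF material per image region."""
    return [decompose_svbrdf(sample_texture(image, m)) for m in masks]

# Toy input: one image with a single rectangular region mask.
img = np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8)
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 8:24] = True
mats = material_palette(img, [mask])
print(mats[0]["albedo"].shape, mats[0]["roughness"].shape)
```

Each returned dictionary mirrors the maps a renderer would consume (albedo, roughness, normals); the real method's value lies entirely in the two learned components replaced by stand-ins here.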
Related papers
- MaterialFusion: Enhancing Inverse Rendering with Material Diffusion Priors [67.74705555889336]
We introduce MaterialFusion, an enhanced conventional 3D inverse rendering pipeline that incorporates a 2D prior on texture and material properties.
We present StableMaterial, a 2D diffusion model prior that refines multi-lit data to estimate the most likely albedo and material from given input appearances.
We validate MaterialFusion's relighting performance on 4 datasets of synthetic and real objects under diverse illumination conditions.
arXiv Detail & Related papers (2024-09-23T17:59:06Z)
- Vastextures: Vast repository of textures and PBR materials extracted from real-world images using unsupervised methods [0.6993026261767287]
Vastextures is a repository of 500,000 textures and PBR materials extracted from real-world images using an unsupervised process.
The repository is composed of 2D textures cropped from natural images and SVBRDF/PBR materials generated from these textures.
arXiv Detail & Related papers (2024-06-24T21:36:01Z)
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
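A score aggregation strategy of this kind typically denoises overlapping tiles at the model's native resolution and averages the overlapping predictions into one large canvas. The snippet below is a minimal, hypothetical illustration of that averaging step; `denoise_tile` is a dummy stand-in for one denoising call of a real diffusion model.

```python
import numpy as np

def denoise_tile(tile):
    # Stand-in for one denoising prediction of a diffusion model.
    return tile * 0.9

def aggregate_step(canvas, tile=8, stride=4):
    """Average per-tile denoising predictions over overlapping windows,
    so a fixed-resolution model can update an arbitrarily large canvas."""
    acc = np.zeros_like(canvas)
    weight = np.zeros_like(canvas)
    h, w = canvas.shape
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            acc[y:y + tile, x:x + tile] += denoise_tile(canvas[y:y + tile, x:x + tile])
            weight[y:y + tile, x:x + tile] += 1.0
    return acc / np.maximum(weight, 1.0)  # per-pixel average of predictions

canvas = np.ones((16, 16))   # "latent" twice the tile size in each dimension
out = aggregate_step(canvas)
print(out.shape)
```

Because every pixel is covered by at least one window, the averaged update is defined everywhere, and the overlap smooths seams between tiles.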
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- MaPa: Text-driven Photorealistic Material Painting for 3D Shapes [80.66880375862628]
This paper aims to generate materials for 3D meshes from text descriptions.
Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs.
Our framework supports high-quality rendering and provides substantial flexibility in editing.
arXiv Detail & Related papers (2024-04-26T17:54:38Z)
- MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets [63.284244910964475]
We propose a 3D asset material generation framework to infer the underlying material from 2D semantic priors.
Based on such a prior model, we devise a mechanism to parse material in 3D space.
arXiv Detail & Related papers (2024-04-22T07:00:17Z)
- ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis [49.28239918969784]
We introduce a texture-consistent back view synthesis module that could transfer the reference image content to the back view.
We also propose a visibility-aware patch consistency regularization for texture mapping and refinement combined with the synthesized back view texture.
arXiv Detail & Related papers (2023-11-28T13:55:53Z)
- TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets using large-scale text-guided image diffusion models.
Specifically, we leverage latent diffusion models, applying a denoising model across 2D renders of the object and aggregating the denoising predictions on a shared latent texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z)
- One-shot recognition of any material anywhere using contrastive learning with physics-based rendering [0.0]
We present MatSim: a synthetic dataset, a benchmark, and a method for computer vision based recognition of similarities and transitions between materials and textures.
The visual recognition of materials is essential to everything from examining food while cooking to inspecting agriculture, chemistry, and industrial products.
arXiv Detail & Related papers (2022-12-01T16:49:53Z)
- Deep scene-scale material estimation from multi-view indoor captures [9.232860902853048]
We present a learning-based approach that automatically produces digital assets ready for physically-based rendering.
Our method generates approximate material maps in a fraction of time compared to the closest previous solutions.
arXiv Detail & Related papers (2022-11-15T10:58:28Z)
- MaterialGAN: Reflectance Capture using a Generative SVBRDF Model [33.578080406338266]
We present MaterialGAN, a deep generative convolutional network based on StyleGAN2.
We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework.
We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone.
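Using a generative model as a material prior in inverse rendering usually means optimizing a latent code so the generator's (rendered) output matches the captured photographs. The toy below is a hypothetical sketch of that loop only: MaterialGAN does this with StyleGAN2 and a differentiable renderer, whereas here the "generator" is a fixed random linear map and the "captures" are a target vector, so the optimization stays self-contained and runnable.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(6, 4))     # stand-in generator: latent -> appearance
target = rng.normal(size=6)     # stand-in captured appearance

# Gradient descent on the latent code, analogous to fitting a GAN
# latent so rendered SVBRDF maps reproduce flash-lit captures.
z = np.zeros(4)
for _ in range(2000):
    residual = W @ z - target   # render-vs-capture error
    z -= 0.02 * (W.T @ residual)  # gradient step on the latent

final_mse = float(np.mean((W @ z - target) ** 2))
print(final_mse)
```

Because the system is overdetermined (6 observations, 4 latent dimensions), the loop converges to the least-squares reconstruction; in the real setting the prior's payoff is that the recovered maps stay on the generator's manifold of plausible materials.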
arXiv Detail & Related papers (2020-09-30T21:33:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.