Generating Parametric BRDFs from Natural Language Descriptions
- URL: http://arxiv.org/abs/2306.15679v2
- Date: Thu, 14 Sep 2023 12:07:40 GMT
- Title: Generating Parametric BRDFs from Natural Language Descriptions
- Authors: Sean Memery, Osmar Cedron, Kartic Subr
- Abstract summary: We develop a model to generate Bidirectional Reflectance Distribution Functions from descriptive prompts.
BRDFs are four-dimensional probability distributions that characterize the interaction of light with surface materials.
Our model is first trained using a semi-supervised approach before being tuned via an unsupervised scheme.
- Score: 1.1847636087764204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artistic authoring of 3D environments is a laborious enterprise that also
requires skilled content creators. There have been impressive improvements in
using machine learning to address different aspects of generating 3D content,
such as generating meshes, arranging geometry, synthesizing textures, etc. In
this paper we develop a model to generate Bidirectional Reflectance
Distribution Functions (BRDFs) from descriptive textual prompts. BRDFs are
four-dimensional probability distributions that characterize the interaction
of light with surface materials. They are either represented parametrically or by
tabulating the probability density associated with every pair of incident and
outgoing angles. The former lends itself to artistic editing while the latter
is used when measuring the appearance of real materials. Numerous works have
focused on hypothesizing BRDF models from images of materials. We learn a
mapping from textual descriptions of materials to parametric BRDFs. Our model
is first trained using a semi-supervised approach before being tuned via an
unsupervised scheme. Although our model is general, in this paper we
specifically generate parameters for MDL materials, conditioned on natural
language descriptions, within NVIDIA's Omniverse platform. This enables use
cases such as changing the materials of objects in 3D environments in real
time via text prompts like "dull plastic" or "shiny iron". Since the output of our
model is a parametric BRDF, rather than an image of the material, it may be
used to render materials using any shape under arbitrarily specified viewing
and lighting conditions.
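For readers unfamiliar with the parametric representation, the sketch below shows what evaluating a parametric BRDF looks like in practice. It assumes a standard Cook-Torrance model with a GGX specular lobe plus a Lambertian diffuse term; the parameter names (base_color, roughness, metallic) and the two example parameter sets are illustrative assumptions, not the paper's MDL parameterization or its model's actual outputs.

```python
import numpy as np

def ggx_ndf(n_dot_h, alpha):
    """GGX / Trowbridge-Reitz normal distribution function."""
    a2 = alpha * alpha
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

def smith_g1(n_dot_v, alpha):
    """Smith masking term for GGX (separable form)."""
    a2 = alpha * alpha
    return 2.0 * n_dot_v / (n_dot_v + np.sqrt(a2 + (1.0 - a2) * n_dot_v ** 2))

def fresnel_schlick(v_dot_h, f0):
    """Schlick's approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def brdf(wi, wo, n, base_color, roughness, metallic):
    """Evaluate f_r(wi, wo): wi points to the light, wo to the eye (unit vectors)."""
    n_dot_i = max(np.dot(n, wi), 1e-6)
    n_dot_o = max(np.dot(n, wo), 1e-6)
    h = (wi + wo) / np.linalg.norm(wi + wo)              # half vector
    alpha = roughness * roughness                         # perceptual roughness -> GGX alpha
    f0 = 0.04 * (1.0 - metallic) + base_color * metallic  # reflectance at normal incidence
    d = ggx_ndf(np.dot(n, h), alpha)
    g = smith_g1(n_dot_i, alpha) * smith_g1(n_dot_o, alpha)
    f = fresnel_schlick(np.dot(wo, h), f0)
    specular = d * g * f / (4.0 * n_dot_i * n_dot_o)
    diffuse = (1.0 - metallic) * base_color / np.pi       # Lambertian lobe
    return diffuse + specular

# Two hypothetical parameter sets, in the spirit of "shiny iron" vs "dull plastic".
n = np.array([0.0, 0.0, 1.0])
wi = np.array([0.0, 0.6, 0.8])
wo = np.array([0.0, -0.6, 0.8])
iron = brdf(wi, wo, n, base_color=np.array([0.56, 0.57, 0.58]), roughness=0.2, metallic=1.0)
plastic = brdf(wi, wo, n, base_color=np.array([0.8, 0.1, 0.1]), roughness=0.9, metallic=0.0)
print("shiny iron f_r:", iron)
print("dull plastic f_r:", plastic)
```

Because such a material is a handful of scalar parameters rather than a baked image, the same predicted parameter vector can be evaluated for any surface geometry and any pair of incident and outgoing directions, which is what allows rendering under arbitrarily specified viewing and lighting conditions.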
Related papers
- Boosting 3D Object Generation through PBR Materials [32.732511476490316]
We propose a novel approach to boost the quality of generated 3D objects from the perspective of Physics-Based Rendering (PBR) materials.
For albedo and bump maps, we leverage Stable Diffusion fine-tuned on synthetic data to extract these values.
In terms of roughness and metalness maps, we adopt a semi-automatic process to provide room for interactive adjustment.
arXiv Detail & Related papers (2024-11-25T04:20:52Z)
- MaPa: Text-driven Photorealistic Material Painting for 3D Shapes [80.66880375862628]
This paper aims to generate materials for 3D meshes from text descriptions.
Unlike existing methods that synthesize texture maps, we propose to generate segment-wise procedural material graphs.
Our framework supports high-quality rendering and provides substantial flexibility in editing.
arXiv Detail & Related papers (2024-04-26T17:54:38Z)
- Make-it-Real: Unleashing Large Multimodal Model for Painting 3D Objects with Realistic Materials [108.59709545364395]
GPT-4V can effectively recognize and describe materials, allowing the construction of a detailed material library.
The correctly matched materials are then meticulously applied as references for generating new SVBRDF materials.
Make-it-Real offers a streamlined integration into the 3D content creation workflow.
arXiv Detail & Related papers (2024-04-25T17:59:58Z)
- MaterialSeg3D: Segmenting Dense Materials from 2D Priors for 3D Assets [63.284244910964475]
We propose a 3D asset material generation framework to infer underlying material from the 2D semantic prior.
Based on such a prior model, we devise a mechanism to parse material in 3D space.
arXiv Detail & Related papers (2024-04-22T07:00:17Z)
- Feature Splatting: Language-Driven Physics-Based Scene Synthesis and Editing [11.46530458561589]
We introduce Feature Splatting, an approach that unifies physics-based dynamic scene synthesis with rich semantics.
Our first contribution is a way to distill high-quality, object-centric vision-language features into 3D Gaussians.
Our second contribution is a way to synthesize physics-based dynamics from an otherwise static scene using a particle-based simulator.
arXiv Detail & Related papers (2024-04-01T16:31:04Z)
- L3GO: Language Agents with Chain-of-3D-Thoughts for Generating Unconventional Objects [53.4874127399702]
We propose a language agent with chain-of-3D-thoughts (L3GO), an inference-time approach that can reason about part-based 3D mesh generation.
We develop a new benchmark, Unconventionally Feasible Objects (UFO), as well as SimpleBlenv, a wrapper environment built on top of Blender.
Our approach surpasses the standard GPT-4 and other language agents for 3D mesh generation on ShapeNet.
arXiv Detail & Related papers (2024-02-14T09:51:05Z)
- MATLABER: Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR [29.96046140529936]
We propose Material-Aware Text-to-3D via LAtent BRDF auto-EncodeR (MATLABER).
We train this auto-encoder with large-scale real-world BRDF collections and ensure the smoothness of its latent space.
Our approach demonstrates the superiority over existing ones in generating realistic and coherent object materials.
arXiv Detail & Related papers (2023-08-18T03:40:38Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Generative Modelling of BRDF Textures from Flash Images [50.660026124025265]
We learn a latent space for easy capture, semantic editing, and consistent, efficient reproduction of visual material appearance.
In a second step, conditioned on the material code, our method produces an infinite and diverse spatial field of BRDF model parameters.
arXiv Detail & Related papers (2021-02-23T18:45:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.