Large Material Gaussian Model for Relightable 3D Generation
- URL: http://arxiv.org/abs/2509.22112v1
- Date: Fri, 26 Sep 2025 09:35:12 GMT
- Title: Large Material Gaussian Model for Relightable 3D Generation
- Authors: Jingrui Ye, Lingting Zhu, Runze Zhang, Zeyu Hu, Yingda Yin, Lanjiong Li, Lequan Yu, Qingmin Liao
- Abstract summary: We introduce a novel framework designed to generate high-quality 3D content with Physically Based Rendering (PBR) materials. Our method not only exhibits greater visual appeal compared to baseline methods but also enhances material modeling, thereby enabling practical downstream rendering applications.
- Score: 54.10879517395551
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing demand for 3D assets across various industries necessitates efficient and automated methods for 3D content creation. Leveraging 3D Gaussian Splatting, recent large reconstruction models (LRMs) have demonstrated the ability to efficiently achieve high-quality 3D rendering by integrating multiview diffusion for generation and scalable transformers for reconstruction. However, existing models fail to produce the material properties of assets, which is crucial for realistic rendering in diverse lighting environments. In this paper, we introduce the Large Material Gaussian Model (MGM), a novel framework designed to generate high-quality 3D content with Physically Based Rendering (PBR) materials, i.e., albedo, roughness, and metallic properties, rather than merely producing RGB textures with uncontrolled light baking. Specifically, we first fine-tune a new multiview material diffusion model conditioned on input depth and normal maps. Utilizing the generated multiview PBR images, we explore a Gaussian material representation that not only aligns with 2D Gaussian Splatting but also models each channel of the PBR materials. The reconstructed point clouds can then be rendered to acquire PBR attributes, enabling dynamic relighting by applying various ambient light maps. Extensive experiments demonstrate that the materials produced by our method not only exhibit greater visual appeal compared to baseline methods but also enhance material modeling, thereby enabling practical downstream rendering applications.
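The abstract's relighting claim rests on the standard metallic-roughness convention: once each Gaussian carries albedo, roughness, and metallic channels, it can be re-shaded under arbitrary lights. The paper does not publish its shading code, so the following is only an illustrative sketch of how those three channels typically combine in a simplified metallic-roughness BRDF (Lambertian diffuse plus GGX specular) for a single surface point under one directional light; the function name and arguments are hypothetical, not the authors' API.

```python
import numpy as np

def shade_pbr(albedo, roughness, metallic, normal, light_dir, view_dir, light_color):
    """Shade one point with a simplified metallic-roughness BRDF.
    albedo / light_color: RGB arrays; roughness / metallic: scalars in [0, 1];
    normal / light_dir / view_dir: unit 3-vectors. Illustrative only."""
    n, l, v = normal, light_dir, view_dir
    h = l + v
    h = h / np.linalg.norm(h)                      # half vector
    ndl = max(float(n @ l), 0.0)
    ndv = max(float(n @ v), 1e-4)
    ndh = max(float(n @ h), 0.0)
    vdh = max(float(v @ h), 0.0)

    # Base reflectance: ~0.04 for dielectrics, tinted by albedo for metals
    f0 = 0.04 * (1.0 - metallic) + albedo * metallic

    # GGX normal distribution (alpha = roughness^2)
    a2 = max(roughness, 1e-3) ** 4
    d = a2 / (np.pi * (ndh * ndh * (a2 - 1.0) + 1.0) ** 2)

    # Schlick Fresnel approximation
    f = f0 + (1.0 - f0) * (1.0 - vdh) ** 5

    # Smith geometry term (Schlick-GGX form)
    k = (roughness + 1.0) ** 2 / 8.0
    g = (ndv / (ndv * (1.0 - k) + k)) * (ndl / (ndl * (1.0 - k) + k))

    specular = d * f * g / (4.0 * ndv * ndl + 1e-4)
    diffuse = (1.0 - metallic) * albedo / np.pi    # metals have no diffuse lobe
    return (diffuse + specular) * light_color * ndl
```

Evaluating this per Gaussian against each sample of an ambient light map (and summing) is one simple way to realize the "dynamic relighting" described above; the paper's actual renderer operates on the reconstructed 2D Gaussian Splatting representation.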
Related papers
- SViM3D: Stable Video Material Diffusion for Single Image 3D Generation [48.986972061812004]
Video diffusion models have been successfully used to reconstruct 3D objects from a single image efficiently. We extend a latent video diffusion model to output spatially varying PBR parameters and surface normals jointly with each generated view based on explicit camera control. This unique setup allows for relighting and generating a 3D asset using our model as a neural prior.
arXiv Detail & Related papers (2025-10-09T14:29:47Z) - PBR3DGen: A VLM-guided Mesh Generation with High-quality PBR Texture [9.265778497001843]
We present PBR3DGen, a two-stage mesh generation method with high-quality PBR materials. We leverage vision language models (VLM) to guide multi-view diffusion, precisely capturing the spatial distribution and inherent attributes of reflectance and metalness materials. Our reconstruction model reconstructs high-quality meshes with PBR materials.
arXiv Detail & Related papers (2025-03-14T13:11:19Z) - MaterialMVP: Illumination-Invariant Material Generation via Multi-view PBR Diffusion [37.596740171045845]
Physically-based rendering (PBR) has become a cornerstone in modern computer graphics, enabling realistic material representation and lighting interactions in 3D scenes. We present a novel end-to-end model for generating PBR textures from 3D meshes and image prompts, addressing key challenges in multi-view material synthesis.
arXiv Detail & Related papers (2025-03-13T11:57:30Z) - BEAM: Bridging Physically-based Rendering and Gaussian Modeling for Relightable Volumetric Video [58.97416204208624]
We present BEAM, a novel pipeline that bridges 4D Gaussian representations with physically-based rendering (PBR) to produce high-quality, relightable videos. By offering realistic, lifelike visualizations under diverse lighting conditions, BEAM opens new possibilities for interactive entertainment, storytelling, and creative visualization.
arXiv Detail & Related papers (2025-02-12T10:58:09Z) - TexGaussian: Generating High-quality PBR Material via Octree-based 3D Gaussian Splatting [48.97819552366636]
This paper presents TexGaussian, a novel method that uses octant-aligned 3D Gaussian Splatting for rapid PBR material generation. Our method synthesizes more visually pleasing PBR materials and runs faster than previous methods in both unconditional and text-conditional scenarios.
arXiv Detail & Related papers (2024-11-29T12:19:39Z) - Boosting 3D Object Generation through PBR Materials [32.732511476490316]
We propose a novel approach to boost the quality of generated 3D objects from the perspective of Physics-Based Rendering (PBR) materials.
For albedo and bump maps, we leverage Stable Diffusion fine-tuned on synthetic data to extract these values.
In terms of roughness and metalness maps, we adopt a semi-automatic process to provide room for interactive adjustment.
arXiv Detail & Related papers (2024-11-25T04:20:52Z) - Edify 3D: Scalable High-Quality 3D Asset Generation [53.86838858460809]
Edify 3D is an advanced solution designed for high-quality 3D asset generation.
Our method can generate high-quality 3D assets with detailed geometry, clean shape topologies, high-resolution textures, and materials within 2 minutes of runtime.
arXiv Detail & Related papers (2024-11-11T17:07:43Z) - 3DTopia-XL: Scaling High-quality 3D Asset Generation via Primitive Diffusion [86.25111098482537]
We introduce 3DTopia-XL, a scalable native 3D generative model designed to overcome limitations of existing methods. 3DTopia-XL leverages a novel primitive-based 3D representation, PrimX, which encodes detailed shape, albedo, and material field into a compact tensorial format. On top of the novel representation, we propose a generative framework based on Diffusion Transformer (DiT), which comprises 1) Primitive Patch Compression and 2) Latent Primitive Diffusion. We conduct extensive qualitative and quantitative experiments to demonstrate that 3DTopia-XL significantly outperforms existing methods in generating high-
arXiv Detail & Related papers (2024-09-19T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.