Volumetric Primitives for Modeling and Rendering Scattering and Emissive Media
- URL: http://arxiv.org/abs/2405.15425v1
- Date: Fri, 24 May 2024 10:42:05 GMT
- Title: Volumetric Primitives for Modeling and Rendering Scattering and Emissive Media
- Authors: Jorge Condor, Sebastien Speierer, Lukas Bode, Aljaz Bozic, Simon Green, Piotr Didyk, Adrian Jarabo
- Abstract summary: We formalize and generalize the modeling of scattering and emissive media using mixtures of simple kernel-based volumetric primitives.
We demonstrate our method as an alternative to other forms of volume modeling for forward and inverse rendering of scattering media.
- Score: 8.792248506305937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a volumetric representation based on primitives to model scattering and emissive media. Accurate scene representations enabling efficient rendering are essential for many computer graphics applications. General and unified representations that can handle surface and volume-based representations simultaneously, allowing for physically accurate modeling, remain a research challenge. Inspired by recent methods for scene reconstruction that leverage mixtures of 3D Gaussians to model radiance fields, we formalize and generalize the modeling of scattering and emissive media using mixtures of simple kernel-based volumetric primitives. We introduce closed-form solutions for transmittance and free-flight distance sampling for 3D Gaussian kernels, and propose several optimizations to use our method efficiently within any off-the-shelf volumetric path tracer by leveraging ray tracing for efficiently querying the medium. We demonstrate our method as an alternative to other forms of volume modeling (e.g. voxel grid-based representations) for forward and inverse rendering of scattering media. Furthermore, we adapt our method to the problem of radiance field optimization and rendering, and demonstrate comparable performance to the state of the art, while providing additional flexibility in terms of performance and usability.
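The closed-form transmittance the abstract refers to can be sketched as follows. For an isotropic 3D Gaussian kernel with density sigma(x) = c * exp(-|x - mu|^2 / (2 s^2)), the line integral of density along a ray has a quadratic exponent in the ray parameter, so the optical depth reduces to a 1D Gaussian integral expressible with the error function. This is a minimal illustrative sketch under the isotropic assumption; the function name is hypothetical, and the paper's method handles more general kernels and anisotropy.

```python
import math

def gaussian_transmittance(o, d, mu, s, c, t):
    """Transmittance T(t) = exp(-optical depth) through an isotropic 3D
    Gaussian density sigma(x) = c * exp(-|x - mu|^2 / (2 s^2)), along the
    ray x(t') = o + t' * d (d unit-length), integrated from t' = 0 to t."""
    # Vector from ray origin to the kernel mean
    om = [mu[i] - o[i] for i in range(3)]
    # Ray parameter of closest approach to the mean
    t_peak = sum(om[i] * d[i] for i in range(3))
    # Squared perpendicular distance from the ray to the mean
    b2 = sum(om[i] * om[i] for i in range(3)) - t_peak * t_peak
    # Peak line density, attenuated by the perpendicular offset
    amp = c * math.exp(-b2 / (2.0 * s * s))
    # Remaining 1D Gaussian integral in closed form via erf
    scale = s * math.sqrt(math.pi / 2.0)
    tau = amp * scale * (math.erf((t - t_peak) / (s * math.sqrt(2.0)))
                         - math.erf((0.0 - t_peak) / (s * math.sqrt(2.0))))
    return math.exp(-tau)
```

Because the optical depth is available analytically, inverting it also yields the free-flight distance sampling the abstract mentions, without ray marching through the kernel.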
Related papers
- Fast LiDAR Upsampling using Conditional Diffusion Models [1.3709133749179265]
Existing approaches have shown that diffusion models can generate refined LiDAR data with high fidelity.
We introduce a novel approach based on conditional diffusion models for fast and high-quality sparse-to-dense upsampling of 3D scene point clouds.
Our method employs denoising diffusion probabilistic models trained with conditional inpainting masks, which have been shown to give high performance on image completion tasks.
arXiv Detail & Related papers (2024-05-08T08:38:28Z)
- A real-time rendering method for high albedo anisotropic materials with multiple scattering [0.4297070083645048]
This paper uses neural networks to approximate the iterative integration involved in solving the radiative transfer equation.
This method achieves realistic volumetric media rendering while greatly increasing rendering speed.
arXiv Detail & Related papers (2024-01-25T10:08:53Z)
- GIR: 3D Gaussian Inverse Rendering for Relightable Scene Factorization [76.52007427483396]
GIR is a 3D Gaussian Inverse Rendering method for relightable scene factorization.
Our method utilizes 3D Gaussians to estimate the material properties, illumination, and geometry of an object from multi-view images.
arXiv Detail & Related papers (2023-12-08T16:05:15Z)
- ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution [84.73658185158222]
We propose a diffusion model-based super-resolution method called ACDMSR.
Our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process.
Our approach generates more visually realistic counterparts for low-resolution images, emphasizing its effectiveness in practical scenarios.
arXiv Detail & Related papers (2023-07-03T06:49:04Z)
- A prior regularized full waveform inversion using generative diffusion models [0.5156484100374059]
Full waveform inversion (FWI) has the potential to provide high-resolution subsurface model estimations.
Due to limitations in observation, e.g., regional noise, limited shots or receivers, and band-limited data, it is hard to obtain the desired high-resolution model with FWI.
We propose a new paradigm for FWI regularized by generative diffusion models.
arXiv Detail & Related papers (2023-06-22T10:10:34Z)
- A Variational Perspective on Solving Inverse Problems with Diffusion Models [101.831766524264]
Inverse tasks can be formulated as inferring a posterior distribution over data.
This is however challenging in diffusion models since the nonlinear and iterative nature of the diffusion process renders the posterior intractable.
We propose a variational approach that by design seeks to approximate the true posterior distribution.
arXiv Detail & Related papers (2023-05-07T23:00:47Z)
- TensoIR: Tensorial Inverse Rendering [51.57268311847087]
TensoIR is a novel inverse rendering approach based on tensor factorization and neural fields.
It builds on TensoRF, a state-of-the-art approach for radiance field modeling.
arXiv Detail & Related papers (2023-04-24T21:39:13Z)
- MeshDiffusion: Score-based Generative 3D Mesh Modeling [68.40770889259143]
We consider the task of generating realistic 3D shapes for automatic scene generation and physical simulation.
We take advantage of the graph structure of meshes and use a simple yet very effective generative modeling method to generate 3D meshes.
Specifically, we represent meshes with deformable tetrahedral grids, and then train a diffusion model on this direct parametrization.
arXiv Detail & Related papers (2023-03-14T17:59:01Z)
- Online Neural Path Guiding with Normalized Anisotropic Spherical Gaussians [20.68953631807367]
We propose a novel online framework to learn the spatially-varying density model with a single small neural network.
Our framework learns the distribution in a progressive manner and does not need any warm-up phases.
arXiv Detail & Related papers (2023-03-11T05:22:42Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.