A real-time rendering method for high albedo anisotropic materials with
multiple scattering
- URL: http://arxiv.org/abs/2401.14051v1
- Date: Thu, 25 Jan 2024 10:08:53 GMT
- Title: A real-time rendering method for high albedo anisotropic materials with
multiple scattering
- Authors: Shun Fang, Xing Feng, Ming Cui
- Abstract summary: This paper uses neural networks to simulate the iterative integration process of solving the radiative transfer equation.
This method can achieve realistic volumetric media rendering effects and greatly increase the rendering speed.
- Score: 0.4297070083645048
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a neural network-based real-time volume rendering method for
realistic and efficient rendering of volumetric media. The traditional volume
rendering method uses path tracing to solve the radiative transfer equation,
which requires a huge amount of computation and cannot achieve real-time
rendering. Therefore, this paper uses neural networks to simulate the iterative
integration process of solving the radiative transfer equation to speed up the
volume rendering of volumetric media. Specifically, the paper first performs data
processing on the volume medium to generate a variety of sampling features,
including density features, transmittance features and phase features. The
hierarchical transmittance fields are fed into a 3D-CNN network to compute more
important transmittance features. Secondly, a diffuse-reflection sampling
template and a highlight sampling template feed the three types of sampling
features into the network in layers, which lets the network attend to light
scattering, highlights and shadows; important channel features are then
selected through an attention module. Finally, the backbone neural network
predicts the scattering distribution at the center point of each sampling
template. This method can achieve realistic volumetric media rendering
effects and greatly increase the rendering speed while maintaining rendering
quality, which is of great significance for real-time rendering applications.
Experimental results indicate that our method outperforms previous methods.
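For context, the iterative integration that the network is trained to approximate can be sketched as a simple ray-marching loop. This is a minimal, hypothetical illustration of emission-absorption volume rendering (function and variable names are assumptions, not the paper's implementation):

```python
import numpy as np

def march_ray(density, emission, step=0.1):
    """Accumulate radiance along one ray through a volume by
    iteratively integrating absorption and emission (Beer-Lambert).
    `density` and `emission` are per-sample arrays along the ray."""
    radiance = 0.0
    transmittance = 1.0  # fraction of light surviving so far
    for sigma, e in zip(density, emission):
        alpha = 1.0 - np.exp(-sigma * step)    # absorption over this step
        radiance += transmittance * alpha * e  # emitted light reaching camera
        transmittance *= 1.0 - alpha           # attenuate for next step
    return radiance, transmittance

# Example: uniform fog with constant emission
sigma = np.full(100, 0.5)
L, T = march_ray(sigma, np.ones(100))
```

Replacing this per-ray loop (and the nested scattering integrals a full path tracer adds on top of it) with a single network evaluation is where the claimed speedup comes from.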
Related papers
- Volumetric Primitives for Modeling and Rendering Scattering and Emissive Media [8.792248506305937]
We formalize and generalize the modeling of scattering and emissive media using mixtures of simple kernel-based volumetric primitives.
We demonstrate our method as an alternative to other forms of volume modeling for forward and inverse rendering of scattering media.
arXiv Detail & Related papers (2024-05-24T10:42:05Z)
- Fast LiDAR Upsampling using Conditional Diffusion Models [1.3709133749179265]
Existing approaches have shown the possibilities for using diffusion models to generate refined LiDAR data with high fidelity.
We introduce a novel approach based on conditional diffusion models for fast and high-quality sparse-to-dense upsampling of 3D scene point clouds.
Our method employs denoising diffusion probabilistic models trained with conditional inpainting masks, which have been shown to give high performance on image completion tasks.
arXiv Detail & Related papers (2024-05-08T08:38:28Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
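The voxel-based ray sampling described above can be illustrated with a small, hypothetical sketch (the grid layout, names, and parameters are assumptions): sample points along a ray and map them to the voxels they fall in.

```python
import numpy as np

def voxel_indices(origin, direction, t_vals, voxel_size):
    """Map sample points along a ray to integer voxel coordinates.
    A point at parameter t lies at origin + t * direction."""
    pts = origin[None, :] + t_vals[:, None] * direction[None, :]
    return np.floor(pts / voxel_size).astype(int)

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([1.0, 0.0, 0.0])  # unit ray along +x
t_vals = np.linspace(0.05, 0.95, 10)   # samples inside [0, 1)
idx = voxel_indices(origin, direction, t_vals, voxel_size=0.5)
# Samples split between voxels (0,0,0) and (1,0,0) along x
```

Grouping samples by voxel like this is what lets additional in-voxel points be drawn and processed jointly by a Transformer.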
arXiv Detail & Related papers (2024-03-25T15:56:17Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- DDNeRF: Depth Distribution Neural Radiance Fields [12.283891012446647]
Depth distribution neural radiance field (DDNeRF) is a new method that significantly increases sampling efficiency along rays during training.
We train a coarse model to predict the internal distribution of the transparency of an input volume in addition to the volume's total density.
This finer distribution then guides the sampling procedure of the fine model.
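This coarse-to-fine strategy resembles the inverse-CDF ("hierarchical") sampling used in NeRF-style pipelines; here is a minimal sketch under that assumption (not DDNeRF's actual code — names and the bin layout are illustrative):

```python
import numpy as np

def sample_from_weights(bin_edges, weights, n_samples, rng):
    """Draw new sample locations by inverting the CDF of per-bin
    weights, so regions the coarse model deems important receive
    more fine samples."""
    w = weights / weights.sum()            # normalize to a pmf
    cdf = np.concatenate([[0.0], np.cumsum(w)])
    u = rng.uniform(0.0, 1.0, n_samples)   # uniform draws to invert
    return np.interp(u, cdf, bin_edges)    # piecewise-linear inverse CDF

edges = np.linspace(0.0, 1.0, 6)                 # 5 bins on [0, 1]
weights = np.array([0.0, 0.0, 1.0, 0.0, 0.0])    # all mass in the middle bin
rng = np.random.default_rng(0)
samples = sample_from_weights(edges, weights, 100, rng)
# All fine samples land inside the high-weight bin [0.4, 0.6)
```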
arXiv Detail & Related papers (2022-03-30T19:21:07Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- Volume Rendering of Neural Implicit Surfaces [57.802056954935495]
This paper aims to improve geometry representation and reconstruction in neural volume rendering.
We achieve that by modeling the volume density as a function of the geometry.
Applying this new density representation to challenging scene multiview datasets produced high quality geometry reconstructions.
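Modeling density as a function of geometry can be sketched, hypothetically, as mapping a signed distance to density through a smooth CDF (the Laplace-CDF form common in SDF-based volume rendering; parameter names here are assumptions):

```python
import numpy as np

def sdf_to_density(sdf, alpha=10.0, beta=0.1):
    """Convert signed distance (positive outside the surface) to
    volume density via the CDF of a zero-mean Laplace distribution:
    density approaches alpha inside the surface and decays smoothly
    to zero outside, with beta controlling the transition width."""
    s = -sdf  # flip sign so 'inside' is positive
    cdf = np.where(s <= 0,
                   0.5 * np.exp(s / beta),
                   1.0 - 0.5 * np.exp(-s / beta))
    return alpha * cdf

d = sdf_to_density(np.array([-1.0, 0.0, 1.0]))
# Deep inside: ~alpha; on the surface: alpha/2; far outside: ~0
```

Because the density is a deterministic function of a signed distance, a clean surface can be extracted from the same representation after training.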
arXiv Detail & Related papers (2021-06-22T20:23:16Z)
- Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution [82.1739023587565]
We propose a novel spatial-spectral residual network for hyperspectral image super-resolution (SSRNet).
Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information.
In each unit, we employ spatial and spectral separable 3D convolution to extract spatial and spectral information, which not only reduces unaffordable memory usage and high computational cost, but also makes the network easier to train.
arXiv Detail & Related papers (2020-01-14T03:34:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.