Deep Appearance Prefiltering
- URL: http://arxiv.org/abs/2211.05932v1
- Date: Tue, 8 Nov 2022 16:42:25 GMT
- Title: Deep Appearance Prefiltering
- Authors: Steve Bako, Pradeep Sen, Anton Kaplanyan
- Abstract summary: An ideal level of detail (LoD) method makes rendering costs independent of 3D scene complexity while preserving the appearance of the scene.
We propose the first comprehensive multi-scale LoD framework for prefiltering 3D environments with complex geometry and materials.
We demonstrate that our approach compares favorably to state-of-the-art prefiltering methods and achieves considerable savings in memory for complex scenes.
- Score: 11.986267753557994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physically based rendering of complex scenes can be prohibitively costly with
a potentially unbounded and uneven distribution of complexity across the
rendered image. The goal of an ideal level of detail (LoD) method is to make
rendering costs independent of the 3D scene complexity, while preserving the
appearance of the scene. However, current prefiltering LoD methods are limited
in the appearances they can support due to their reliance on approximate models
and other heuristics. We propose the first comprehensive multi-scale LoD
framework for prefiltering 3D environments with complex geometry and materials
(e.g., the Disney BRDF), while maintaining the appearance with respect to the
ray-traced reference. Using a multi-scale hierarchy of the scene, we perform a
data-driven prefiltering step to obtain an appearance phase function and
directional coverage mask at each scale. At the heart of our approach is a
novel neural representation that encodes this information into a compact latent
form that is easy to decode inside a physically based renderer. Once a scene is
baked out, our method requires no original geometry, materials, or textures at
render time. We demonstrate that our approach compares favorably to
state-of-the-art prefiltering methods and achieves considerable savings in
memory for complex scenes.
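
The abstract describes a render-time pipeline in which a baked multi-scale hierarchy stores, per voxel and per scale, a compact latent code that a decoder expands into a directional coverage mask and an appearance phase function inside a physically based renderer. The following is a minimal, hypothetical Python sketch of that query step only; the class, decoder architecture, and all names (`PrefilteredVoxel`, `decode_appearance`, the latent size, the activations) are illustrative assumptions and not the paper's actual implementation.

```python
# Hypothetical sketch (not the paper's code): at render time, each voxel of the
# baked LoD hierarchy holds a compact latent appearance code. A small decoder
# maps (latent, incoming direction, outgoing direction) to a directional
# coverage value and an appearance phase-function weight, which the renderer
# uses in place of the original geometry, materials, and textures.
import numpy as np

class PrefilteredVoxel:
    """One cell of the baked multi-scale hierarchy (illustrative only)."""
    def __init__(self, latent: np.ndarray):
        self.latent = latent  # compact code produced by the offline baking step

def tiny_mlp(x: np.ndarray, weights: list) -> np.ndarray:
    """Stand-in decoder: dense layers with ReLU, last layer linear."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)
    return h

def decode_appearance(voxel: PrefilteredVoxel, wi: np.ndarray,
                      wo: np.ndarray, weights: list):
    """Decode coverage and phase value for an incoming/outgoing direction pair.

    Returns: coverage in [0, 1] (how opaque the voxel appears from wi) and a
    non-negative phase value (relative scattering weight toward wo).
    """
    x = np.concatenate([voxel.latent, wi, wo])
    out = tiny_mlp(x, weights)
    coverage = 1.0 / (1.0 + np.exp(-out[0]))   # sigmoid -> [0, 1]
    phase = np.log1p(np.exp(out[1]))           # softplus -> >= 0
    return coverage, phase

# Toy usage with random weights, just to show the shapes involved.
rng = np.random.default_rng(0)
latent_dim, hidden = 8, 16
weights = [
    (rng.normal(size=(latent_dim + 6, hidden)), np.zeros(hidden)),
    (rng.normal(size=(hidden, 2)), np.zeros(2)),
]
voxel = PrefilteredVoxel(rng.normal(size=latent_dim))
cov, phase = decode_appearance(voxel, np.array([0.0, 0.0, 1.0]),
                               np.array([0.0, 1.0, 0.0]), weights)
```

In an actual integration, the decoded coverage would drive transmittance along the ray and the phase value would stand in for BRDF evaluation over the prefiltered region, which is how the method avoids needing the original geometry, materials, or textures at render time.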
Related papers
- SceneCraft: Layout-Guided 3D Scene Generation [29.713491313796084]
SceneCraft is a novel method for generating detailed indoor scenes that adhere to textual descriptions and spatial layout preferences.
Our method significantly outperforms existing approaches in complex indoor scene generation with diverse textures, consistent geometry, and realistic visual quality.
arXiv Detail & Related papers (2024-10-11T17:59:58Z) - EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view-consistent rendering.
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - Efficient Scene Appearance Aggregation for Level-of-Detail Rendering [42.063285161104474]
We present a novel volumetric representation for the aggregated appearance of complex scenes.
We tackle the challenge of capturing the correlation that exists locally within a voxel and globally across different parts of the scene.
arXiv Detail & Related papers (2024-08-19T01:01:12Z) - Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z) - Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
arXiv Detail & Related papers (2024-02-05T19:00:45Z) - BerfScene: Bev-conditioned Equivariant Radiance Fields for Infinite 3D Scene Generation [96.58789785954409]
We propose a practical and efficient 3D representation that incorporates an equivariant radiance field with the guidance of a bird's-eye view map.
We produce large-scale, even infinite-scale, 3D scenes via synthesizing local scenes and then stitching them with smooth consistency.
arXiv Detail & Related papers (2023-12-04T18:56:10Z) - Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z) - Blocks2World: Controlling Realistic Scenes with Editable Primitives [5.541644538483947]
We present Blocks2World, a novel method for 3D scene rendering and editing.
Our technique begins by extracting 3D parallelepipeds from various objects in a given scene using convex decomposition.
The next stage involves training a conditioned model that learns to generate images from the 2D-rendered convex primitives.
arXiv Detail & Related papers (2023-07-07T21:38:50Z) - 3inGAN: Learning a 3D Generative Model from Images of a Self-similar Scene [34.2144933185175]
3inGAN is an unconditional 3D generative model trained from 2D images of a single self-similar 3D scene.
We show results on semi-stochastic scenes of varying scale and complexity, obtained from real and synthetic sources.
arXiv Detail & Related papers (2022-11-27T18:03:21Z) - Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.