ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for
Neural Radiance Field
- URL: http://arxiv.org/abs/2303.13817v1
- Date: Fri, 24 Mar 2023 05:34:39 GMT
- Title: ABLE-NeRF: Attention-Based Rendering with Learnable Embeddings for
Neural Radiance Field
- Authors: Zhe Jun Tang, Tat-Jen Cham, Haiyu Zhao
- Abstract summary: We present an alternative to the physics-based VR approach by introducing a self-attention-based framework on volumes along a ray.
Our method, which we call ABLE-NeRF, significantly reduces 'blurry' glossy surfaces in rendering and produces realistic translucent surfaces that are lacking in prior art.
- Score: 20.986012773294714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Field (NeRF) is a popular method in representing 3D scenes by
optimising a continuous volumetric scene function. Its large success, which lies
in applying volumetric rendering (VR), is also its Achilles' heel in producing
view-dependent effects. As a consequence, glossy and transparent surfaces often
appear murky. A remedy to reduce these artefacts is to constrain the VR
equation by excluding volumes with back-facing normals. While this approach has
some success in rendering glossy surfaces, translucent objects are still poorly
represented. In this paper, we present an alternative to the physics-based VR
approach by introducing a self-attention-based framework on volumes along a
ray. In addition, inspired by modern game engines which utilise Light Probes to
store local lighting passing through the scene, we incorporate Learnable
Embeddings to capture view dependent effects within the scene. Our method,
which we call ABLE-NeRF, significantly reduces 'blurry' glossy surfaces in
rendering and produces realistic translucent surfaces which are lacking in prior
art. On the Blender dataset, ABLE-NeRF achieves SOTA results and surpasses
Ref-NeRF in all three image quality metrics: PSNR, SSIM, and LPIPS.
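The abstract's core idea is to replace the physics-based compositing of samples along a ray with self-attention over those samples, together with learnable embeddings (loosely analogous to light probes) that supply view-dependent appearance. The following is a minimal, hypothetical PyTorch sketch of that kind of attention-based ray renderer; the module layout, token design, feature sizes, and the way the viewing direction is injected are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionRayRenderer(nn.Module):
    """Hypothetical sketch: aggregate per-sample features along a ray with
    self-attention instead of alpha compositing, and let learnable embeddings
    (loosely analogous to light probes) contribute view-dependent colour."""

    def __init__(self, feat_dim=64, n_embeddings=16, n_heads=4):
        super().__init__()
        # Learnable embeddings acting as a small bank of "light probe" tokens.
        self.probe_tokens = nn.Parameter(torch.randn(n_embeddings, feat_dim))
        # A ray token summarises the whole ray (similar to a [CLS] token).
        self.ray_token = nn.Parameter(torch.randn(1, feat_dim))
        self.view_proj = nn.Linear(3, feat_dim)
        self.attn = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(feat_dim, n_heads, dim_feedforward=128,
                                       batch_first=True),
            num_layers=2,
        )
        self.to_rgb = nn.Linear(feat_dim, 3)

    def forward(self, sample_feats, view_dirs):
        # sample_feats: (B, N, feat_dim) features of N samples along each ray
        # view_dirs:    (B, 3) unit viewing direction per ray
        B = sample_feats.shape[0]
        ray_tok = self.ray_token.expand(B, 1, -1) + self.view_proj(view_dirs)[:, None]
        probes = self.probe_tokens.expand(B, -1, -1)
        tokens = torch.cat([ray_tok, sample_feats, probes], dim=1)
        out = self.attn(tokens)                # self-attention over samples + probes
        return torch.sigmoid(self.to_rgb(out[:, 0]))  # colour read from the ray token

# Usage: in a full pipeline, sample_feats would come from a NeRF-style MLP
# evaluated at points along each ray; random tensors stand in for them here.
renderer = AttentionRayRenderer()
rgb = renderer(torch.randn(8, 128, 64), torch.randn(8, 3))
print(rgb.shape)  # torch.Size([8, 3])
```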
Related papers
- VR-Splatting: Foveated Radiance Field Rendering via 3D Gaussian Splatting and Neural Points [4.962171160815189]
High-performance demands of virtual reality systems present challenges in utilizing fast-to-render scene representations like 3DGS.
We propose foveated rendering as a promising solution to these obstacles.
Our approach introduces a novel foveated rendering method for virtual reality that leverages the sharp, detailed output of neural point rendering for the foveal region, fused with a smooth rendering of 3DGS for peripheral vision.
arXiv Detail & Related papers (2024-10-23T14:54:48Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K x 2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- Reconstructive Latent-Space Neural Radiance Fields for Efficient 3D Scene Representations [34.836151514152746]
In this work, we investigate combining an autoencoder with a NeRF, in which latent features are rendered and then convolutionally decoded.
The resulting latent-space NeRF can produce novel views with higher quality than standard colour-space NeRFs.
We can control the tradeoff between efficiency and image quality by shrinking the AE architecture, achieving over 13 times faster rendering with only a small drop in performance (see the sketch after this entry).
arXiv Detail & Related papers (2023-10-27T03:52:08Z)
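The latent-space NeRF entry above renders latent feature maps instead of colours and then decodes them convolutionally. Below is a hedged sketch of only the decoding half under assumed channel counts and layer choices; the class name and architecture are hypothetical, not the paper's.

```python
import torch
import torch.nn as nn

class LatentConvDecoder(nn.Module):
    """Hypothetical sketch: a NeRF-style model volumetrically accumulates a
    latent feature per pixel (instead of RGB); this small convolutional decoder
    then upsamples the latent feature map 4x and maps it to colour."""

    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 32, 4, stride=2, padding=1),  # 2x up
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),          # 4x up
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 3, 3, padding=1),
            nn.Sigmoid(),                                                # RGB in [0, 1]
        )

    def forward(self, latent_map):
        # latent_map: (B, latent_dim, H, W) volumetrically rendered features
        return self.net(latent_map)  # (B, 3, 4H, 4W)

decoder = LatentConvDecoder(latent_dim=16)
rgb = decoder(torch.randn(1, 16, 64, 64))  # low-res latent render stands in here
print(rgb.shape)  # torch.Size([1, 3, 256, 256])
```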
- NeRRF: 3D Reconstruction and View Synthesis for Transparent and Specular Objects with Neural Refractive-Reflective Fields [23.099784003061618]
We introduce the refractive-reflective field to Neural Radiance Fields (NeRF).
NeRF uses straight rays and fails to deal with complicated light path changes caused by refraction and reflection.
We propose a virtual cone supersampling technique to achieve efficient and effective anti-aliasing.
arXiv Detail & Related papers (2023-09-22T17:59:12Z)
- Dynamic Mesh-Aware Radiance Fields [75.59025151369308]
This paper designs a two-way coupling between mesh and NeRF during rendering and simulation.
We show that a hybrid system approach outperforms alternatives in visual realism for mesh insertion.
arXiv Detail & Related papers (2023-09-08T20:18:18Z)
- ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting [9.145875902703345]
We introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections.
We first propose a novel neural renderer with decomposed rendering to learn the interaction between surface and environment lighting.
We then propose an SDF-based neural surface model that leverages this learned neural renderer to represent general scenes.
arXiv Detail & Related papers (2023-03-23T04:12:07Z)
- Aleth-NeRF: Low-light Condition View Synthesis with Concealing Fields [65.96818069005145]
Vanilla NeRF is viewer-centred: it simplifies the rendering process as light emission only from 3D locations in the viewing direction.
Inspired by the emission theory of the ancient Greeks, we make slight modifications to vanilla NeRF to train on multiple views of low-light scenes.
We introduce a surrogate concept, Concealing Fields, that reduces the transport of light during the volume rendering stage (see the sketch after this entry).
arXiv Detail & Related papers (2023-03-10T09:28:09Z)
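The Aleth-NeRF entry says Concealing Fields reduce light transport during volume rendering. As a rough, assumed illustration, the sketch below adds a non-negative per-sample concealing density to the transmittance term of standard NeRF compositing; the variable names and the exact formulation are guesses rather than the paper's equations.

```python
import torch

def composite_with_concealing(rgb, sigma, conceal, deltas):
    """Hypothetical sketch: standard NeRF alpha compositing where an extra
    non-negative 'concealing' density also attenuates transmittance.

    rgb:     (N, 3) per-sample colour along a ray
    sigma:   (N,)   per-sample volume density
    conceal: (N,)   per-sample concealing density (assumed form)
    deltas:  (N,)   distances between adjacent samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)               # opacity per sample
    # Transmittance attenuated by both the density and the concealing field.
    atten = torch.exp(-(sigma + conceal) * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones(1), atten[:-1]]), dim=0)     # T_i up to sample i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(dim=0)              # composited colour

color = composite_with_concealing(
    torch.rand(64, 3), torch.rand(64), 0.1 * torch.rand(64), torch.full((64,), 0.02))
print(color.shape)  # torch.Size([3])
```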
- 3D Scene Creation and Rendering via Rough Meshes: A Lighting Transfer Avenue [49.62477229140788]
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering.
We propose a lighting transfer network (LighTNet) to bridge neural field rendering (NFR) and physically-based rendering (PBR), such that they can benefit from each other.
arXiv Detail & Related papers (2022-11-27T13:31:00Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that formulates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)