Explicifying Neural Implicit Fields for Efficient Dynamic Human Avatar Modeling via a Neural Explicit Surface
- URL: http://arxiv.org/abs/2308.05112v1
- Date: Mon, 7 Aug 2023 07:17:18 GMT
- Title: Explicifying Neural Implicit Fields for Efficient Dynamic Human Avatar Modeling via a Neural Explicit Surface
- Authors: Ruiqi Zhang and Jie Chen and Qiang Wang
- Abstract summary: Implicit neural fields have advantages over traditional explicit representations in modeling dynamic 3D content.
The paper proposes utilizing Neural Explicit Surface (NES) to explicitly represent implicit neural fields.
NES performs similarly to previous 3D approaches, with greatly improved rendering speed and reduced memory cost.
- Score: 10.604108229704336
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a technique for efficiently modeling dynamic humans by
explicifying the implicit neural fields via a Neural Explicit Surface (NES).
Implicit neural fields have advantages over traditional explicit
representations in modeling dynamic 3D content from sparse observations and
effectively representing complex geometries and appearances. Implicit neural
fields defined in 3D space, however, are expensive to render because
volumetric rendering requires dense sampling along each ray, and their memory
use is wasteful when most of the modeled 3D space is empty. To overcome
these issues, the paper proposes utilizing Neural Explicit Surface (NES) to
explicitly represent implicit neural fields, improving memory and
computational efficiency. To achieve this, the paper creates a fully
differentiable conversion between the implicit neural fields and the explicit
rendering interface of NES, leveraging the strengths of both implicit and
explicit approaches. This conversion enables effective training of the hybrid
representation with implicit methods, while efficient rendering comes from
pairing the explicit interface with a newly proposed rasterization-based
neural renderer that queries the texture color only once, at the ray's first
interaction with the explicit surface, improving inference efficiency. NES
describes dynamic human geometry with pose-dependent neural implicit surface
deformation fields and dynamic neural textures, both defined in 2D space, a
more memory-efficient alternative to traditional 3D methods that reduces
redundancy and computational load. Comprehensive experiments show that NES
performs comparably to previous 3D approaches while greatly improving
rendering speed and reducing memory cost.
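For intuition, the sketch below illustrates the single-query rendering idea in PyTorch: the explicit surface is rasterized once to give each pixel a UV hit point, and the pose-conditioned neural texture is sampled exactly once per pixel. This is a minimal sketch under assumptions, not the authors' implementation: the names (DynamicNeuralTexture, render_once), the FiLM-style pose conditioning, and the external rasterizer producing the UV map (e.g., nvdiffrast or PyTorch3D) are all illustrative.

```python
# Minimal sketch (not the authors' code) of NES-style rendering: one texture
# query per pixel at the first ray-surface interaction, instead of dense
# volumetric sampling. Names and the FiLM pose conditioning are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicNeuralTexture(nn.Module):
    """A 2D neural texture modulated by body pose (a stand-in for the
    paper's dynamic neural textures)."""
    def __init__(self, channels=16, res=256, pose_dim=72):
        super().__init__()
        self.base = nn.Parameter(0.01 * torch.randn(1, channels, res, res))
        # Map the pose vector to per-channel scale and shift (FiLM-style).
        self.film = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU(),
                                  nn.Linear(128, 2 * channels))

    def forward(self, pose):  # pose: (B, pose_dim), e.g. an SMPL pose vector
        scale, shift = self.film(pose).chunk(2, dim=-1)
        b, c = scale.shape
        return self.base * (1 + scale.view(b, c, 1, 1)) + shift.view(b, c, 1, 1)

def render_once(texture, uv, hit_mask, decoder):
    """texture:  (B, C, H, W) pose-dependent neural texture.
    uv:       (B, Ho, Wo, 2) per-pixel surface UVs in [-1, 1], produced by
              rasterizing the explicit surface (rasterizer assumed).
    hit_mask: (B, Ho, Wo, 1), 1 where the pixel's ray hits the surface.
    decoder:  tiny MLP mapping sampled texture features to RGB.
    """
    feats = F.grid_sample(texture, uv, mode="bilinear", align_corners=False)
    rgb = decoder(feats.permute(0, 2, 3, 1))  # single query per pixel
    return rgb * hit_mask                     # background stays black

tex = DynamicNeuralTexture()
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                        nn.Linear(64, 3), nn.Sigmoid())
pose = torch.zeros(1, 72)          # neutral pose, for illustration
uv = torch.zeros(1, 128, 128, 2)   # placeholder UV raster
mask = torch.ones(1, 128, 128, 1)
image = render_once(tex(pose), uv, mask, decoder)  # (1, 128, 128, 3)
```

A volumetric renderer would instead evaluate an MLP at dozens of sample points along every ray; here the per-pixel cost is one bilinear texture lookup plus a tiny decoder, which is where the claimed speed and memory gains come from.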
Related papers
- Magnituder Layers for Implicit Neural Representations in 3D [23.135779936528333]
We introduce a novel neural network layer called the "magnituder."
By integrating magnituders into standard feed-forward layer stacks, we achieve improved inference speed and adaptability.
Our approach enables a zero-shot performance boost in trained implicit neural representation models.
arXiv Detail & Related papers (2024-10-13T08:06:41Z)
- DreamHOI: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors [4.697267141773321]
We present DreamHOI, a novel method for zero-shot synthesis of human-object interactions (HOIs).
We leverage text-to-image diffusion models trained on billions of image-caption pairs to generate realistic HOIs.
We validate our approach through extensive experiments, demonstrating its effectiveness in generating realistic HOIs.
arXiv Detail & Related papers (2024-09-12T17:59:49Z)
- Dynamic 3D Gaussian Fields for Urban Areas [60.64840836584623]
We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas.
We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas.
arXiv Detail & Related papers (2024-06-05T12:07:39Z)
- MeshXL: Neural Coordinate Field for Generative 3D Foundation Models [51.1972329762843]
We present a family of generative pre-trained auto-regressive models, which addresses the process of 3D mesh generation with modern large language model approaches.
MeshXL is able to generate high-quality 3D meshes and can also serve as a foundation model for various downstream applications.
arXiv Detail & Related papers (2024-05-31T14:35:35Z)
- A Refined 3D Gaussian Representation for High-Quality Dynamic Scene Reconstruction [2.022451212187598]
In recent years, Neural Radiance Fields (NeRF) has revolutionized three-dimensional (3D) reconstruction with its implicit representation.
3D Gaussian Splatting (3D-GS) has departed from the implicit representation of neural networks and instead directly represents scenes as point clouds with Gaussian-shaped distributions.
This paper proposes a refined 3D Gaussian representation for high-quality dynamic scene reconstruction.
Experimental results demonstrate that our method surpasses existing approaches in rendering quality and speed, while significantly reducing the memory usage associated with 3D-GS.
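As background for this entry, the sketch below shows the kind of explicit primitive 3D-GS optimizes. The field names are illustrative assumptions; as in the original 3D-GS parameterization, the covariance is factored into a rotation and per-axis scales so it stays positive semi-definite.

```python
# Minimal numpy sketch of the explicit primitive behind 3D-GS: a Gaussian
# with position, rotation, per-axis scale, opacity, and color. Field names
# are illustrative; full 3D-GS stores spherical-harmonic color coefficients.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    mean: np.ndarray       # (3,) center position
    quat: np.ndarray       # (4,) unit quaternion (w, x, y, z) orientation
    log_scale: np.ndarray  # (3,) log of per-axis standard deviations
    opacity: float         # alpha in [0, 1]
    color: np.ndarray      # (3,) RGB

    def covariance(self) -> np.ndarray:
        """Sigma = R S S^T R^T, positive semi-definite by construction."""
        w, x, y, z = self.quat / np.linalg.norm(self.quat)
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(np.exp(self.log_scale))
        return R @ S @ S.T @ R.T

g = Gaussian3D(mean=np.zeros(3), quat=np.array([1.0, 0.0, 0.0, 0.0]),
               log_scale=np.log([0.1, 0.1, 0.02]), opacity=0.8,
               color=np.array([0.7, 0.5, 0.4]))
print(g.covariance())  # identity rotation: diag(0.01, 0.01, 0.0004)
```

Rendering projects each covariance into screen space and alpha-composites the resulting splats, which is what makes 3D-GS fast but memory-hungry; reducing that memory usage is exactly what this paper targets.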
arXiv Detail & Related papers (2024-05-28T07:12:22Z)
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Multi-View Mesh Reconstruction with Neural Deferred Shading [0.8514420632209809]
State-of-the-art methods use both neural surface representations and neural shading.
We represent surfaces as triangle meshes and build a differentiable rendering pipeline around triangle rendering and neural shading.
We evaluate our method on a public 3D reconstruction dataset and show that it matches the reconstruction accuracy of traditional baselines while surpassing them in optimization speed.
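As context for this entry's pipeline, here is a minimal PyTorch sketch of deferred neural shading: a differentiable rasterizer (assumed here, e.g., nvdiffrast) fills a G-buffer with per-pixel positions and normals, and a small MLP shades the covered pixels. The class name and feature layout are illustrative, not the paper's exact architecture.

```python
# Minimal PyTorch sketch of deferred neural shading (illustrative): a
# differentiable rasterizer, assumed here, fills a G-buffer; a small MLP
# then shades each covered pixel.
import torch
import torch.nn as nn

class NeuralShader(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # Per-pixel input: 3D position + normal + view direction = 9 floats.
        self.mlp = nn.Sequential(
            nn.Linear(9, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, position, normal, view_dir, mask):
        # position/normal/view_dir: (H, W, 3); mask: (H, W, 1) coverage.
        x = torch.cat([position, normal, view_dir], dim=-1)
        return self.mlp(x) * mask  # shade covered pixels, leave rest black

shader = NeuralShader()
H = W = 64
gbuf = [torch.zeros(H, W, 3) for _ in range(3)]  # placeholder G-buffer
rgb = shader(*gbuf, torch.ones(H, W, 1))         # (H, W, 3)
```

Because the rasterizer and the shader are both differentiable, an image-space loss can update mesh vertices and shader weights jointly, which is how the surface is recovered from multi-view photographs.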
arXiv Detail & Related papers (2022-12-08T16:29:46Z)
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
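As context for this last entry, the sketch below shows plain sphere tracing, the standard loop for rendering an SDF: each ray advances by the distance the field reports until it reaches the surface. NGLOD's contribution is making the SDF query itself cheap via a sparse feature octree; the sketch abstracts that behind a generic callable, and all names are illustrative.

```python
# Minimal numpy sketch of sphere tracing, the standard loop for rendering an
# SDF. NGLOD makes each sdf() query cheap via a sparse feature octree; here
# sdf is any callable (a plain sphere below, purely for illustration).
import numpy as np

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, t_max=10.0):
    """Return the hit distance t along the ray, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)  # distance to the nearest surface
        if d < eps:                      # close enough: report a hit
            return t
        t += d                           # stepping by d can never overshoot
        if t > t_max:
            break
    return None

unit_sphere = lambda p: np.linalg.norm(p) - 1.0   # toy SDF; an MLP in NGLOD

t = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]),
                 unit_sphere)
print(t)  # ~2.0: the ray starts 3 units away and hits the unit sphere
```

Because every step is bounded by the true distance to the surface, the loop cannot overshoot; the cost is dominated by the number of SDF evaluations, which is what NGLOD's representation accelerates.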
arXiv Detail & Related papers (2021-01-26T18:50:22Z)