DoubleField: Bridging the Neural Surface and Radiance Fields for
High-fidelity Human Rendering
- URL: http://arxiv.org/abs/2106.03798v2
- Date: Tue, 8 Jun 2021 13:22:11 GMT
- Title: DoubleField: Bridging the Neural Surface and Radiance Fields for
High-fidelity Human Rendering
- Authors: Ruizhi Shao, Hongwen Zhang, He Zhang, Yanpei Cao, Tao Yu, Yebin Liu
- Abstract summary: DoubleField is a representation combining the merits of both surface field and radiance field for high-fidelity human rendering.
DoubleField has a continuous but disentangled learning space for geometry and appearance modeling, which supports fast training, inference, and finetuning.
The efficacy of DoubleField is validated by the quantitative evaluations on several datasets and the qualitative results in a real-world sparse multi-view system.
- Score: 43.12198563879908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce DoubleField, a novel representation combining the merits of both
surface field and radiance field for high-fidelity human rendering. Within
DoubleField, the surface field and radiance field are associated together by a
shared feature embedding and a surface-guided sampling strategy. In this way,
DoubleField has a continuous but disentangled learning space for geometry and
appearance modeling, which supports fast training, inference, and finetuning.
To achieve high-fidelity free-viewpoint rendering, DoubleField is further
augmented to leverage ultra-high-resolution inputs, where a view-to-view
transformer and a transfer learning scheme are introduced for more efficient
learning and finetuning from sparse-view inputs at original resolutions. The
efficacy of DoubleField is validated by the quantitative evaluations on several
datasets and the qualitative results in a real-world sparse multi-view system,
showing its superior capability for photo-realistic free-viewpoint human
rendering. For code and demo video, please refer to our project page:
http://www.liuyebin.com/dbfield/dbfield.html.
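As a rough illustration of the shared-embedding and surface-guided-sampling ideas described in the abstract, the PyTorch sketch below queries a single backbone for a per-point feature, decodes it with separate surface (occupancy) and radiance (density + colour) heads, and concentrates fine radiance samples in a narrow band around the first occupancy crossing along each ray. All layer sizes, the 0.5 occupancy threshold, and the sampling band width are illustrative assumptions, not details of the released DoubleField code.

```python
import torch
import torch.nn as nn


class DoubleFieldSketch(nn.Module):
    """Toy DoubleField-style network: one shared point embedding feeds a
    surface (occupancy) head and a radiance (density + RGB) head.
    Layer sizes and head designs are illustrative, not the paper's."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature embedding
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        self.surface_head = nn.Linear(feat_dim, 1)         # occupancy logit
        self.radiance_head = nn.Sequential(                # density + colour
            nn.Linear(feat_dim + 3, feat_dim), nn.ReLU(),  # + view direction
            nn.Linear(feat_dim, 4),
        )

    def forward(self, pts, view_dirs):
        feat = self.backbone(pts)                          # shared embedding
        occ = torch.sigmoid(self.surface_head(feat))       # surface field
        raw = self.radiance_head(torch.cat([feat, view_dirs], dim=-1))
        sigma = torch.relu(raw[..., :1])                   # radiance field
        rgb = torch.sigmoid(raw[..., 1:])
        return occ, sigma, rgb


def surface_guided_samples(model, rays_o, rays_d, near=0.0, far=2.0,
                           n_coarse=32, n_fine=16, band=0.05):
    """Place fine samples in a narrow band around the predicted surface:
    coarse samples locate the first occupancy crossing, then fine samples
    concentrate the radiance-field evaluation near that depth."""
    t_coarse = torch.linspace(near, far, n_coarse)                      # (Nc,)
    pts = rays_o[:, None, :] + t_coarse[None, :, None] * rays_d[:, None, :]
    with torch.no_grad():
        occ, _, _ = model(pts, rays_d[:, None, :].expand_as(pts))
    # first sample along each ray where occupancy exceeds 0.5 (crude surface hit);
    # if no crossing is found, fall back to the far end of the ray
    hit = occ.squeeze(-1) > 0.5                                         # (R, Nc)
    idx = torch.arange(n_coarse).expand_as(hit)
    first = torch.where(hit, idx, torch.full_like(idx, n_coarse - 1)).min(dim=-1).values
    t_surf = t_coarse[first]                                            # (R,)
    offsets = torch.linspace(-band, band, n_fine)                       # (Nf,)
    t_fine = (t_surf[:, None] + offsets[None, :]).clamp(near, far)      # (R, Nf)
    return rays_o[:, None, :] + t_fine[..., None] * rays_d[:, None, :]


if __name__ == "__main__":
    model = DoubleFieldSketch()
    rays_o = torch.zeros(4, 3)
    rays_d = torch.nn.functional.normalize(torch.randn(4, 3), dim=-1)
    fine_pts = surface_guided_samples(model, rays_o, rays_d)
    print(fine_pts.shape)   # torch.Size([4, 16, 3])
```

The two heads reading from one backbone mirror the abstract's claim of a continuous but disentangled learning space: geometry and appearance share a feature embedding while remaining separately decodable.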
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926] (2024-03-28)
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
- Sur2f: A Hybrid Representation for High-Quality and Efficient Surface Reconstruction from Multi-view Images [41.81291587750352] (2024-01-08)
Multi-view surface reconstruction is an ill-posed inverse problem in 3D vision research.
Most existing methods rely either on explicit meshes or on implicit field functions, using volume rendering of the fields for reconstruction.
We propose a new hybrid representation, termed Sur2f, aiming to benefit from both representations in a complementary manner.
- ColNeRF: Collaboration for Generalizable Sparse Input Neural Radiance Field [89.54363625953044] (2023-12-14)
Collaborative Neural Radiance Fields (ColNeRF) is designed to work with sparse input.
ColNeRF is capable of capturing richer and more generalized scene representations.
Our approach also shows superior fine-tuning performance when adapting to new scenes.
- Feature 3DGS: Supercharging 3D Gaussian Splatting to Enable Distilled Feature Fields [54.482261428543985] (2023-12-06)
Methods that use neural radiance fields are versatile for traditional tasks such as novel view synthesis.
3D Gaussian splatting has shown state-of-the-art performance on real-time radiance field rendering.
We propose architectural and training changes to efficiently avert this problem.
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433] (2023-12-05)
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time frame rates (at least 36 FPS) at virtual-reality resolutions (2K x 2K).
- ImmersiveNeRF: Hybrid Radiance Fields for Unbounded Immersive Light Field Reconstruction [32.722973192853296] (2023-09-04)
This paper proposes a hybrid radiance field representation for immersive light field reconstruction.
We represent the foreground and background as two separate radiance fields with two different spatial mapping strategies.
We also contribute a novel immersive light field dataset, named THUImmersive, with the potential to enable 6DoF immersive rendering effects over much larger spaces.
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688] (2023-04-20)
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
- Light Field Neural Rendering [47.7586443731997] (2021-12-17)
Methods based on geometric reconstruction need only sparse views, but cannot accurately model non-Lambertian effects.
We introduce a model that combines the strengths and mitigates the limitations of these two directions.
Our model outperforms the state-of-the-art on multiple forward-facing and 360° datasets.
- Generative Occupancy Fields for 3D Surface-Aware Image Synthesis [123.11969582055382] (2021-11-01)
Generative Occupancy Fields (GOF) is a novel model based on generative radiance fields.
GOF can synthesize high-quality images with 3D consistency and simultaneously learn compact and smooth object surfaces.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.