HyperNeRF: A Higher-Dimensional Representation for Topologically Varying
Neural Radiance Fields
- URL: http://arxiv.org/abs/2106.13228v1
- Date: Thu, 24 Jun 2021 17:59:03 GMT
- Title: HyperNeRF: A Higher-Dimensional Representation for Topologically Varying
Neural Radiance Fields
- Authors: Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T. Barron, Sofien
Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, Steven M. Seitz
- Abstract summary: A common approach to reconstruct non-rigid scenes is through the use of a learned deformation field mapping from coordinates in each input image into a canonical template coordinate space.
We address this limitation by lifting NeRFs into a higher dimensional space, and by representing the 5D radiance field corresponding to each individual input image as a slice through this "hyper-space".
We show that our method, which we dub HyperNeRF, outperforms existing methods on both tasks by significant margins.
- Score: 45.8031461350167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRF) are able to reconstruct scenes with
unprecedented fidelity, and various recent works have extended NeRF to handle
dynamic scenes. A common approach to reconstruct such non-rigid scenes is
through the use of a learned deformation field mapping from coordinates in each
input image into a canonical template coordinate space. However, these
deformation-based approaches struggle to model changes in topology, as
topological changes require a discontinuity in the deformation field, but these
deformation fields are necessarily continuous. We address this limitation by
lifting NeRFs into a higher dimensional space, and by representing the 5D
radiance field corresponding to each individual input image as a slice through
this "hyper-space". Our method is inspired by level set methods, which model
the evolution of surfaces as slices through a higher dimensional surface. We
evaluate our method on two tasks: (i) interpolating smoothly between "moments",
i.e., configurations of the scene, seen in the input images while maintaining
visual plausibility, and (ii) novel-view synthesis at fixed moments. We show
that our method, which we dub HyperNeRF, outperforms existing methods on both
tasks by significant margins. Compared to Nerfies, HyperNeRF reduces average
error rates by 8.6% for interpolation and 8.8% for novel-view synthesis, as
measured by LPIPS.
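To make the slicing idea concrete, below is a minimal sketch, not the authors' implementation (which additionally uses positional encodings, a deformation field, and a spatially varying slicing network): each input image i owns a learned ambient coordinate w_i, the radiance MLP consumes the lifted point (x, y, z, w), and rendering image i means evaluating the field on the slice w = w_i. All names, layer sizes, and the 2-D ambient dimension are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HyperSpaceNeRF(nn.Module):
    """Radiance field lifted into (3 + ambient_dim)-D hyper-space (sketch)."""
    def __init__(self, num_images: int, ambient_dim: int = 2, hidden: int = 256):
        super().__init__()
        # One learned ambient coordinate per input image: its slice location.
        self.ambient = nn.Embedding(num_images, ambient_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + ambient_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, xyz: torch.Tensor, image_ids: torch.Tensor):
        w = self.ambient(image_ids)       # (N, ambient_dim): slice coordinates
        h = torch.cat([xyz, w], dim=-1)   # lift 3D samples into hyper-space
        out = self.mlp(h)
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma
```

Under this sketch, interpolating between two images' ambient codes (e.g., blending `model.ambient.weight[i]` and `model.ambient.weight[j]`) slides smoothly between slices, which corresponds to the "moment" interpolation task evaluated above.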
Related papers
- DreamHOI: Subject-Driven Generation of 3D Human-Object Interactions with Diffusion Priors [4.697267141773321]
We present DreamHOI, a novel method for zero-shot synthesis of human-object interactions (HOIs).
We leverage text-to-image diffusion models trained on billions of image-caption pairs to generate realistic HOIs.
We validate our approach through extensive experiments, demonstrating its effectiveness in generating realistic HOIs.
arXiv Detail & Related papers (2024-09-12T17:59:49Z)
- InterNeRF: Scaling Radiance Fields via Parameter Interpolation [36.014610797521605]
We propose InterNeRF, a novel architecture for rendering a target view using a subset of the model's parameters.
We demonstrate significant improvements in multi-room scenes while remaining competitive on standard benchmarks.
arXiv Detail & Related papers (2024-06-17T16:55:22Z)
- Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
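The Hyper-VolTran summary above pairs a signed distance function with HyperNetworks; as a toy illustration of that general pattern (not the paper's actual architecture), a conditioning image embedding can predict the weights of a small SDF MLP. All sizes and names here are hypothetical.

```python
import torch
import torch.nn as nn

class HyperSDF(nn.Module):
    """Toy hypernetwork: an image embedding generates one SDF layer's weights."""
    def __init__(self, embed_dim: int = 64, hidden: int = 32):
        super().__init__()
        self.hidden = hidden
        # Predict the weight matrix and bias of the SDF layer from the embedding.
        self.w_gen = nn.Linear(embed_dim, hidden * 3)
        self.b_gen = nn.Linear(embed_dim, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, xyz: torch.Tensor, image_embed: torch.Tensor):
        W = self.w_gen(image_embed).view(self.hidden, 3)  # generated weights
        b = self.b_gen(image_embed)                       # generated bias
        h = torch.relu(xyz @ W.t() + b)                   # (N, hidden)
        return self.head(h)                               # signed distance
```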
arXiv Detail & Related papers (2023-12-24T08:42:37Z)
- Forward Flow for Novel View Synthesis of Dynamic Scenes [97.97012116793964]
We propose a neural radiance field (NeRF) approach for novel view synthesis of dynamic scenes using forward warping.
Our method outperforms existing methods in both novel view rendering and motion modeling.
arXiv Detail & Related papers (2023-09-29T16:51:06Z)
- RecRecNet: Rectangling Rectified Wide-Angle Images by Thin-Plate Spline Model and DoF-based Curriculum Learning [62.86400614141706]
We propose a new learning model, i.e., the Rectangling Rectification Network (RecRecNet).
Our model can flexibly warp the source structure to the target domain and achieves an end-to-end unsupervised deformation.
Experiments show the superiority of our solution over the compared methods on both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2023-01-04T15:12:57Z)
- Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields [36.09829614806658]
We propose L2G-NeRF, a Local-to-Global registration method for Neural Radiance Fields.
Pixel-wise local alignment is learned in an unsupervised way via a deep network.
Our method outperforms the current state-of-the-art in terms of high-fidelity reconstruction and resolving large camera pose misalignment.
arXiv Detail & Related papers (2022-11-21T14:43:16Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
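A schematic sketch of the decoupling described above, with hypothetical names and deliberately simplified losses (the actual CLONeR objective and ray sampling are more involved): one MLP learns geometry supervised by LiDAR, while a second learns appearance supervised by camera pixels.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim: int, out_dim: int, hidden: int = 128) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

occupancy_mlp = make_mlp(3, 1)      # xyz -> occupancy logit (LiDAR-supervised)
color_mlp = make_mlp(3 + 3, 3)      # xyz + view direction -> RGB (camera-supervised)

def losses(xyz, viewdir, lidar_occ, pixel_rgb):
    # LiDAR returns supervise occupancy; camera pixels supervise color.
    occ_loss = nn.functional.binary_cross_entropy_with_logits(
        occupancy_mlp(xyz).squeeze(-1), lidar_occ)
    rgb_loss = nn.functional.mse_loss(
        torch.sigmoid(color_mlp(torch.cat([xyz, viewdir], -1))), pixel_rgb)
    return occ_loss, rgb_loss
```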
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- DeVRF: Fast Deformable Voxel Radiance Fields for Dynamic Scenes [27.37830742693236]
We present DeVRF, a novel representation to accelerate learning dynamic radiance fields.
Experiments demonstrate that DeVRF achieves two orders of magnitude speedup with on-par high-fidelity results.
arXiv Detail & Related papers (2022-05-31T12:13:54Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
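For contrast with HyperNeRF's slicing, here is a minimal sketch of the deformation-to-canonical design that D-NeRF (and Nerfies) exemplify, under the same simplifying assumptions as the earlier sketch (no positional encoding, illustrative sizes): a time-conditioned network warps each sample into a shared canonical radiance field.

```python
import torch
import torch.nn as nn

class DeformableNeRF(nn.Module):
    """Canonical field plus time-conditioned deformation (sketch, not D-NeRF's code)."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.deform = nn.Sequential(      # (x, t) -> displacement into canonical space
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )
        self.canonical = nn.Sequential(   # canonical x -> RGB + density
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, xyz: torch.Tensor, t: torch.Tensor):
        # t: (N, 1) time values, e.g., normalized to [0, 1].
        dx = self.deform(torch.cat([xyz, t], dim=-1))
        out = self.canonical(xyz + dx)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])
```

Because the displacement is a continuous function of t, such models cannot represent the discontinuities that topological changes require, which is exactly the limitation the HyperNeRF abstract addresses by lifting the field into a higher-dimensional space.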
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.