MomentsNeRF: Leveraging Orthogonal Moments for Few-Shot Neural Rendering
- URL: http://arxiv.org/abs/2407.02668v1
- Date: Tue, 2 Jul 2024 21:02:48 GMT
- Title: MomentsNeRF: Leveraging Orthogonal Moments for Few-Shot Neural Rendering
- Authors: Ahmad AlMughrabi, Ricardo Marques, Petia Radeva
- Abstract summary: We propose MomentsNeRF, a novel framework for one- and few-shot neural rendering.
Our architecture offers a new transfer learning method to train on multiple scenes.
Our approach is the first to successfully harness features extracted from Gabor and Zernike moments.
- Score: 4.6786468967610055
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose MomentsNeRF, a novel framework for one- and few-shot neural rendering that predicts a neural representation of a 3D scene using Orthogonal Moments. Our architecture offers a new transfer learning method to train on multiple scenes and incorporates a per-scene optimization using one or a few images at test time. Our approach is the first to successfully harness features extracted from Gabor and Zernike moments, seamlessly integrating them into the NeRF architecture. We show that MomentsNeRF performs better in synthesizing images with complex textures and shapes, achieving significant noise reduction, artifact elimination, and completion of missing parts compared to recent one- and few-shot neural rendering frameworks. Extensive experiments on the DTU and ShapeNet datasets show that MomentsNeRF improves the state of the art by 3.39 dB PSNR, 11.1% SSIM, 17.9% LPIPS, and 8.3% DISTS. Moreover, it achieves state-of-the-art performance for both novel view synthesis and single-image 3D view reconstruction. The source code is accessible at: https://amughrabi.github.io/momentsnerf/.
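As a concrete illustration of the feature types the abstract refers to, the sketch below computes Zernike moments of an image patch and responses to a small Gabor filter bank in plain NumPy. This is a minimal sketch, not the authors' code: the patch size, the chosen moment orders, the filter-bank parameters, and all function names are illustrative assumptions; how MomentsNeRF actually injects these features into the NeRF backbone is detailed in the paper itself.

```python
import numpy as np
from math import factorial

def zernike_moment(patch, n, m):
    """Complex Zernike moment A_{n,m} of a square grayscale patch,
    evaluated over the unit disk inscribed in the patch."""
    assert abs(m) <= n and (n - abs(m)) % 2 == 0
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = 2.0 * xs / (w - 1) - 1.0           # map pixels to [-1, 1]
    y = 2.0 * ys / (h - 1) - 1.0
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0
    # radial polynomial R_{n,|m|}(rho)
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + abs(m)) // 2 - k)
                * factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    basis = R * np.exp(-1j * m * theta)     # conjugate basis V*_{n,m}
    dA = (2.0 / (w - 1)) * (2.0 / (h - 1))  # pixel area in normalized coords
    return (n + 1) / np.pi * np.sum(patch[inside] * basis[inside]) * dA

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lambd=6.0, gamma=0.5, psi=0.0):
    """Real part of a 2D Gabor filter."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lambd + psi))

def moment_features(patch, orders=((0, 0), (1, 1), (2, 0), (2, 2))):
    """Concatenate Zernike magnitudes and Gabor responses into one vector."""
    z = [abs(zernike_moment(patch, n, m)) for n, m in orders]
    g = [float(np.sum(patch * gabor_kernel(size=patch.shape[0], theta=t)))
         for t in (0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
    return np.array(z + g, dtype=np.float32)

# e.g. an 8-D feature vector for a 15x15 patch
feat = moment_features(np.random.rand(15, 15).astype(np.float32))
```

Zernike moments give rotation-robust global descriptors of the patch, while the Gabor responses capture oriented texture, which is consistent with the paper's claim of improved synthesis for complex textures and shapes.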
Related papers
- Splatfacto-W: A Nerfstudio Implementation of Gaussian Splatting for Unconstrained Photo Collections [25.154665328053333]
We introduce Splatfacto-W, an approach that integrates per-Gaussian neural color features and per-image appearance embeddings into the rendering process.
Our method improves the Peak Signal-to-Noise Ratio (PSNR) by an average of 5.3 dB compared to 3DGS, enhances training speed by 150 times compared to NeRF-based methods, and achieves a similar rendering speed to 3DGS.
arXiv Detail & Related papers (2024-07-17T04:02:54Z) - Splatter Image: Ultra-Fast Single-View 3D Reconstruction [67.96212093828179]
Splatter Image is based on Gaussian Splatting, which allows fast and high-quality reconstruction of 3D scenes from multiple images.
We learn a neural network that, at test time, performs reconstruction in a feed-forward manner, at 38 FPS.
On several synthetic, real, multi-category and large-scale benchmark datasets, we achieve better results in terms of PSNR, LPIPS, and other metrics while training and evaluating much faster than prior works.
arXiv Detail & Related papers (2023-12-20T16:14:58Z) - S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields [46.9880016170926]
We show that the Stochastic Structural SIMilarity (S3IM) loss processes multiple data points as a whole set instead of processing multiple inputs independently.
Our experiments demonstrate the unreasonable effectiveness of S3IM in improving NeRF and neural surface representation nearly for free.
arXiv Detail & Related papers (2023-08-14T09:45:28Z) - Improving Neural Radiance Fields with Depth-aware Optimization for Novel View Synthesis [12.3338393483795]
We propose SfMNeRF, a method to better synthesize novel views as well as reconstruct the 3D-scene geometry.
SfMNeRF employs epipolar, photometric-consistency, depth-smoothness, and position-of-matches constraints to explicitly reconstruct the 3D-scene structure.
Experiments on two public datasets demonstrate that SfMNeRF surpasses state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-11T13:37:17Z) - View Synthesis with Sculpted Neural Points [64.40344086212279]
Implicit neural representations have achieved impressive visual quality but have drawbacks in computational efficiency.
We propose a new approach that performs view synthesis using point clouds.
It is the first point-based method to achieve better visual quality than NeRF while being more than 100x faster in rendering speed.
arXiv Detail & Related papers (2022-05-12T03:54:35Z) - InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that arises from insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z) - Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks, collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z) - MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z) - pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or a few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
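MomentsNeRF addresses the same one- and few-shot setting as the pixelNeRF entry above, so a minimal PyTorch sketch of that image-conditioned NeRF pattern is given below: a CNN feature map is sampled at the projection of each 3D query point and concatenated with the point's coordinates before the NeRF MLP. This is not the released pixelNeRF code; the `ConditionedNeRF` class, the toy encoder, the MLP sizes, and the `project` camera model are simplified assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedNeRF(nn.Module):
    """Toy image-conditioned radiance field: (xyz, sampled image feature) -> (rgb, sigma)."""
    def __init__(self, feat_dim=64):
        super().__init__()
        # small CNN encoder producing a feature map at 1/4 resolution
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, image, points, project):
        """
        image:   (1, 3, H, W) input view
        points:  (N, 3) query points in the input camera's frame
        project: maps (N, 3) points to (N, 2) pixel coordinates in [-1, 1]
        """
        feats = self.encoder(image)                              # (1, C, H/4, W/4)
        uv = project(points).view(1, -1, 1, 2)                   # sampling grid
        sampled = F.grid_sample(feats, uv, align_corners=True)   # (1, C, N, 1)
        sampled = sampled.squeeze(0).squeeze(-1).t()             # (N, C)
        out = self.mlp(torch.cat([points, sampled], dim=-1))     # condition the MLP
        return torch.sigmoid(out[:, :3]), F.relu(out[:, 3])      # rgb, sigma

# usage with a dummy pinhole projection (unit focal length assumed)
def project(points):
    return (points[:, :2] / points[:, 2:3].clamp(min=1e-6)).clamp(-1, 1)

pts = torch.cat([torch.rand(128, 2) - 0.5, torch.rand(128, 1) + 1.0], dim=-1)
rgb, sigma = ConditionedNeRF()(torch.rand(1, 3, 64, 64), pts, project)
```

The sampled image feature is what lets a single network generalize across scenes from one or a few views; per-scene fine-tuning at test time, as described in the MomentsNeRF abstract, can then start from this conditioned prediction.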