NAS-NeRF: Generative Neural Architecture Search for Neural Radiance
Fields
- URL: http://arxiv.org/abs/2309.14293v3
- Date: Mon, 11 Dec 2023 16:01:58 GMT
- Title: NAS-NeRF: Generative Neural Architecture Search for Neural Radiance
Fields
- Authors: Saeejith Nair, Yuhao Chen, Mohammad Javad Shafiee, Alexander Wong
- Abstract summary: Neural radiance fields (NeRFs) enable high-quality novel view synthesis, but their high computational complexity limits deployability.
We introduce NAS-NeRF, a generative neural architecture search strategy that generates compact, scene-specialized NeRF architectures.
Our method incorporates constraints on target metrics and budgets to guide the search towards architectures tailored for each scene.
- Score: 75.28756910744447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural radiance fields (NeRFs) enable high-quality novel view synthesis, but
their high computational complexity limits deployability. While existing
neural-based solutions strive for efficiency, they use one-size-fits-all
architectures regardless of scene complexity. The same architecture may be
unnecessarily large for simple scenes but insufficient for complex ones. Thus,
there is a need to dynamically optimize the neural network component of NeRFs
to achieve a balance between computational complexity and specific targets for
synthesis quality. We introduce NAS-NeRF, a generative neural architecture
search strategy that generates compact, scene-specialized NeRF architectures by
balancing architecture complexity and target synthesis quality metrics. Our
method incorporates constraints on target metrics and budgets to guide the
search towards architectures tailored for each scene. Experiments on the
Blender synthetic dataset show the proposed NAS-NeRF can generate architectures
up to 5.74$\times$ smaller, with 4.19$\times$ fewer FLOPs, and 1.93$\times$
faster on a GPU than baseline NeRFs, without suffering a drop in SSIM.
Furthermore, we illustrate that NAS-NeRF can also achieve architectures up to
23$\times$ smaller, with 22$\times$ fewer FLOPs, and 4.7$\times$ faster than
baseline NeRFs with only a 5.3% average SSIM drop. Our source code is also made
publicly available at https://saeejithnair.github.io/NAS-NeRF.
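The search described above can be illustrated with a minimal sketch: enumerate candidate NeRF MLP configurations, keep only those predicted to meet a per-scene quality target, and return the cheapest survivor. The cost and quality models below are toy stand-ins (the function names, the FLOP formula, and the saturating quality proxy are illustrative assumptions, not the paper's actual generator or metrics).

```python
import itertools

def flops_proxy(width, depth, samples_per_ray=64):
    # Rough FLOP count for one ray: depth fully-connected layers of
    # size width x width, evaluated at each sample point along the ray.
    return 2 * width * width * depth * samples_per_ray

def quality_proxy(width, depth):
    # Toy monotone quality model: larger networks score higher,
    # saturating toward 1.0 (stands in for a per-scene SSIM estimate).
    capacity = width * depth
    return capacity / (capacity + 4096)

def search(widths, depths, ssim_target):
    # Keep only architectures predicted to meet the quality target.
    feasible = [
        (w, d) for w, d in itertools.product(widths, depths)
        if quality_proxy(w, d) >= ssim_target
    ]
    if not feasible:
        return None
    # Among architectures meeting the target, minimize compute.
    return min(feasible, key=lambda wd: flops_proxy(*wd))

best = search(widths=[32, 64, 128, 256], depths=[2, 4, 8], ssim_target=0.2)
print(best)  # -> (128, 8): the cheapest configuration meeting the target
```

A real search would replace the exhaustive enumeration with the paper's generative strategy and the proxies with measured SSIM and profiled FLOPs, but the constraint structure (quality floor, compute budget) is the same.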
Related papers
- RNC: Efficient RRAM-aware NAS and Compilation for DNNs on Resource-Constrained Edge Devices [0.30458577208819987]
We aim to develop edge-friendly deep neural networks (DNNs) for accelerators based on resistive random-access memory (RRAM).
We propose an edge compilation and resource-constrained RRAM-aware neural architecture search (NAS) framework to search for optimized neural networks meeting specific hardware constraints.
The resulting model from speed-optimized NAS achieved a 5x-30x speedup.
arXiv Detail & Related papers (2024-09-27T15:35:36Z) - How Far Can We Compress Instant-NGP-Based NeRF? [45.88543996963832]
We introduce the Context-based NeRF Compression (CNC) framework to provide a storage-friendly NeRF representation.
We exploit hash collision and occupancy grids as strong prior knowledge for better context modeling.
We attain 86.7% and 82.3% storage size reduction against the SOTA NeRF compression method BiRF.
arXiv Detail & Related papers (2024-06-06T14:16:03Z) - DNA Family: Boosting Weight-Sharing NAS with Block-Wise Supervisions [121.05720140641189]
We develop a family of models with the distilling neural architecture (DNA) techniques.
Our proposed DNA models can rate all architecture candidates, as opposed to previous works that can only access a sub-search space using heuristic algorithms.
Our models achieve state-of-the-art top-1 accuracy of 78.9% and 83.6% on ImageNet for a mobile convolutional network and a small vision transformer, respectively.
arXiv Detail & Related papers (2024-03-02T22:16:47Z) - SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input
Images [62.64942825962934]
We present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis.
We propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only a few views.
SuRFNet employs partial SRFs from one or a few images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields.
arXiv Detail & Related papers (2022-12-18T14:56:22Z) - SinNeRF: Training Neural Radiance Fields on Complex Scenes from a Single
Image [85.43496313628943]
We present a Single View NeRF (SinNeRF) framework consisting of thoughtfully designed semantic and geometry regularizations.
SinNeRF constructs a semi-supervised learning process, where we introduce and propagate geometry pseudo labels.
Experiments are conducted on complex scene benchmarks, including NeRF synthetic dataset, Local Light Field Fusion dataset, and DTU dataset.
arXiv Detail & Related papers (2022-04-02T19:32:42Z) - D-DARTS: Distributed Differentiable Architecture Search [75.12821786565318]
Differentiable ARchiTecture Search (DARTS) is one of the most popular Neural Architecture Search (NAS) methods.
We propose D-DARTS, a novel solution that addresses this problem by nesting several neural networks at the cell level.
arXiv Detail & Related papers (2021-08-20T09:07:01Z) - Differentiable Neural Architecture Learning for Efficient Neural Network
Design [31.23038136038325]
We introduce a novel architecture parameterisation based on a scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
arXiv Detail & Related papers (2021-03-03T02:03:08Z) - Learning N:M Fine-grained Structured Sparse Neural Networks From Scratch [75.69506249886622]
Sparsity in Deep Neural Networks (DNNs) has been widely studied to compress and accelerate the models on resource-constrained environments.
In this paper, we are the first to study training from scratch an N:M fine-grained structured sparse network.
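The N:M pattern above can be sketched concretely: within every group of M consecutive weights, keep the N entries with the largest magnitude and zero the rest (e.g. 2:4 keeps half the weights). The helper below is an illustrative magnitude-pruning pass, not the paper's from-scratch training procedure.

```python
def nm_prune(weights, n=2, m=4):
    # Enforce N:M structured sparsity on a flat list of weights:
    # in each group of m consecutive values, retain the n entries
    # with the largest absolute value and zero out the others.
    pruned = list(weights)
    for start in range(0, len(pruned), m):
        group = pruned[start:start + m]
        keep = sorted(range(len(group)),
                      key=lambda i: abs(group[i]),
                      reverse=True)[:n]
        for i in range(len(group)):
            if i not in keep:
                pruned[start + i] = 0.0
    return pruned

print(nm_prune([0.9, -0.1, 0.4, -0.7, 0.2, 0.3, -0.8, 0.05]))
# -> [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, -0.8, 0.0]
```

The regular group structure is what lets hardware (e.g. sparse tensor cores) skip the zeroed entries at a fixed, predictable rate, unlike unstructured sparsity.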
arXiv Detail & Related papers (2021-02-08T05:55:47Z) - A Power-Efficient Binary-Weight Spiking Neural Network Architecture for
Real-Time Object Classification [1.5291703721641183]
We propose a binary-weight spiking neural network (BW-SNN) hardware architecture for low-power real-time object classification on edge platforms.
This design stores a full neural network on-chip, and hence requires no off-chip bandwidth.
arXiv Detail & Related papers (2020-03-12T11:25:00Z) - Deep Learning in Memristive Nanowire Networks [0.0]
A new hardware architecture, dubbed the MN3 (Memristive Nanowire Neural Network), was recently described as an efficient architecture for simulating very wide, sparse neural network layers.
We show that the MN3 is capable of performing composition, gradient propagation, and weight updates, which together allow it to function as a deep neural network.
arXiv Detail & Related papers (2020-03-03T20:11:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.