TensoRF: Tensorial Radiance Fields
- URL: http://arxiv.org/abs/2203.09517v1
- Date: Thu, 17 Mar 2022 17:59:59 GMT
- Title: TensoRF: Tensorial Radiance Fields
- Authors: Anpei Chen and Zexiang Xu and Andreas Geiger and Jingyi Yu and Hao Su
- Abstract summary: We present TensoRF, a novel approach to model and reconstruct radiance fields.
We model the radiance field of a scene as a 4D tensor, which represents a 3D voxel grid with per-voxel multi-channel features.
We show that TensoRF with CP decomposition achieves fast reconstruction (<30 min) with better rendering quality and even a smaller model size (<4 MB) compared to NeRF.
- Score: 74.16791688888081
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present TensoRF, a novel approach to model and reconstruct radiance
fields. Unlike NeRF that purely uses MLPs, we model the radiance field of a
scene as a 4D tensor, which represents a 3D voxel grid with per-voxel
multi-channel features. Our central idea is to factorize the 4D scene tensor
into multiple compact low-rank tensor components. We demonstrate that applying
traditional CP decomposition -- that factorizes tensors into rank-one
components with compact vectors -- in our framework leads to improvements over
vanilla NeRF. To further boost performance, we introduce a novel vector-matrix
(VM) decomposition that relaxes the low-rank constraints for two modes of a
tensor and factorizes tensors into compact vector and matrix factors. Beyond
superior rendering quality, our models with CP and VM decompositions lead to a
significantly lower memory footprint in comparison to previous and concurrent
works that directly optimize per-voxel features. Experimentally, we demonstrate
that TensoRF with CP decomposition achieves fast reconstruction (<30 min) with
better rendering quality and even a smaller model size (<4 MB) compared to
NeRF. Moreover, TensoRF with VM decomposition further boosts rendering quality
and outperforms previous state-of-the-art methods, while reducing the
reconstruction time (<10 min) and retaining a compact model size (<75 MB).
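To make the factorization concrete, here is a rough NumPy sketch (not the authors' implementation; grid resolution, channel count, and rank are made-up numbers) that reconstructs a dense XYZ feature grid from CP-style rank-one vector factors and from VM-style vector-matrix factors, and compares parameter counts against the dense grid:

```python
import numpy as np

# Hypothetical grid resolution, feature channels, and rank (illustrative only).
X, Y, Z, C = 32, 32, 32, 16   # voxel grid resolution and per-voxel channels
R = 8                         # number of components per decomposition
rng = np.random.default_rng(0)

# CP-style factorization: each component is an outer product of three spatial
# vectors and one channel vector (rank one in every mode).
vx = rng.standard_normal((R, X))
vy = rng.standard_normal((R, Y))
vz = rng.standard_normal((R, Z))
vc = rng.standard_normal((R, C))
grid_cp = np.einsum('rx,ry,rz,rc->xyzc', vx, vy, vz, vc)

# VM-style factorization: each component pairs a vector along one axis with a
# matrix spanning the other two axes; the three axis/plane pairings are summed.
vz2, mxy = rng.standard_normal((R, Z)), rng.standard_normal((R, X, Y))
vx2, myz = rng.standard_normal((R, X)), rng.standard_normal((R, Y, Z))
vy2, mxz = rng.standard_normal((R, Y)), rng.standard_normal((R, X, Z))
bc = rng.standard_normal((3 * R, C))  # channel (appearance) basis
grid_vm = (np.einsum('rz,rxy,rc->xyzc', vz2, mxy, bc[:R])
           + np.einsum('rx,ryz,rc->xyzc', vx2, myz, bc[R:2 * R])
           + np.einsum('ry,rxz,rc->xyzc', vy2, mxz, bc[2 * R:]))

# Parameter counts: the factors grow with the grid's sides, not its volume.
dense = X * Y * Z * C
cp = R * (X + Y + Z + C)
vm = R * (Z + X * Y + X + Y * Z + Y + X * Z) + 3 * R * C
print(f"dense: {dense:,}  CP factors: {cp:,}  VM factors: {vm:,}")
```

The point of the comparison is that CP factors scale roughly with the grid's side length and VM factors with one face of the grid, rather than with its full volume.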
Related papers
- SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models [58.5019443418822]
Diffusion models have been proven highly effective at generating high-quality images.
As these models grow larger, they require significantly more memory and suffer from higher latency.
In this work, we aim to accelerate diffusion models by quantizing their weights and activations to 4 bits.
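As a toy illustration of the general recipe suggested by the title (a full-precision low-rank branch absorbs the dominant, outlier-heavy structure so that only a well-behaved residual is quantized to 4 bits), here is a hedged NumPy sketch; it is not the paper's algorithm or code, and all names and sizes are invented:

```python
import numpy as np

def quantize_int4(x):
    """Symmetric 4-bit quantization with a single per-tensor scale (toy)."""
    scale = np.abs(x).max() / 7.0 + 1e-12   # int4 range [-8, 7]; use +/-7
    q = np.clip(np.round(x / scale), -8, 7)
    return q.astype(np.int8), scale

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W[:4, :4] += 20.0                 # inject a few large-magnitude "outliers"

# Low-rank branch (kept in full precision) absorbs the dominant structure.
r = 16
U, S, Vt = np.linalg.svd(W, full_matrices=False)
L1 = U[:, :r] * S[:r]             # (256, r)
L2 = Vt[:r]                       # (r, 256)
residual = W - L1 @ L2

# Quantize only the residual to 4 bits, then reassemble the weight.
q, scale = quantize_int4(residual)
W_hat = L1 @ L2 + q.astype(np.float32) * scale

naive_q, naive_scale = quantize_int4(W)
print("error, naive 4-bit:     ", np.abs(W - naive_q * naive_scale).mean())
print("error, low-rank + 4-bit:", np.abs(W - W_hat).mean())
```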
arXiv Detail & Related papers (2024-11-07T18:59:58Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose HybridNeRF, a method that leverages the strengths of both surface and volumetric representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time frame rates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- VQ-NeRF: Vector Quantization Enhances Implicit Neural Representations [25.88881764546414]
VQ-NeRF is an efficient pipeline for enhancing implicit neural representations via vector quantization.
We present an innovative multi-scale NeRF sampling scheme that concurrently optimizes the NeRF model at both compressed and original scales.
We incorporate a semantic loss function to improve the geometric fidelity and semantic coherence of our 3D reconstructions.
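A minimal sketch of the vector-quantization step itself (replacing continuous feature vectors with indices into a learned codebook); the codebook size, feature dimension, and variable names below are illustrative assumptions, not VQ-NeRF's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((256, 32))      # 256 codewords, 32-dim features
features = rng.standard_normal((10_000, 32))   # per-sample feature vectors

# Nearest-codeword lookup: squared distances via the usual expansion, then
# each feature vector is replaced by an 8-bit index into the codebook.
d2 = ((features ** 2).sum(1, keepdims=True)
      - 2.0 * features @ codebook.T
      + (codebook ** 2).sum(1))
indices = d2.argmin(axis=1).astype(np.uint8)   # compressed representation
decoded = codebook[indices]                    # approximate features at render time

# Storage drops from 32 floats per sample to 1 byte plus a shared codebook.
print("mean quantization error:", np.abs(features - decoded).mean())
```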
arXiv Detail & Related papers (2023-10-23T01:41:38Z)
- Strivec: Sparse Tri-Vector Radiance Fields [40.66438698104296]
Strivec is a novel representation that models a 3D scene as a radiance field with sparsely distributed and compactly factorized local tensor feature grids.
We demonstrate that our model can achieve better rendering quality while using significantly fewer parameters than previous methods.
arXiv Detail & Related papers (2023-07-25T03:30:09Z)
- Low-rank Tensor Assisted K-space Generative Model for Parallel Imaging Reconstruction [14.438899814473446]
We present a new idea, low-rank tensor assisted k-space generative model (LR-KGM) for parallel imaging reconstruction.
We transform the original prior information into high-dimensional prior information for learning.
Experimental comparisons with state-of-the-art methods demonstrate that the proposed LR-KGM achieves better performance.
arXiv Detail & Related papers (2022-12-11T13:34:43Z)
- D-TensoRF: Tensorial Radiance Fields for Dynamic Scenes [2.587781533364185]
We present D-TensoRF, a tensorial radiance field for dynamic scenes.
We decompose the grid either into rank-one vector components (CP decomposition) or low-rank matrix components (the newly proposed MM decomposition).
We show that D-TensoRF with CP decomposition and with MM decomposition both achieve short training times and low memory footprints.
arXiv Detail & Related papers (2022-12-05T15:57:55Z)
- Quaternion Factorization Machines: A Lightweight Solution to Intricate Feature Interaction Modelling [76.89779231460193]
The factorization machine (FM) is capable of automatically learning high-order interactions among features to make predictions without the need for manual feature engineering.
We propose the quaternion factorization machine (QFM) and quaternion neural factorization machine (QNFM) for sparse predictive analytics.
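As a rough illustration of what quaternion-valued feature interaction means, the sketch below computes the Hamilton product of two quaternion embeddings, which QFM-style models use in place of an ordinary dot product; the embedding values are invented, and pooling and prediction layers are omitted:

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of quaternions p = (a, b, c, d) and q = (w, x, y, z)."""
    a, b, c, d = p
    w, x, y, z = q
    return np.array([
        a * w - b * x - c * y - d * z,
        a * x + b * w + c * z - d * y,
        a * y - b * z + c * w + d * x,
        a * z + b * y - c * x + d * w,
    ])

# Two hypothetical quaternion-valued feature embeddings.
p = np.array([0.5, -1.0, 0.3, 0.2])
q = np.array([1.2, 0.4, -0.7, 0.9])
print(hamilton_product(p, q))   # pairwise interaction term
```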
arXiv Detail & Related papers (2021-04-05T00:02:36Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
- Compressing Recurrent Neural Networks Using Hierarchical Tucker Tensor Decomposition [39.76939368675827]
Recurrent Neural Networks (RNNs) have been widely used in sequence analysis and modeling.
RNNs typically require very large model sizes when processing high-dimensional data.
We propose to develop compact RNN models using Hierarchical Tucker (HT) decomposition.
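As a simplified stand-in for the hierarchical Tucker format (which recursively factorizes the core further), the sketch below applies a plain Tucker factorization via higher-order SVD to a reshaped weight matrix and compares parameter counts; shapes and ranks are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical RNN input-to-hidden weight matrix, reshaped into a 4-way tensor.
W = rng.standard_normal((256, 256))
T = W.reshape(16, 16, 16, 16)
ranks = (8, 8, 8, 8)

# Higher-order SVD: one factor matrix per tensor mode.
factors = []
for mode, r in enumerate(ranks):
    unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
    factors.append(U[:, :r])
U1, U2, U3, U4 = factors

# Core tensor and low-rank reconstruction of the weight (mechanics only;
# a random weight will not be well approximated at low rank).
core = np.einsum('abcd,ai,bj,ck,dl->ijkl', T, U1, U2, U3, U4)
T_hat = np.einsum('ijkl,ai,bj,ck,dl->abcd', core, U1, U2, U3, U4)

dense = T.size
compressed = core.size + sum(f.size for f in factors)
print(f"params: dense={dense:,}  compressed={compressed:,}")
print("mean reconstruction error:", np.abs(T - T_hat).mean())
```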
arXiv Detail & Related papers (2020-05-09T05:15:20Z)
- Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.