MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient
Neural Field Rendering on Mobile Architectures
- URL: http://arxiv.org/abs/2208.00277v5
- Date: Tue, 30 May 2023 03:49:00 GMT
- Title: MobileNeRF: Exploiting the Polygon Rasterization Pipeline for Efficient
Neural Field Rendering on Mobile Architectures
- Authors: Zhiqin Chen, Thomas Funkhouser, Peter Hedman, Andrea Tagliasacchi
- Abstract summary: Neural Radiance Fields (NeRFs) have demonstrated an amazing ability to synthesize images of 3D scenes from novel views.
They rely upon specialized rendering algorithms based on ray marching that are mismatched to the capabilities of widely deployed graphics hardware.
This paper introduces a new NeRF representation based on textured polygons that can synthesize novel images efficiently with standard pipelines.
- Score: 22.557877400262402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Fields (NeRFs) have demonstrated an amazing ability to
synthesize images of 3D scenes from novel views. However, they rely upon
specialized volumetric rendering algorithms based on ray marching that are
mismatched to the capabilities of widely deployed graphics hardware. This paper
introduces a new NeRF representation based on textured polygons that can
synthesize novel images efficiently with standard rendering pipelines. The NeRF
is represented as a set of polygons with textures representing binary opacities
and feature vectors. Traditional rendering of the polygons with a z-buffer
yields an image with features at every pixel, which are interpreted by a small,
view-dependent MLP running in a fragment shader to produce a final pixel color.
This approach enables NeRFs to be rendered with the traditional polygon
rasterization pipeline, which provides massive pixel-level parallelism,
achieving interactive frame rates on a wide range of compute platforms,
including mobile phones.
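To make the deferred design concrete, here is a minimal NumPy sketch of the second stage: the rasterizer is assumed to have already filled a per-pixel feature buffer from the polygon textures, and a tiny view-dependent MLP (standing in for the fragment shader) decodes each pixel's color. The buffer shapes, layer sizes, and names (`feature_buffer`, `FEAT_DIM`, `HIDDEN`, and so on) are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

# Illustrative sizes only; MobileNeRF bakes a small view-dependent MLP
# into a fragment shader, but these exact dimensions are assumptions.
FEAT_DIM = 8   # length of the feature vector stored in the textures
HIDDEN = 16    # hidden width of the tiny per-pixel MLP
H, W = 4, 4    # tiny image for the demo

rng = np.random.default_rng(0)

# Stand-ins for the rasterizer outputs: z-buffered features + view dirs.
feature_buffer = rng.standard_normal((H, W, FEAT_DIM)).astype(np.float32)
view_dirs = rng.standard_normal((H, W, 3)).astype(np.float32)
view_dirs /= np.linalg.norm(view_dirs, axis=-1, keepdims=True)

# Toy MLP weights; in the real pipeline these are baked into the shader.
W1 = rng.standard_normal((FEAT_DIM + 3, HIDDEN)).astype(np.float32)
W2 = rng.standard_normal((HIDDEN, 3)).astype(np.float32)

def shade(features, dirs):
    """Per-pixel 'fragment shader': features + view direction -> RGB."""
    x = np.concatenate([features, dirs], axis=-1)  # (H, W, FEAT_DIM + 3)
    h = np.maximum(x @ W1, 0.0)                    # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))         # sigmoid RGB in [0, 1]

image = shade(feature_buffer, view_dirs)
print(image.shape)  # (4, 4, 3): one decoded color per rasterized pixel
```

On device this per-pixel loop is what the GPU parallelizes for free: the z-buffer resolves visibility during rasterization, and the baked MLP weights evaluate inside a fragment shader, so no ray marching is required.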
Related papers
- Plenoptic PNG: Real-Time Neural Radiance Fields in 150 KB [29.267039546199094]
This paper aims to encode a 3D scene into an extremely compact representation from 2D images.
This enables real-time transmission, decoding, and rendering across various platforms.
arXiv Detail & Related papers (2024-09-24T03:06:22Z)
- 3D Gaussian Ray Tracing: Fast Tracing of Particle Scenes [50.36933474990516]
This work considers ray tracing the particles, building a bounding volume hierarchy and casting a ray for each pixel using high-performance ray tracing hardware.
To efficiently handle large numbers of semi-transparent particles, we describe a specialized algorithm which encapsulates particles with bounding meshes.
Experiments demonstrate the speed and accuracy of our approach, as well as several applications in computer graphics and vision.
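The heart of such a renderer is the per-ray compositing loop: the BVH traversal (done in hardware in this work) yields the particles a ray hits, and the hits are blended front to back. Below is a minimal Python sketch of that loop; the `Hit` record and its fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    t: float      # ray depth of the intersection
    alpha: float  # particle opacity at the hit point, in [0, 1]
    rgb: tuple    # particle color at the hit point

def composite(hits):
    """Front-to-back alpha compositing of particle hits along one ray."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0                        # light not yet absorbed
    for h in sorted(hits, key=lambda h: h.t):  # nearest hit first
        weight = transmittance * h.alpha       # this hit's contribution
        for c in range(3):
            color[c] += weight * h.rgb[c]
        transmittance *= 1.0 - h.alpha
        if transmittance < 1e-3:               # early ray termination
            break
    return color, transmittance

# A green particle in front of a red one: the nearer green hit dominates.
hits = [Hit(t=2.0, alpha=0.5, rgb=(1.0, 0.0, 0.0)),
        Hit(t=1.0, alpha=0.3, rgb=(0.0, 1.0, 0.0))]
print(composite(hits))
```

Terminating once the transmittance is negligible is a standard way to keep scenes with many semi-transparent hits tractable; the bounding meshes mentioned above presumably serve to make the hardware intersection tests against fuzzy particles cheap.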
arXiv Detail & Related papers (2024-07-09T17:59:30Z)
- OpenNeRF: Open Set 3D Neural Scene Segmentation with Pixel-Wise Features and Rendered Novel Views [90.71215823587875]
We propose OpenNeRF, which naturally operates on posed images and directly encodes the VLM features within the NeRF.
Our work shows that using pixel-wise VLM features results in an overall less complex architecture without the need for additional DINO regularization.
For 3D point cloud segmentation on the Replica dataset, OpenNeRF outperforms recent open-vocabulary methods such as LERF and OpenScene by at least +4.9 mIoU.
arXiv Detail & Related papers (2024-04-04T17:59:08Z)
- Volumetric Rendering with Baked Quadrature Fields [34.280932843055446]
We propose a novel representation for non-opaque scenes that enables fast inference by utilizing textured polygons.
Our method integrates easily with existing graphics frameworks, enabling rendering speeds of over 100 frames per second for a $1920\times1080$ image.
arXiv Detail & Related papers (2023-12-02T20:45:18Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering [84.37776381343662]
Mip-NeRF introduces a multiscale representation that encodes scale information by casting conical frustums instead of rays.
We propose mip voxel grids (Mip-VoG), an explicit multiscale representation for real-time anti-aliasing rendering.
Our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously.
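As a rough analogue of what a multiscale grid buys, the sketch below builds a mip pyramid over a dense 3D feature grid and blends the two levels that bracket a sample's footprint, the same heuristic texture mipmapping uses. The level-selection formula, grid layout, and function names are generic assumptions, not Mip-VoG's specifics.

```python
import numpy as np

def build_mip_grids(base, n_levels):
    """Mip pyramid over a dense 3D feature grid via 2x average pooling
    (assumes power-of-two spatial sizes)."""
    grids = [base]
    for _ in range(n_levels - 1):
        g = grids[-1]
        d, h, w = g.shape[0] // 2, g.shape[1] // 2, g.shape[2] // 2
        grids.append(g.reshape(d, 2, h, 2, w, 2, -1).mean(axis=(1, 3, 5)))
    return grids

def sample_mip(grids, idx, footprint, base_voxel_size=1.0):
    """Nearest-voxel lookup blended across the two mip levels that
    bracket the sample's world-space footprint."""
    level = np.clip(np.log2(max(footprint / base_voxel_size, 1e-6)),
                    0, len(grids) - 1)
    lo, hi = int(np.floor(level)), int(np.ceil(level))

    def lookup(l):
        g = grids[l]
        i = np.minimum(np.asarray(idx) // 2 ** l,
                       np.array(g.shape[:3]) - 1)  # clamp to grid bounds
        return g[tuple(i)]

    frac = level - lo
    return (1 - frac) * lookup(lo) + frac * lookup(hi)

base = np.random.rand(8, 8, 8, 4).astype(np.float32)
grids = build_mip_grids(base, 3)
# A distant (large-footprint) sample reads a coarser, pre-filtered level.
print(sample_mip(grids, idx=(5, 2, 7), footprint=2.5))
```

Reading pre-filtered coarser levels for minified samples is what suppresses the aliasing a single-scale grid exhibits, at the modest cost of storing the pyramid.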
arXiv Detail & Related papers (2023-04-20T04:05:22Z)
- Grid-guided Neural Radiance Fields for Large Urban Scenes [146.06368329445857]
Recent approaches propose to geographically divide the scene and adopt multiple sub-NeRFs to model each region individually.
An alternative solution is to use a feature grid representation, which is computationally efficient and can naturally scale to a large scene.
We present a new framework that realizes high-fidelity rendering on large urban scenes while being computationally efficient.
arXiv Detail & Related papers (2023-03-24T13:56:45Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
- Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315]
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
arXiv Detail & Related papers (2020-12-03T18:59:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.