Neural Volume Super-Resolution
- URL: http://arxiv.org/abs/2212.04666v2
- Date: Fri, 5 May 2023 20:32:30 GMT
- Title: Neural Volume Super-Resolution
- Authors: Yuval Bahat, Yuxuan Zhang, Hendrik Sommerhoff, Andreas Kolb and Felix Heide
- Abstract summary: We propose a neural super-resolution network that operates directly on the volumetric representation of the scene.
To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes.
We validate the proposed method by super-resolving multi-view consistent views on a diverse set of unseen 3D scenes.
- Score: 49.879789224455436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural volumetric representations have become a widely adopted model for
radiance fields in 3D scenes. These representations are fully implicit or
hybrid function approximators of the instantaneous volumetric radiance in a
scene, which are typically learned from multi-view captures of the scene. We
investigate the new task of neural volume super-resolution - rendering
high-resolution views corresponding to a scene captured at low resolution. To
this end, we propose a neural super-resolution network that operates directly
on the volumetric representation of the scene. This approach allows us to
exploit an advantage of operating in the volumetric domain, namely the ability
to guarantee consistent super-resolution across different viewing directions.
To realize our method, we devise a novel 3D representation that hinges on
multiple 2D feature planes. This allows us to super-resolve the 3D scene
representation by applying 2D convolutional networks on the 2D feature planes.
We validate the proposed method by super-resolving multi-view consistent views
on a diverse set of unseen 3D scenes, confirming qualitatively and quantitatively
favorable quality over existing approaches.
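The plane-based design described in the abstract can be made concrete with a short sketch. The following is a minimal illustration (not the authors' released code) of a tri-plane scene representation whose feature planes can be super-resolved by an ordinary 2D convolutional network; the plane resolution, channel count, and decoder architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneField(nn.Module):
    def __init__(self, res=128, feat=32):
        super().__init__()
        # Three learnable axis-aligned feature planes (xy, xz, yz),
        # stored together as a (3, feat, res, res) tensor.
        self.planes = nn.Parameter(0.01 * torch.randn(3, feat, res, res))
        # Small MLP decoding aggregated plane features to RGB + density.
        self.decoder = nn.Sequential(
            nn.Linear(feat, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, xyz):
        # xyz: (N, 3) points in [-1, 1]^3. Project each point onto the
        # three planes and sample their features bilinearly.
        coords = torch.stack([xyz[:, [0, 1]],   # xy plane
                              xyz[:, [0, 2]],   # xz plane
                              xyz[:, [1, 2]]])  # yz plane -> (3, N, 2)
        grid = coords.unsqueeze(2)               # (3, N, 1, 2) sample grid
        feats = F.grid_sample(self.planes, grid, align_corners=True)
        feats = feats.squeeze(-1).permute(0, 2, 1).sum(dim=0)  # (N, feat)
        return self.decoder(feats)               # (N, 4): RGB + sigma

def super_resolve(field, sr_net):
    # Super-resolving the scene amounts to upsampling each 2D feature
    # plane with a 2D CNN (sr_net could be any image super-resolution
    # backbone). All views rendered afterwards query the same upsampled
    # planes, so the added detail is shared across viewing directions.
    field.planes = nn.Parameter(sr_net(field.planes).detach())
    return field
```

Because every rendered ray samples the same upsampled planes, the recovered high-frequency detail is identical from all viewpoints, which is the source of the multi-view consistency the abstract claims for operating in the volumetric domain.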
Related papers
- From Diffusion to Resolution: Leveraging 2D Diffusion Models for 3D Super-Resolution Task [19.56372155146739]
We present a novel approach that leverages the 2D diffusion model and lateral continuity within the volume to enhance 3D volume electron microscopy (vEM) super-resolution.
Our results on two publicly available focused ion beam scanning electron microscopy (FIB-SEM) datasets demonstrate the robustness and practical applicability of our framework.
arXiv Detail & Related papers (2024-11-25T09:12:55Z)
- Reference-free Axial Super-resolution of 3D Microscopy Images using Implicit Neural Representation with a 2D Diffusion Prior [4.1326413814647545]
Training a learning-based 3D super-resolution model requires ground truth isotropic volumes and suffers from the curse of dimensionality.
Existing methods utilize 2D neural networks to reconstruct each axial slice, eventually piecing together the entire volume.
We present a reconstruction framework based on implicit neural representation (INR), which allows 3D coherency even when optimized by independent axial slices.
arXiv Detail & Related papers (2024-08-16T09:14:12Z)
- Volumetric Environment Representation for Vision-Language Navigation [66.04379819772764]
Vision-language navigation (VLN) requires an agent to navigate through a 3D environment based on visual observations and natural language instructions.
We introduce a Volumetric Environment Representation (VER), which voxelizes the physical world into structured 3D cells.
VER predicts 3D occupancy, 3D room layout, and 3D bounding boxes jointly.
arXiv Detail & Related papers (2024-03-21T06:14:46Z)
- What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs [82.3936309001633]
3D-aware Generative Adversarial Networks (GANs) have shown remarkable progress in learning to generate multi-view-consistent images and 3D geometries.
Yet, the significant memory and computational costs of dense sampling in volume rendering have forced 3D GANs to adopt patch-based training or employ low-resolution rendering with post-processing 2D super-resolution.
We propose techniques to scale neural volume rendering to the much higher resolution of native 2D images, thereby resolving fine-grained 3D geometry with unprecedented detail.
arXiv Detail & Related papers (2024-01-04T18:50:38Z)
- Multi-Plane Neural Radiance Fields for Novel View Synthesis [5.478764356647437]
Novel view synthesis is a long-standing problem that revolves around rendering frames of scenes from novel camera viewpoints.
In this work, we examine the performance, generalization, and efficiency of single-view multi-plane neural radiance fields.
We propose a new multiplane NeRF architecture that accepts multiple views to improve the synthesis results and expand the viewing range.
arXiv Detail & Related papers (2023-03-03T06:32:55Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- Neural Volumetric Object Selection [126.04480613166194]
We introduce an approach for selecting objects in neural volumetric 3D representations, such as multi-plane images (MPI) and neural radiance fields (NeRF).
Our approach takes a set of foreground and background 2D user scribbles in one view and automatically estimates a 3D segmentation of the desired object, which can be rendered into novel views.
arXiv Detail & Related papers (2022-05-30T08:55:20Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions (this update loop is sketched below, after the list).
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
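As a rough illustration of the iterative refinement idea summarized in the 3DVNet entry above, the following sketch shows coarse per-view depth maps being repeatedly corrected by a shared scene model; scene_net is a hypothetical stand-in for the paper's 3D scene-modeling network, not its actual API.

```python
import torch

def refine_depths(depth_maps, scene_net, num_iters=3):
    # depth_maps: (V, H, W) coarse depth predictions, one per view.
    # scene_net: hypothetical module that fuses all views into a 3D
    # scene encoding and predicts a per-view depth residual from it.
    for _ in range(num_iters):
        residuals = scene_net(depth_maps)    # (V, H, W) corrections
        depth_maps = depth_maps + residuals  # jointly update all views
    return depth_maps
```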
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantee about the quality of the information presented and is not responsible for any consequences of its use.