RealLiFe: Real-Time Light Field Reconstruction via Hierarchical Sparse
Gradient Descent
- URL: http://arxiv.org/abs/2307.03017v3
- Date: Mon, 27 Nov 2023 11:38:39 GMT
- Title: RealLiFe: Real-Time Light Field Reconstruction via Hierarchical Sparse
Gradient Descent
- Authors: Yijie Deng, Lei Han, Tianpeng Lin, Lin Li, Jinzhi Zhang, and Lu Fang
- Abstract summary: EffLiFe is a novel light field optimization method that produces high-quality light fields from sparse view images in real time.
Our method achieves comparable visual quality while being 100x faster on average than state-of-the-art offline methods.
- Score: 23.4659443904092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rise of Extended Reality (XR) technology, there is a growing need
for real-time light field generation from sparse view inputs. Existing methods
can be classified into offline techniques, which can generate high-quality
novel views but at the cost of long inference/training time, and online
methods, which either lack generalizability or produce unsatisfactory results.
However, we have observed that the intrinsic sparse manifold of Multi-plane
Images (MPI) enables a significant acceleration of light field generation while
maintaining rendering quality. Based on this insight, we introduce EffLiFe, a
novel light field optimization method, which leverages the proposed
Hierarchical Sparse Gradient Descent (HSGD) to produce high-quality light
fields from sparse view images in real time. Technically, the coarse MPI of a
scene is first generated using a 3D CNN, and it is further sparsely optimized
by focusing only on important MPI gradients in a few iterations. Nevertheless,
relying solely on optimization can lead to artifacts at occlusion boundaries.
Therefore, we propose an occlusion-aware iterative refinement module that
removes visual artifacts in occluded regions by iteratively filtering the
input. Extensive experiments demonstrate that our method achieves comparable
visual quality while being 100x faster on average than state-of-the-art offline
methods and delivering better performance (about 2 dB higher in PSNR) compared
to other online approaches.
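For intuition, here is a minimal, hedged sketch of the sparse gradient-descent idea the abstract describes, written against an assumed PyTorch-style MPI tensor of shape (planes, RGBA, H, W). The compositing routine, top-k ratio, and learning rate are illustrative assumptions, not the authors' implementation; the real method also works coarse-to-fine over an MPI hierarchy, which is omitted here.

```python
import torch

def composite_mpi(mpi):
    """Back-to-front over-compositing of an MPI into a single view.
    mpi: (D, 4, H, W) tensor of D fronto-parallel RGBA planes."""
    rgb, alpha = mpi[:, :3], mpi[:, 3:4].clamp(0, 1)
    out = torch.zeros_like(rgb[0])
    for d in range(mpi.shape[0]):        # plane 0 = farthest
        out = rgb[d] * alpha[d] + out * (1 - alpha[d])
    return out

def hsgd_step(mpi, target, lr=0.1, keep_ratio=0.05):
    """One sparse step: update only the small fraction of MPI
    entries whose gradients are largest in magnitude."""
    mpi = mpi.detach().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(composite_mpi(mpi), target)
    loss.backward()
    g = mpi.grad
    k = max(1, int(keep_ratio * g.numel()))
    # k-th largest |gradient| = (numel - k + 1)-th smallest
    thresh = g.abs().flatten().kthvalue(g.numel() - k + 1).values
    mask = (g.abs() >= thresh).float()   # keep "important" gradients only
    with torch.no_grad():
        mpi -= lr * g * mask             # sparse update
    return mpi.detach(), loss.item()

# Toy usage: refine a random 32-plane MPI toward a target view
# in "a few iterations", as the abstract puts it.
mpi = torch.rand(32, 4, 64, 64)
target = torch.rand(3, 64, 64)
for _ in range(5):
    mpi, loss = hsgd_step(mpi, target)
```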
Related papers
- RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS [47.47003067842151]
We present RadSplat, a lightweight method for robust real-time rendering of complex scenes.
First, we use radiance fields as a prior and supervision signal for optimizing point-based scene representations, leading to improved quality and more robust optimization.
Next, we develop a novel pruning technique that reduces the overall point count while maintaining high quality, yielding smaller and more compact scene representations with faster inference speeds.
arXiv Detail & Related papers (2024-03-20T17:59:55Z)
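RadSplat's pruning criterion is defined in the paper itself; purely as a hedged illustration of contribution-based pruning for point-based scene representations, the sketch below keeps only the highest-scoring points. The score definition and keep fraction are assumptions.

```python
import numpy as np

def prune_points(importance, keep_fraction=0.3):
    """Keep the points with the highest importance scores.
    importance: (N,) per-point scores, e.g. the maximum blending
    weight each point contributes over all training rays (an
    assumed criterion). Returns indices of surviving points."""
    n_keep = max(1, int(keep_fraction * importance.size))
    return np.argsort(importance)[-n_keep:]

# Toy usage: reduce a 10k-point cloud to its 3k most important points.
scores = np.random.rand(10_000)
survivors = prune_points(scores)
```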
- PASTA: Towards Flexible and Efficient HDR Imaging Via Progressively Aggregated Spatio-Temporal Alignment [91.38256332633544]
PASTA is a Progressively Aggregated Spatio-Temporal Alignment framework for HDR deghosting.
Our approach achieves both effectiveness and efficiency by harnessing hierarchical representations during feature disentanglement.
Experimental results showcase PASTA's superiority over current SOTA methods in both visual quality and performance metrics.
arXiv Detail & Related papers (2024-03-15T15:05:29Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [51.49008959209671]
We introduce VoxNeRF, a novel approach that leverages volumetric representations to enhance the quality and efficiency of indoor view synthesis.
We employ multi-resolution hash grids to adaptively capture spatial features, effectively managing occlusions and the intricate geometry of indoor scenes.
We validate our approach against three public indoor datasets and demonstrate that VoxNeRF outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
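VoxNeRF's exact grid design is described in the paper; the sketch below only illustrates the generic multi-resolution spatial-hash lookup that such methods build on, using a nearest-corner query for brevity (real systems interpolate among the eight surrounding corners). The primes, table sizes, feature widths, and resolutions are illustrative assumptions.

```python
import numpy as np

# Spatial-hash primes in the style of Instant-NGP.
PRIMES = np.array([1, 2_654_435_761, 805_459_861], dtype=np.uint64)

def hash_grid_feature(xyz, table, resolution):
    """Nearest-corner feature lookup for one point in [0, 1)^3
    at one resolution level of a multi-resolution hash grid."""
    corner = np.floor(xyz * resolution).astype(np.uint64)
    idx = np.bitwise_xor.reduce(corner * PRIMES) % np.uint64(len(table))
    return table[idx]

# Toy usage: concatenate features from four resolution levels.
levels = [16, 32, 64, 128]
tables = [np.random.randn(2**14, 2) for _ in levels]  # 2-dim features
point = np.array([0.4, 0.7, 0.2])
feature = np.concatenate([hash_grid_feature(point, t, r)
                          for t, r in zip(tables, levels)])
```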
- HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance [19.252300247300145]
This work proposes holistic sampling and smoothing approaches to achieve high-quality text-to-3D generation.
We compute denoising scores in the text-to-image diffusion model's latent and image spaces.
To generate high-quality renderings in a single-stage optimization, we propose regularization for the variance of z-coordinates along NeRF rays.
arXiv Detail & Related papers (2023-05-30T05:56:58Z)
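The z-coordinate variance regularizer mentioned above can be sketched as follows, under the assumption that it penalizes the weighted depth variance per ray (HiFA's precise formulation is in the paper); tensor shapes are illustrative.

```python
import torch

def z_variance_loss(weights, z_vals):
    """Weighted variance of sample depths along each ray.
    Penalizing it pushes density toward a single surface crossing.
    weights: (R, S) compositing weights; z_vals: (R, S) depths."""
    w = weights / (weights.sum(dim=-1, keepdim=True) + 1e-8)
    mean_z = (w * z_vals).sum(dim=-1, keepdim=True)
    var = (w * (z_vals - mean_z) ** 2).sum(dim=-1)
    return var.mean()

# Toy usage: 1024 rays with 64 samples each.
w = torch.rand(1024, 64)
z = torch.linspace(2.0, 6.0, 64).expand(1024, -1)
loss = z_variance_loss(w, z)
```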
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use signed distance function (SDF) values in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
- VGOS: Voxel Grid Optimization for View Synthesis from Sparse Inputs [9.374561178958404]
VGOS is an approach for fast (3-5 minutes) radiance field reconstruction from sparse inputs (3-10 views).
We introduce an incremental voxel training strategy, which prevents overfitting by suppressing the optimization of peripheral voxels.
Experiments demonstrate that VGOS achieves state-of-the-art performance for sparse inputs with super-fast convergence.
arXiv Detail & Related papers (2023-04-26T08:52:55Z)
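One plausible reading of the incremental strategy, sketched below with assumed shapes and an assumed growth schedule, is to zero the gradients of peripheral voxels early in training and enlarge the trainable central region over time; VGOS's published schedule may differ.

```python
import torch

def central_mask(shape, frac):
    """Binary mask: 1 inside a centered sub-box spanning `frac`
    of each dimension, 0 over the peripheral voxels."""
    mask = torch.zeros(shape)
    box = tuple(slice(int(s * (1 - frac) / 2),
                      int(s * (1 + frac) / 2)) for s in shape)
    mask[box] = 1.0
    return mask

# Toy usage: optimize only the central 40% of the grid at first,
# growing to the full volume over the first 500 steps.
grid = torch.zeros(128, 128, 128, requires_grad=True)
for step in range(1000):
    frac = min(1.0, 0.4 + 0.6 * step / 500)
    # ... render from `grid`, compute loss, loss.backward() ...
    if grid.grad is not None:
        grid.grad *= central_mask(grid.shape, frac)  # suppress periphery
    # optimizer.step(); optimizer.zero_grad()
```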
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Ultra-High-Definition Low-Light Image Enhancement: A Benchmark and Transformer-Based Method [51.30748775681917]
We consider the task of low-light image enhancement (LLIE) and introduce a large-scale database consisting of images at 4K and 8K resolution.
We conduct systematic benchmarking studies and provide a comparison of current LLIE algorithms.
As a second contribution, we introduce LLFormer, a transformer-based low-light enhancement method.
arXiv Detail & Related papers (2022-12-22T09:05:07Z)
- PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene Reconstruction from Blurry Images [75.87721926918874]
We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstruct high-quality radiance fields from blurry images.
We show that PDRF is 15x faster than previous state-of-the-art scene reconstruction methods.
arXiv Detail & Related papers (2022-08-17T03:42:29Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
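ProLiF's network encoding is its own; the classic two-plane parameterization underlying 4D light fields, which turns each ray into a single 4D sample, can be sketched as follows. The plane depths are arbitrary, and the ray must not be parallel to the planes.

```python
import numpy as np

def two_plane_coords(origin, direction, z1=0.0, z2=1.0):
    """Map a ray to 4D light-field coordinates (u, v, s, t):
    its intersections with the planes z = z1 and z = z2.
    Assumes direction[2] != 0 (ray not parallel to the planes)."""
    t1 = (z1 - origin[2]) / direction[2]
    t2 = (z2 - origin[2]) / direction[2]
    u, v = (origin + t1 * direction)[:2]
    s, t = (origin + t2 * direction)[:2]
    return np.array([u, v, s, t])

# Toy usage: each ray becomes one 4D sample, so large ray batches
# can be evaluated in a single training step.
ray_o = np.array([0.1, -0.2, -1.0])
ray_d = np.array([0.0, 0.1, 1.0])
coords = two_plane_coords(ray_o, ray_d)
```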
- Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction [42.3230709881297]
We present a super-fast convergence approach to reconstructing the per-scene radiance field from a set of images.
Our approach achieves NeRF-comparable quality and converges rapidly from scratch in less than 15 minutes with a single GPU.
arXiv Detail & Related papers (2021-11-22T14:02:07Z)
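Direct voxel-grid methods like the one above hinge on querying a dense value grid with trilinear interpolation; a standard version of that query (not DVGO's own code) is sketched below, with coordinates given in voxel units.

```python
import numpy as np

def trilinear_query(grid, xyz):
    """Trilinearly interpolate a dense (X, Y, Z) grid at a
    continuous point given in voxel coordinates."""
    i0 = np.floor(xyz).astype(int)
    i1 = np.minimum(i0 + 1, np.array(grid.shape) - 1)
    f = xyz - i0                              # fractional offsets
    value = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                value += w * grid[idx]
    return value

# Toy usage: query a 64^3 density grid between voxel centers.
density = np.random.rand(64, 64, 64)
sigma = trilinear_query(density, np.array([10.3, 20.7, 5.5]))
```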
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all of its information) and is not responsible for any consequences of its use.