Neural Lens Modeling
- URL: http://arxiv.org/abs/2304.04848v1
- Date: Mon, 10 Apr 2023 20:09:17 GMT
- Title: Neural Lens Modeling
- Authors: Wenqi Xian, Aljaž Božič, Noah Snavely, and Christoph Lassner
- Abstract summary: NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
- Score: 50.57409162437732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent methods for 3D reconstruction and rendering increasingly benefit from
end-to-end optimization of the entire image formation process. However, this
approach is currently limited: effects of the optical hardware stack and in
particular lenses are hard to model in a unified way. This limits the quality
that can be achieved for camera calibration and the fidelity of the results of
3D reconstruction. In this paper, we propose NeuroLens, a neural lens model for
distortion and vignetting that can be used for point projection and ray casting
and can be optimized through both operations. This means that it can
(optionally) be used to perform pre-capture calibration using classical
calibration targets, and can later be used to perform calibration or refinement
during 3D reconstruction, e.g., while optimizing a radiance field. To evaluate
the performance of our proposed model, we create a comprehensive dataset
assembled from the Lensfun database with a multitude of lenses. Using this and
other real-world datasets, we show that the quality of our proposed lens model
outperforms standard packages as well as recent approaches while being much
easier to use and extend. The model generalizes across many lens types and is
trivial to integrate into existing 3D reconstruction and rendering systems.
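The abstract describes a lens model that supports both point projection (ideal to distorted coordinates) and ray casting (the inverse direction), with both operations optimizable. The paper's actual architecture is not given here, so the following is only a minimal illustrative sketch, assuming a small MLP that predicts a residual distortion offset on normalized image coordinates; the inverse is recovered by fixed-point iteration, which converges when the learned distortion is a small contraction. All names and the network shape are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP (2 -> 16 -> 2) with small random weights, standing in for a
# trained distortion network; real models would be fit to calibration data.
W1 = rng.normal(scale=0.1, size=(2, 16))
b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2))
b2 = np.zeros(2)

def distortion(x):
    """Residual distortion offset predicted for normalized coordinates x."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def project(x):
    """Point projection: ideal (undistorted) -> distorted sensor coordinates."""
    return x + distortion(x)

def cast(y, iters=50):
    """Ray casting lookup: invert the distortion by fixed-point iteration
    x_{k+1} = y - d(x_k), valid when d is a contraction."""
    x = y.copy()
    for _ in range(iters):
        x = y - distortion(x)
    return x

x = np.array([0.3, -0.2])   # ideal point
y = project(x)              # distorted observation
x_rec = cast(y)             # recovered ideal point
```

Because both `project` and `cast` are built from differentiable operations, gradients can flow through either direction, which is the property the abstract highlights for use inside radiance-field optimization.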
Related papers
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time [112.32349668385635]
GGRt is a novel approach to generalizable novel view synthesis that alleviates the need for real camera poses.
As the first pose-free generalizable 3D-GS framework, GGRt achieves inference at ≥ 5 FPS and real-time rendering at ≥ 100 FPS.
arXiv Detail & Related papers (2024-03-15T09:47:35Z)
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- Optical Aberration Correction in Postprocessing using Imaging Simulation [17.331939025195478]
The popularity of mobile photography continues to grow.
Recent cameras have shifted some aberration-correction tasks from optical design to postprocessing systems.
We propose a practical method for recovering the degradation caused by optical aberrations.
arXiv Detail & Related papers (2023-05-10T03:20:39Z)
- Adaptive Joint Optimization for 3D Reconstruction with Differentiable Rendering [22.2095090385119]
Given an imperfect reconstructed 3D model, most previous methods have focused on the refinement of either geometry, texture, or camera pose.
We propose a novel optimization approach based on differentiable rendering, which integrates the optimization of camera pose, geometry, and texture into a unified framework.
Using differentiable rendering, an image-level adversarial loss is applied to further improve the 3D model, making it more photorealistic.
arXiv Detail & Related papers (2022-08-15T04:32:41Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Unrolled Primal-Dual Networks for Lensless Cameras [0.45880283710344055]
We show that learning a supervised primal-dual reconstruction method results in image quality matching state of the art in the literature.
This improvement stems from our finding that embedding learnable forward and adjoint models in a learned primal-dual optimization framework can even improve the quality of reconstructed images.
arXiv Detail & Related papers (2022-03-08T19:21:39Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high quality version via incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- A-NeRF: Surface-free Human 3D Pose Refinement via Neural Rendering [13.219688351773422]
We propose a test-time optimization approach for monocular motion capture that learns a volumetric body model of the user in a self-supervised manner.
Our approach is self-supervised and does not require any additional ground truth labels for appearance, pose, or 3D shape.
We demonstrate that our novel combination of a discriminative pose estimation technique with surface-free analysis-by-synthesis outperforms purely discriminative monocular pose estimation approaches.
arXiv Detail & Related papers (2021-02-11T18:58:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.