Neural Lens Modeling
- URL: http://arxiv.org/abs/2304.04848v1
- Date: Mon, 10 Apr 2023 20:09:17 GMT
- Title: Neural Lens Modeling
- Authors: Wenqi Xian and Aljaž Božič and Noah Snavely and Christoph Lassner
- Abstract summary: NeuroLens is a neural lens model for distortion and vignetting that can be used for point projection and ray casting.
It can be used to perform pre-capture calibration using classical calibration targets, and can later be used to perform calibration or refinement during 3D reconstruction.
The model generalizes across many lens types and is trivial to integrate into existing 3D reconstruction and rendering systems.
- Score: 50.57409162437732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent methods for 3D reconstruction and rendering increasingly benefit from
end-to-end optimization of the entire image formation process. However, this
approach is currently limited: effects of the optical hardware stack and in
particular lenses are hard to model in a unified way. This limits the quality
that can be achieved for camera calibration and the fidelity of the results of
3D reconstruction. In this paper, we propose NeuroLens, a neural lens model for
distortion and vignetting that can be used for point projection and ray casting
and can be optimized through both operations. This means that it can
(optionally) be used to perform pre-capture calibration using classical
calibration targets, and can later be used to perform calibration or refinement
during 3D reconstruction, e.g., while optimizing a radiance field. To evaluate
the performance of our proposed model, we create a comprehensive dataset
assembled from the Lensfun database with a multitude of lenses. Using this and
other real-world datasets, we show that the quality of our proposed lens model
outperforms standard packages as well as recent approaches while being much
easier to use and extend. The model generalizes across many lens types and is
trivial to integrate into existing 3D reconstruction and rendering systems.
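The abstract's central mechanism is a lens model that supports both directions of the image formation process: point projection (scene point to distorted pixel) and ray casting (pixel back to a ray), with optimization possible through both. The sketch below is a hypothetical toy illustration of that idea, not the actual NeuroLens architecture: a tiny fixed two-unit "network" adds a small smooth residual on top of ideal pinhole coordinates, and because the residual is a contraction, fixed-point iteration inverts it, so one model serves both projection and ray casting. A classical Brown-Conrady radial term is included as the kind of fixed parametric baseline such learned models generalize beyond.

```python
import math

def brown_conrady_distort(x, y, k1, k2):
    # Classical parametric radial distortion: the fixed-form baseline
    # that a learned lens model aims to generalize beyond.
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

class TinyResidualLens:
    # Hypothetical stand-in for a neural lens: two tanh units add a small,
    # smooth residual displacement to ideal (pinhole) image coordinates.
    W1 = [[0.8, -0.3], [0.2, 0.6]]      # input -> hidden (illustrative values)
    W2 = [[0.02, -0.01], [0.01, 0.03]]  # hidden -> residual (kept small)

    def _residual(self, x, y):
        h = [math.tanh(w[0] * x + w[1] * y) for w in self.W1]
        dx = h[0] * self.W2[0][0] + h[1] * self.W2[1][0]
        dy = h[0] * self.W2[0][1] + h[1] * self.W2[1][1]
        return dx, dy

    def distort(self, x, y):
        # Point projection: ideal coordinates -> distorted image coordinates.
        dx, dy = self._residual(x, y)
        return x + dx, y + dy

    def undistort(self, xd, yd, iters=30):
        # Ray casting needs the inverse map; since the residual is a small
        # contraction, fixed-point iteration x <- xd - residual(x) converges.
        x, y = xd, yd
        for _ in range(iters):
            dx, dy = self._residual(x, y)
            x, y = xd - dx, yd - dy
        return x, y

lens = TinyResidualLens()
xd, yd = lens.distort(0.3, -0.2)   # forward (projection) direction
x0, y0 = lens.undistort(xd, yd)    # inverse (ray-casting) direction
```

Because both directions are smooth compositions of elementary operations, gradients flow through either one, which is what allows calibration or refinement to happen during 3D reconstruction rather than only pre-capture.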
Related papers
- Self-Calibrating Gaussian Splatting for Large Field of View Reconstruction [30.529707438964596]
We present a self-calibrating framework that jointly optimizes camera parameters, lens distortion and 3D Gaussian representations.
Our technique enables high-quality scene reconstruction from large field-of-view (FOV) imagery taken with wide-angle lenses, allowing the scene to be modeled from a smaller number of images.
arXiv Detail & Related papers (2025-02-13T18:15:10Z)
- Towards Unified Structured Light Optimization [2.4823372746556442]
Structured light (SL) 3D reconstruction captures the precise surface shape of objects.
We present a unified framework for SL optimization, adaptable to diverse lighting conditions, object types, and different types of SL.
Key contributions include a novel global matching method for projectors, enabling precise projector-camera alignment with just one projected image.
arXiv Detail & Related papers (2025-01-24T17:29:17Z)
- Towards End-to-End Neuromorphic Voxel-based 3D Object Reconstruction Without Physical Priors [0.0]
We propose an end-to-end method for dense voxel 3D reconstruction using neuromorphic cameras.
Our method achieves a 54.6% improvement in reconstruction accuracy compared to the baseline method.
arXiv Detail & Related papers (2025-01-01T06:07:03Z)
- ConvMesh: Reimagining Mesh Quality Through Convex Optimization [55.2480439325792]
This research introduces a convex optimization technique called disciplined convex programming to enhance existing meshes.
By focusing on a sparse set of point clouds from both the original and target meshes, this method demonstrates significant improvements in mesh quality with minimal data requirements.
arXiv Detail & Related papers (2024-12-11T15:48:25Z)
- GGRt: Towards Pose-free Generalizable 3D Gaussian Splatting in Real-time [112.32349668385635]
GGRt is a novel approach to generalizable novel view synthesis that alleviates the need for real camera poses.
As the first pose-free generalizable 3D-GS framework, GGRt achieves inference at $\ge$ 5 FPS and real-time rendering at $\ge$ 100 FPS.
arXiv Detail & Related papers (2024-03-15T09:47:35Z)
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Unrolled Primal-Dual Networks for Lensless Cameras [0.45880283710344055]
We show that learning a supervised primal-dual reconstruction method yields image quality matching the state of the art in the literature.
This improvement stems from our finding that embedding learnable forward and adjoint models in a learned primal-dual optimization framework can further improve the quality of reconstructed images.
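To give a rough sense of the primal-dual structure such unrolled networks build on (this is not the paper's architecture; the operator, step sizes, and problem size below are hypothetical), the sketch runs a Chambolle-Pock-style primal-dual iteration on a toy 2x2 linear imaging model. An unrolled network would replace the fixed forward/adjoint operators and the step sizes with per-iteration learnable parameters.

```python
# Toy primal-dual (Chambolle-Pock-style) iteration for min_x 0.5*||A x - b||^2.
# In an unrolled network, A, its adjoint, and the step sizes sigma/tau would
# be learnable per-layer parameters; here they are fixed, illustrative values.
A = [[1.0, 0.3],
     [0.2, 1.0]]

def matvec(M, v):
    # Forward model: y = M v.
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matvec_T(M, v):
    # Adjoint (transpose) model: y = M^T v.
    return [M[0][0] * v[0] + M[1][0] * v[1],
            M[0][1] * v[0] + M[1][1] * v[1]]

def primal_dual_solve(b, iters=2000, sigma=0.5, tau=0.5):
    x, x_bar, y = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
    for _ in range(iters):
        # Dual step: prox of the conjugate of 0.5*||. - b||^2.
        Ax = matvec(A, x_bar)
        y = [(y[i] + sigma * (Ax[i] - b[i])) / (1.0 + sigma) for i in range(2)]
        # Primal step through the adjoint model.
        Aty = matvec_T(A, y)
        x_new = [x[i] - tau * Aty[i] for i in range(2)]
        # Extrapolation (over-relaxation) of the primal variable.
        x_bar = [2.0 * x_new[i] - x[i] for i in range(2)]
        x = x_new
    return x

x_true = [0.7, -0.4]
b = matvec(A, x_true)       # simulated measurement
x_rec = primal_dual_solve(b)  # recovers x_true up to iteration tolerance
```

The paper's observation corresponds to making `matvec` and `matvec_T` learnable rather than fixed: letting the network refine an imperfect forward/adjoint pair inside this loop can improve reconstructions beyond what the fixed physical model allows.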
arXiv Detail & Related papers (2022-03-08T19:21:39Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.