Large Scale Photometric Bundle Adjustment
- URL: http://arxiv.org/abs/2008.11762v2
- Date: Thu, 10 Sep 2020 20:31:47 GMT
- Title: Large Scale Photometric Bundle Adjustment
- Authors: Oliver J. Woodford, Edward Rosten
- Abstract summary: Offline 3-d reconstruction from internet images has not yet benefited from a joint, photometric optimization over dense geometry and camera parameters.
This work presents a framework for jointly optimizing millions of scene points and hundreds of camera poses and intrinsics.
The improvement in metric reconstruction accuracy that it confers over feature-based bundle adjustment is demonstrated on the large-scale Tanks & Temples benchmark.
- Score: 9.184692492399686
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Direct methods have shown promise on visual odometry and SLAM, leading to
greater accuracy and robustness over feature-based methods. However, offline
3-d reconstruction from internet images has not yet benefited from a joint,
photometric optimization over dense geometry and camera parameters. Issues such
as the lack of brightness constancy, and the sheer volume of data, make this a
more challenging task. This work presents a framework for jointly optimizing
millions of scene points and hundreds of camera poses and intrinsics, using a
photometric cost that is invariant to local lighting changes. The improvement
in metric reconstruction accuracy that it confers over feature-based bundle
adjustment is demonstrated on the large-scale Tanks & Temples benchmark. We
further demonstrate qualitative reconstruction improvements on an internet
photo collection, with challenging diversity in lighting and camera intrinsics.
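The abstract describes a photometric cost, invariant to local lighting changes, that is jointly minimized over millions of scene points and hundreds of camera poses and intrinsics. As a rough illustration only, the sketch below builds such a residual using zero-mean, unit-norm patch normalization, one standard way to cancel a local gain and bias; the paper's actual cost, patch parameterization and solver are not specified in the abstract and may differ.

```python
import numpy as np

def normalize_patch(p, eps=1e-8):
    """Zero-mean, unit-norm normalization: cancels a local gain and bias,
    one common way to obtain invariance to local lighting changes."""
    p = p - p.mean()
    return p / (np.linalg.norm(p) + eps)

def project(K, R, t, X):
    """Pinhole projection of 3D points X (N, 3) into a camera with
    intrinsics K, rotation R and translation t."""
    x = (K @ (R @ X.T + t[:, None])).T
    return x[:, :2] / x[:, 2:3]

def sample_patch(image, uv, half=3):
    """Grab a (2*half+1)^2 intensity patch around pixel uv (nearest-neighbour;
    a real implementation would interpolate and check image bounds)."""
    u, v = int(round(uv[0])), int(round(uv[1]))
    return image[v - half:v + half + 1, u - half:u + half + 1].astype(np.float64)

def photometric_residual(ref_patch, image, K, R, t, X):
    """Residual between the normalized reference patch of scene point X and
    the normalized patch observed where X projects into another image."""
    uv = project(K, R, t, X[None])[0]
    return (normalize_patch(sample_patch(image, uv)) - normalize_patch(ref_patch)).ravel()
```

In a full photometric bundle adjustment, residuals of this kind for every point/image observation would be stacked into one sparse nonlinear least-squares problem and minimized jointly over poses, intrinsics and point positions, e.g. with Levenberg-Marquardt and a Schur-complement solver.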
Related papers
- CRAYM: Neural Field Optimization via Camera RAY Matching [48.25100687172752]
We introduce camera ray matching (CRAYM) into the joint optimization of camera poses and neural fields from multi-view images.
We formulate our per-ray optimization and matched ray coherence by focusing on camera rays passing through keypoints in the input images.
arXiv Detail & Related papers (2024-12-02T15:39:09Z)
- A Non-Uniform Low-Light Image Enhancement Method with Multi-Scale Attention Transformer and Luminance Consistency Loss [11.585269110131659]
Low-light image enhancement aims to improve the perception of images collected in dim environments.
Existing methods cannot adaptively extract differentiated luminance information, which easily causes over-exposure and under-exposure.
We propose a multi-scale attention Transformer named MSATr, which sufficiently extracts local and global features for light balance to improve the visual quality.
arXiv Detail & Related papers (2023-12-27T10:07:11Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- Multi-View Neural Surface Reconstruction with Structured Light [7.709526244898887]
Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision.
We introduce active sensing with structured light (SL) into multi-view 3D object reconstruction based on DR to learn the unknown geometry and appearance of arbitrary scenes and camera poses.
Our method realizes high reconstruction accuracy in the textureless region and reduces efforts for camera pose calibration.
arXiv Detail & Related papers (2022-11-22T03:10:46Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Coded Illumination for Improved Lensless Imaging [22.992552346745523]
We propose to use coded illumination to improve the quality of images reconstructed with lensless cameras.
In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements.
We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system.
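The summary notes a recovery algorithm that exploits separability and block-diagonal structure. Purely as an illustration, the sketch below assumes a separable lensless model Y_k ≈ Phi_L · X_k · Phi_R^T with one sensor measurement Y_k per illumination pattern, and recovers each X_k by Tikhonov-regularized least squares via two small SVDs; the paper's actual forward model, and how the per-pattern reconstructions are combined, are not reproduced here.

```python
import numpy as np

def recover_separable(Ys, Phi_L, Phi_R, lam=1e-2):
    """Tikhonov-regularized least squares for a separable lensless model
    Y_k ~= Phi_L @ X_k @ Phi_R.T (one measurement per illumination pattern).
    Separability lets us factor the problem through two small SVDs instead
    of forming one huge dense system matrix."""
    Ul, Sl, Vlt = np.linalg.svd(Phi_L, full_matrices=False)
    Ur, Sr, Vrt = np.linalg.svd(Phi_R, full_matrices=False)
    S = np.outer(Sl, Sr)          # combined singular values
    W = S / (S ** 2 + lam)        # per-element shrinkage weights
    Xs = []
    for Y in Ys:
        B = Ul.T @ Y @ Ur         # rotate the measurement into the SVD bases
        Xs.append(Vlt.T @ (W * B) @ Vrt)  # shrink, then rotate back
    return Xs
```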
arXiv Detail & Related papers (2021-11-25T01:22:40Z)
- Shape and Reflectance Reconstruction in Uncontrolled Environments by Differentiable Rendering [27.41344744849205]
We propose an efficient method to reconstruct the scene's 3D geometry and reflectance from multi-view photography using conventional hand-held cameras.
Our method also shows superior performance compared to state-of-the-art alternatives in novel view synthesis, both visually and quantitatively.
arXiv Detail & Related papers (2021-10-25T14:09:10Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
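The summary describes recovering conventional frames from events and then running standard calibration on them. A minimal sketch of that second stage is below, assuming frames already reconstructed by an events-to-video network (not shown) and a checkerboard target; the board size, square size and OpenCV-based pipeline are illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np

def calibrate_from_reconstructed_frames(frames, board=(9, 6), square=0.025):
    """Checkerboard calibration applied to frames reconstructed from events.
    `board` is the inner-corner count and `square` the square size in metres
    (both assumed here, not taken from the paper)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for frame in frames:
        gray = frame if frame.ndim == 2 else cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (5, 5), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)
    # Returns RMS reprojection error, intrinsics, distortion and per-view extrinsics
    return cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
```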
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing [49.759478460828504]
Methods combining deep neural network encoders with differentiable rendering have opened up the path for very fast monocular reconstruction of geometry, lighting and reflectance.
Ray tracing was introduced for monocular face reconstruction within a classic optimization-based framework.
We propose a new method that greatly improves reconstruction quality and robustness in general scenes.
arXiv Detail & Related papers (2021-03-29T08:58:10Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient (see the sketch below).
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
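The entry above mentions separable 4D convolutions. As an illustration only, the sketch below factorizes a convolution over a 4D (spatial × photometric) volume into a 2D convolution over the spatial axes followed by a 2D convolution over the remaining two axes; the axes, channel counts and network context in the paper may differ.

```python
import torch
import torch.nn as nn

class Separable4dConv(nn.Module):
    """Factorize a 4D convolution over a (H, W, U, V) volume into a 2D conv
    over the spatial axes (H, W) followed by a 2D conv over the remaining
    axes (U, V), which is far cheaper than a dense 4D kernel."""

    def __init__(self, in_ch, mid_ch, out_ch, k=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, mid_ch, k, padding=k // 2)
        self.photometric = nn.Conv2d(mid_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        # x: (B, C, H, W, U, V)
        b, c, h, w, u, v = x.shape
        # 2D conv over (H, W), folding each (U, V) cell into the batch
        x = x.permute(0, 4, 5, 1, 2, 3).reshape(b * u * v, c, h, w)
        x = self.spatial(x)
        c2 = x.shape[1]
        # 2D conv over (U, V), folding each (H, W) location into the batch
        x = x.reshape(b, u, v, c2, h, w).permute(0, 4, 5, 3, 1, 2).reshape(b * h * w, c2, u, v)
        x = self.photometric(x)
        c3 = x.shape[1]
        return x.reshape(b, h, w, c3, u, v).permute(0, 3, 1, 2, 4, 5)  # (B, out, H, W, U, V)
```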