3D Reconstruction of Spherical Images based on Incremental Structure
from Motion
- URL: http://arxiv.org/abs/2306.12770v2
- Date: Sat, 24 Jun 2023 11:00:15 GMT
- Title: 3D Reconstruction of Spherical Images based on Incremental Structure
from Motion
- Authors: San Jiang, Kan You, Yaxin Li, Duojie Weng, Wu Chen
- Abstract summary: This study investigates the algorithms for the relative orientation using spherical correspondences, absolute orientation using 3D correspondences between scene and spherical points, and the cost functions for BA (bundle adjustment) optimization.
An incremental SfM (Structure from Motion) workflow has been proposed for spherical images using the above-mentioned algorithms.
- Score: 2.6432771146480283
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D reconstruction plays an increasingly important role in modern
photogrammetric systems. Conventional satellite or aerial-based remote sensing
(RS) platforms can provide the necessary data sources for the 3D reconstruction
of large-scale landforms and cities. Even with low-altitude UAVs (Unmanned
Aerial Vehicles), 3D reconstruction in complicated situations, such as urban
canyons and indoor scenes, is challenging due to the frequent tracking failures
between camera frames and high data collection costs. Recently, spherical
images have been extensively exploited due to their capability of recording
surrounding environments from one camera exposure. Classical 3D reconstruction
pipelines, however, cannot be directly applied to spherical images, and few
software packages exist for their 3D reconstruction. Based on the
imaging geometry of spherical cameras, this study investigates the algorithms
for the relative orientation using spherical correspondences, absolute
orientation using 3D correspondences between scene and spherical points, and
the cost functions for BA (bundle adjustment) optimization. In addition, an
incremental SfM (Structure from Motion) workflow has been proposed for
spherical images using the above-mentioned algorithms. The proposed solution is
then verified using three spherical datasets captured by both consumer-grade
and professional spherical cameras. The results demonstrate that the proposed
SfM workflow achieves successful 3D reconstruction of complex scenes and
provides useful guidance for its implementation in open-source software
packages. The source code of the designed SfM workflow will be made publicly
available.
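For context, the sketch below illustrates the spherical imaging geometry the abstract refers to: mapping equirectangular pixels to unit bearing vectors, the epipolar constraint used for relative orientation from spherical correspondences, and an angular reprojection residual as one possible BA cost. The equirectangular convention, the variable names, and the angular-error cost are assumptions for illustration only; the paper's exact parameterization and cost functions may differ.

```python
import numpy as np

# Illustrative sketch only -- not the authors' implementation.

def pixel_to_bearing(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit bearing vector on the sphere.

    Assumes longitude spans [-pi, pi] across the image width and latitude
    spans [-pi/2, pi/2] across the height (a common convention).
    """
    lon = (u / width - 0.5) * 2.0 * np.pi
    lat = (0.5 - v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def epipolar_residual(E, b1, b2):
    """Relative orientation constraint for spherical correspondences: b2^T E b1 ~ 0."""
    return float(b2 @ E @ b1)

def angular_ba_residual(R, t, X, b_obs):
    """Angular bundle-adjustment residual: angle between the observed bearing
    b_obs and the bearing of scene point X projected into a spherical camera
    with pose (R, t)."""
    b_pred = R @ X + t
    b_pred = b_pred / np.linalg.norm(b_pred)
    return np.arccos(np.clip(b_obs @ b_pred, -1.0, 1.0))
```

Because a spherical camera observes bearing directions rather than image-plane coordinates, an angle-based residual of this kind is a natural BA cost; minimizing it over all poses and scene points (e.g., with a nonlinear least-squares solver) is the optimization step the abstract describes.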
Related papers
- GS-Blur: A 3D Scene-Based Dataset for Realistic Image Deblurring [50.72230109855628]
We propose GS-Blur, a dataset of synthesized realistic blurry images created using a novel approach.
We first reconstruct 3D scenes from multi-view images using 3D Gaussian Splatting (3DGS), then render blurry images by moving the camera view along the randomly generated motion trajectories.
By adopting various camera trajectories in reconstructing our GS-Blur, our dataset contains realistic and diverse types of blur, offering a large-scale dataset that generalizes well to real-world blur.
arXiv Detail & Related papers (2024-10-31T06:17:16Z) - PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z) - Visual SLAM with 3D Gaussian Primitives and Depth Priors Enabling Novel View Synthesis [11.236094544193605]
Conventional geometry-based SLAM systems lack dense 3D reconstruction capabilities.
We propose a real-time RGB-D SLAM system that incorporates a novel view synthesis technique, 3D Gaussian Splatting.
arXiv Detail & Related papers (2024-08-10T21:23:08Z) - Reconstructing Satellites in 3D from Amateur Telescope Images [44.20773507571372]
This paper proposes a framework for the 3D reconstruction of satellites in low-Earth orbit, utilizing videos captured by small amateur telescopes.
The video data obtained from these telescopes differ significantly from data for standard 3D reconstruction tasks, characterized by intense motion blur, atmospheric turbulence, pervasive background light pollution, extended focal length and constrained observational perspectives.
We validate our approach using both synthetic datasets and actual observations of China's Space Station, showcasing its significant advantages over existing methods in reconstructing 3D space objects from ground-based observations.
arXiv Detail & Related papers (2024-04-29T03:13:09Z) - R3D3: Dense 3D Reconstruction of Dynamic Scenes from Multiple Cameras [106.52409577316389]
R3D3 is a multi-camera system for dense 3D reconstruction and ego-motion estimation.
Our approach exploits spatial-temporal information from multiple cameras, and monocular depth refinement.
We show that this design enables a dense, consistent 3D reconstruction of challenging, dynamic outdoor environments.
arXiv Detail & Related papers (2023-08-28T17:13:49Z) - FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z) - A Comparative Neural Radiance Field (NeRF) 3D Analysis of Camera Poses
from HoloLens Trajectories and Structure from Motion [0.0]
We present a workflow for high-resolution 3D reconstructions almost directly from HoloLens data using Neural Radiance Fields (NeRFs).
NeRFs are trained using a set of camera poses and associated images as input to estimate density and color values for each position.
Results show that the internal camera poses lead to NeRF convergence with a PSNR of 25 dB with a simple rotation around the x-axis and enable a 3D reconstruction.
arXiv Detail & Related papers (2023-04-20T22:17:28Z) - 3D reconstruction from spherical images: A review of techniques,
applications, and prospects [2.6432771146480283]
3D reconstruction plays an increasingly important role in modern photogrammetric systems.
With the rapid evolution and extensive use of professional and consumer-grade spherical cameras, spherical images show great potential for the 3D modeling of urban and indoor scenes.
This research provides a thorough survey of the state-of-the-art for 3D reconstruction of spherical images in terms of data acquisition, feature detection and matching, image orientation, and dense matching.
arXiv Detail & Related papers (2023-02-09T08:45:27Z) - BS3D: Building-scale 3D Reconstruction from RGB-D Images [25.604775584883413]
We propose an easy-to-use framework for acquiring building-scale 3D reconstruction using a consumer depth camera.
Unlike complex and expensive acquisition setups, our system enables crowd-sourcing, which can greatly benefit data-hungry algorithms.
arXiv Detail & Related papers (2023-01-03T11:46:14Z) - Towards Non-Line-of-Sight Photography [48.491977359971855]
Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects.
Active NLOS imaging systems rely on the capture of the time of flight of light through the scene.
We propose a new problem formulation, called NLOS photography, to specifically address this deficiency.
arXiv Detail & Related papers (2021-09-16T08:07:13Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled
Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.