3D Reconstruction through Fusion of Cross-View Images
- URL: http://arxiv.org/abs/2106.14306v1
- Date: Sun, 27 Jun 2021 18:31:08 GMT
- Title: 3D Reconstruction through Fusion of Cross-View Images
- Authors: Rongjun Qin, Shuang Song, Xiao Ling, Mostafa Elhashash
- Abstract summary: 3D recovery from multi-stereo and stereo images serves many applications in computer vision, remote sensing and Geomatics.
We introduce our framework that takes ground-view images and satellite images for full 3D recovery.
We demonstrate our proposed framework on a dataset consisting of twelve satellite images and 150k video frames acquired through a vehicle-mounted GoPro camera.
- Score: 4.644618399001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D recovery from multi-stereo and stereo images, as an important application
of the image-based perspective geometry, serves many applications in computer
vision, remote sensing and Geomatics. In this chapter, the authors utilize the
imaging geometry and present approaches that perform 3D reconstruction from
cross-view images that are drastically different in their viewpoints. We
introduce our framework that takes ground-view images and satellite images for
full 3D recovery, which includes necessary methods in satellite and
ground-based point cloud generation from images, 3D data co-registration,
fusion and mesh generation. We demonstrate our proposed framework on a dataset
consisting of twelve satellite images and 150k video frames acquired through a
vehicle-mounted GoPro camera, and present the reconstruction results. We have
also compared our results with those generated by an intuitive processing
pipeline that involves typical geo-registration and meshing methods.
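The 3D data co-registration step in the abstract's pipeline can be illustrated with a minimal sketch: given corresponding points from the ground-based and satellite-derived point clouds, a rigid transform can be estimated in closed form via the Kabsch/Umeyama SVD method. This is a generic alignment primitive under the assumption of known correspondences, not the paper's actual co-registration algorithm.

```python
import numpy as np

def rigid_align(source, target):
    """Estimate the rotation R and translation t minimizing
    ||R @ s_i + t - t_i|| over corresponding points (Kabsch/Umeyama,
    no scale). Both arrays are (N, 3) with row-wise correspondences."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    # Cross-covariance of the centred point sets
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

In a full pipeline the correspondences themselves are the hard part (cross-view appearance differs drastically); a closed-form solver like this typically serves as the inner step of an iterative scheme such as ICP.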
Related papers
- MaRINeR: Enhancing Novel Views by Matching Rendered Images with Nearby References [49.71130133080821]
MaRINeR is a refinement method that leverages information of a nearby mapping image to improve the rendering of a target viewpoint.
We show improved renderings in quantitative metrics and qualitative examples from both explicit and implicit scene representations.
arXiv Detail & Related papers (2024-07-18T17:50:03Z)
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach which can predict high-quality assets with 512k Gaussians and 21 input images in only 11 GB GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- LAM3D: Large Image-Point-Cloud Alignment Model for 3D Reconstruction from Single Image [64.94932577552458]
Large Reconstruction Models have made significant strides in the realm of automated 3D content generation from single or multiple input images.
Despite their success, these models often produce 3D meshes with geometric inaccuracies, stemming from the inherent challenges of deducing 3D shapes solely from image data.
We introduce a novel framework, the Large Image and Point Cloud Alignment Model (LAM3D), which utilizes 3D point cloud data to enhance the fidelity of generated 3D meshes.
arXiv Detail & Related papers (2024-05-24T15:09:12Z)
- Reconstructing Satellites in 3D from Amateur Telescope Images [44.20773507571372]
This paper proposes a framework for the 3D reconstruction of satellites in low-Earth orbit, utilizing videos captured by small amateur telescopes.
The video data obtained from these telescopes differ significantly from data for standard 3D reconstruction tasks, characterized by intense motion blur, atmospheric turbulence, pervasive background light pollution, extended focal length and constrained observational perspectives.
We validate our approach using both synthetic datasets and actual observations of China's Space Station, showcasing its significant advantages over existing methods in reconstructing 3D space objects from ground-based observations.
arXiv Detail & Related papers (2024-04-29T03:13:09Z)
- gSDF: Geometry-Driven Signed Distance Functions for 3D Hand-Object Reconstruction [94.46581592405066]
We exploit the hand structure and use it as guidance for SDF-based shape reconstruction.
We predict kinematic chains of pose transformations and align SDFs with highly-articulated hand poses.
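The pose-aligned SDF idea can be illustrated with a toy sketch (this is not the gSDF architecture): query points are mapped into each bone's local frame by the inverse pose transform before evaluating a canonical primitive SDF, and the per-bone distances are combined with a min. The sphere primitive and radius below are purely illustrative.

```python
import numpy as np

def sphere_sdf(p, radius=0.05):
    """Signed distance to a sphere of given radius centred at the origin."""
    return np.linalg.norm(p, axis=-1) - radius

def posed_sdf(points, bone_poses, radius=0.05):
    """Toy articulated SDF: each bone carries a canonical sphere SDF.
    points: (N, 3) query points in world coordinates.
    bone_poses: list of (R, t) world-from-bone rigid transforms.
    Returns the min over bones of the SDF evaluated in each bone's frame."""
    dists = []
    for R, t in bone_poses:
        # Inverse rigid transform R^T (p - t), applied row-wise
        local = (points - t) @ R
        dists.append(sphere_sdf(local, radius))
    return np.min(np.stack(dists, axis=0), axis=0)
```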
arXiv Detail & Related papers (2023-04-24T10:05:48Z)
- 3D reconstruction from spherical images: A review of techniques, applications, and prospects [2.6432771146480283]
3D reconstruction plays an increasingly important role in modern photogrammetric systems.
With the rapid evolution and extensive use of professional and consumer-grade spherical cameras, spherical images show great potential for the 3D modeling of urban and indoor scenes.
This research provides a thorough survey of the state-of-the-art for 3D reconstruction of spherical images in terms of data acquisition, feature detection and matching, image orientation, and dense matching.
arXiv Detail & Related papers (2023-02-09T08:45:27Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning [12.741811850885309]
This paper addresses outdoor terrain mapping using overhead images obtained from an unmanned aerial vehicle.
Dense depth estimation from aerial images during flight is challenging.
We develop a joint 2D-3D learning approach to reconstruct local meshes at each camera, which can be assembled into a global environment model.
arXiv Detail & Related papers (2021-01-06T02:09:03Z)
- Photometric Multi-View Mesh Refinement for High-Resolution Satellite Images [24.245977127434212]
State-of-the-art reconstruction methods typically generate 2.5D elevation data.
We present an approach to recover full 3D surface meshes from multi-view satellite imagery.
arXiv Detail & Related papers (2020-05-10T20:37:54Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
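Several of the summaries above rely on lifting per-view depth maps to 3D geometry (e.g. for coarsely aligning views). That step can be sketched as a pinhole back-projection; the intrinsics in the usage example are illustrative values, not taken from any of the listed papers.

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Lift a depth map (H, W), measured along the optical axis, to an
    (H*W, 3) point cloud in the camera frame using pinhole intrinsics
    (focal lengths fx, fy and principal point cx, cy, all in pixels)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

For example, `backproject_depth(depth, fx=500., fy=500., cx=320., cy=240.)` maps the principal-point pixel to a point on the optical axis at its recorded depth; aligning two views then reduces to registering the resulting point clouds under the cameras' relative pose.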
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.