Investigating Spherical Epipolar Rectification for Multi-View Stereo 3D Reconstruction
- URL: http://arxiv.org/abs/2204.04141v1
- Date: Fri, 8 Apr 2022 15:50:20 GMT
- Title: Investigating Spherical Epipolar Rectification for Multi-View Stereo 3D Reconstruction
- Authors: Mostafa Elhashash, Rongjun Qin
- Abstract summary: We propose a spherical model for epipolar rectification to minimize distortions caused by differences in principal rays.
We show through qualitative and quantitative evaluation that the proposed approach performs better than frame-based epipolar correction.
- Score: 1.0152838128195467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-view stereo (MVS) reconstruction is essential for creating 3D models.
The approach involves applying epipolar rectification followed by dense
matching for disparity estimation. However, existing approaches face challenges
in applying dense matching for images with different viewpoints primarily due
to large differences in object scale. In this paper, we propose a spherical
model for epipolar rectification to minimize distortions caused by differences
in principal rays. We evaluate the proposed approach using two aerial-based
datasets consisting of multi-camera head systems. We show through qualitative
and quantitative evaluation that the proposed approach performs better than
frame-based epipolar correction by enhancing the completeness of point clouds
by up to 4.05% while improving the accuracy by up to 10.23% using LiDAR data as
ground truth.
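To make the geometry concrete, here is a minimal sketch of the general idea behind spherical epipolar rectification: viewing rays are rotated into a frame whose x-axis follows the baseline, so the angle of each ray's epipolar plane about that axis is shared between the two views and can serve as the rectified "row". The axis convention, function names, and toy cameras below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def epipolar_frame(C1, C2, up=np.array([0.0, 0.0, 1.0])):
    """Rotation whose rows are the axes of an epipolar-aligned frame.

    The x-axis points along the baseline C1 -> C2; 'up' is an arbitrary
    reference direction (degenerate if parallel to the baseline).
    """
    bx = (C2 - C1) / np.linalg.norm(C2 - C1)
    bz = np.cross(bx, up)
    bz = bz / np.linalg.norm(bz)
    by = np.cross(bz, bx)
    return np.stack([bx, by, bz])

def spherical_epipolar_coords(R, ray):
    """Map a viewing ray to (epipolar-plane angle, angle along the arc).

    Each scene point spans one epipolar plane with the baseline; the plane's
    angle about the baseline axis is identical in both cameras, playing the
    role of the 'row' in classical planar rectification.
    """
    x, y, z = R @ (ray / np.linalg.norm(ray))
    row = np.arctan2(z, y)                  # epipolar-plane angle (shared)
    col = np.arccos(np.clip(x, -1.0, 1.0))  # position along the epipolar arc
    return row, col

# Toy check: the two rays to one scene point get the same 'row'.
C1, C2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
X = np.array([0.5, 2.0, 0.3])               # hypothetical scene point
R = epipolar_frame(C1, C2)
row1, _ = spherical_epipolar_coords(R, X - C1)
row2, _ = spherical_epipolar_coords(R, X - C2)
assert np.isclose(row1, row2)
```

In such a parameterization, dense matching reduces to a 1D search along equal-"row" arcs, which is what makes rectification useful for disparity estimation.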
Related papers
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state-of-the-art across all benchmarks, supported by comprehensive ablation studies validating our design choices.
Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- GEOcc: Geometrically Enhanced 3D Occupancy Network with Implicit-Explicit Depth Fusion and Contextual Self-Supervision [49.839374549646884]
This paper presents GEOcc, a Geometric-Enhanced Occupancy network tailored for vision-only surround-view perception.
Our approach achieves State-Of-The-Art performance on the Occ3D-nuScenes dataset with the least image resolution needed and the most lightweight image backbone.
arXiv Detail & Related papers (2024-05-17T07:31:20Z)
- MonoPatchNeRF: Improving Neural Radiance Fields with Patch-based Monocular Guidance [29.267039546199094]
The latest regularized Neural Radiance Field (NeRF) approaches produce poor geometry and view extrapolation for large-scale sparse-view scenes.
We take a density-based approach, sampling patches instead of individual rays to better incorporate monocular depth and normal estimates.
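As a rough sketch of what "sampling patches instead of individual rays" can look like in practice, the snippet below draws one coherent K x K pixel block, over which patch-level comparisons against monocular depth and normal estimates become meaningful; the function and its NumPy formulation are hypothetical, not the paper's code.

```python
import numpy as np

def sample_ray_patch(H, W, K, rng):
    """Draw one coherent K x K block of pixel coordinates.

    Unlike independent random rays, a patch keeps samples spatially
    contiguous, so patch-level depth/normal comparisons are meaningful.
    """
    u0 = rng.integers(0, W - K + 1)        # top-left corner, fits in image
    v0 = rng.integers(0, H - K + 1)
    us, vs = np.meshgrid(np.arange(u0, u0 + K), np.arange(v0, v0 + K))
    return np.stack([us.ravel(), vs.ravel()], axis=1)   # (K*K, 2)

rng = np.random.default_rng(0)
patch = sample_ray_patch(480, 640, K=8, rng=rng)
print(patch.shape)   # (64, 2): one 8x8 block instead of 64 scattered rays
```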
Our approach significantly improves geometric accuracy on the ETH3D benchmark.
arXiv Detail & Related papers (2024-04-12T05:43:10Z)
- Q-SLAM: Quadric Representations for Monocular SLAM [89.05457684629621]
Monocular SLAM has long grappled with the challenge of accurately modeling 3D geometries.
Recent advances in Neural Radiance Fields (NeRF)-based monocular SLAM have shown promise.
We propose a novel approach that reimagines volumetric representations through the lens of quadric forms.
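For readers unfamiliar with quadric forms, a quadric surface is the zero set of a single homogeneous quadratic, which makes it a compact (10-parameter) stand-in for local geometry; the sketch below evaluates that algebraic residual and is only illustrative of the representation, not of Q-SLAM's pipeline.

```python
import numpy as np

def quadric_residual(Q, X):
    """Algebraic residual of a 3D point against a quadric surface.

    A quadric is the zero set of Xh^T Q Xh with Xh homogeneous, so a 4x4
    symmetric Q (10 free parameters) compactly describes a curved surface.
    """
    Xh = np.append(X, 1.0)
    return Xh @ Q @ Xh

# The unit sphere x^2 + y^2 + z^2 - 1 = 0 as a quadric.
Q = np.diag([1.0, 1.0, 1.0, -1.0])
print(quadric_residual(Q, np.array([1.0, 0.0, 0.0])))   # 0.0: on the surface
print(quadric_residual(Q, np.array([2.0, 0.0, 0.0])))   # 3.0: off the surface
```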
arXiv Detail & Related papers (2024-03-12T23:27:30Z)
- SD-MVS: Segmentation-Driven Deformation Multi-View Stereo with Spherical Refinement and EM optimization [6.886220026399106]
We introduce Segmentation-Driven Deformation Multi-View Stereo (SD-MVS) to tackle challenges in 3D reconstruction of textureless areas.
We are the first to adopt the Segment Anything Model (SAM) to distinguish semantic instances in scenes.
We propose a unique refinement strategy that combines spherical coordinates, gradient descent on normals, and a pixelwise search interval on depths.
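A minimal sketch of the spherical-coordinate part of such a refinement is shown below: the normal is parameterized by two angles, so gradient descent keeps it unit-length by construction. The pixelwise depth search interval is not modeled here, and the toy cost stands in for a real photometric matching cost.

```python
import numpy as np

def normal_from_spherical(theta, phi):
    """Unit normal from two spherical angles (2 free parameters)."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def descend(theta, phi, cost, lr=0.05, eps=1e-4):
    """One numerical gradient-descent step on the spherical angles.

    Optimizing (theta, phi) keeps the normal on the unit sphere by
    construction, which is the usual appeal of this parameterization.
    """
    g_t = (cost(theta + eps, phi) - cost(theta - eps, phi)) / (2 * eps)
    g_p = (cost(theta, phi + eps) - cost(theta, phi - eps)) / (2 * eps)
    return theta - lr * g_t, phi - lr * g_p

# Toy cost: distance to a hypothetical 'true' normal; a real matching cost
# would score photometric consistency instead.
target = normal_from_spherical(0.7, 1.2)
cost = lambda t, p: 1.0 - normal_from_spherical(t, p) @ target
theta, phi = 0.2, 0.3
for _ in range(300):
    theta, phi = descend(theta, phi, cost)
print(normal_from_spherical(theta, phi))   # converges toward 'target'
```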
arXiv Detail & Related papers (2024-01-12T05:25:57Z)
- RNb-NeuS: Reflectance and Normal-based Multi-View 3D Reconstruction [3.1820300989695833]
This paper introduces a versatile paradigm for integrating multi-view reflectance and normal maps acquired through photometric stereo.
Our approach employs a pixel-wise joint re-parameterization of reflectance and normal, considering them as a vector of radiances rendered under simulated, varying illumination.
It significantly improves the detailed 3D reconstruction of areas with high curvature or low visibility.
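Assuming a Lambertian shading model for illustration, the sketch below shows how an (albedo, normal) pair can be encoded as a vector of radiances under simulated directional lights, which is the spirit of the re-parameterization described above; the light directions and names are hypothetical.

```python
import numpy as np

def radiance_vector(albedo, normal, light_dirs):
    """Encode (albedo, normal) as radiances under simulated lights.

    Lambertian shading is assumed: each entry is albedo * max(0, <n, l>).
    The resulting vector jointly carries reflectance and normal information.
    """
    normal = normal / np.linalg.norm(normal)
    return albedo * np.clip(light_dirs @ normal, 0.0, None)

# Three hypothetical light directions spanning the upper hemisphere.
L = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [-1.0, -1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)
print(radiance_vector(0.8, np.array([0.0, 0.0, 1.0]), L))
```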
arXiv Detail & Related papers (2023-12-02T19:49:27Z)
- Leveraging Monocular Disparity Estimation for Single-View Reconstruction [8.583436410810203]
We leverage advances in monocular depth estimation to obtain disparity maps.
We transform 2D normalized disparity maps into 3D point clouds by solving an optimization on the relevant camera parameters.
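A minimal sketch of this back-projection step, assuming the camera parameters have already been recovered (the paper's optimization itself is not shown), using the standard stereo relation Z = f * baseline / disparity:

```python
import numpy as np

def disparity_to_points(disp, f, baseline, cx, cy):
    """Back-project a disparity map into a 3D point cloud.

    Uses the pinhole/stereo relation Z = f * baseline / disparity and
    takes the camera parameters as given.
    """
    H, W = disp.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    valid = disp > 0                       # zero disparity = point at infinity
    Z = f * baseline / disp[valid]
    X = (us[valid] - cx) * Z / f
    Y = (vs[valid] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)     # (N, 3) points in the camera frame

disp = np.full((4, 4), 2.0)                # toy constant-disparity map
pts = disparity_to_points(disp, f=500.0, baseline=0.1, cx=2.0, cy=2.0)
print(pts.shape, pts[0])                   # 16 points at depth 25 m
```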
arXiv Detail & Related papers (2022-07-01T03:05:40Z)
- A Model for Multi-View Residual Covariances based on Perspective Deformation [88.21738020902411]
We derive a model for the covariance of the visual residuals in multi-view SfM, odometry and SLAM setups.
We validate our model with synthetic and real data and integrate it into photometric and feature-based Bundle Adjustment.
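Once such a covariance model is available, the standard way to use it in bundle adjustment is as a Mahalanobis weighting of the residuals; the sketch below shows that weighting only, with a made-up covariance, not the paper's derived model.

```python
import numpy as np

def whitened_cost(residuals, Sigma):
    """Covariance-weighted (Mahalanobis) cost for bundle adjustment.

    Plugging Sigma^{-1} in as the residual weight is the standard way to
    exploit a residual covariance model in photometric or feature-based BA.
    """
    return residuals @ np.linalg.solve(Sigma, residuals)

r = np.array([0.3, -0.1])                  # toy 2D visual residual
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])           # hypothetical residual covariance
print(whitened_cost(r, Sigma))
```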
arXiv Detail & Related papers (2022-02-01T21:21:56Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
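A hedged sketch of how rendered normals can be blended with photometric-stereo normals in a single objective; the weight and the cosine penalty are illustrative choices, not the paper's exact loss.

```python
import numpy as np

def blended_loss(rgb_pred, rgb_gt, n_pred, n_ps, w=0.1):
    """Rendering loss plus a photometric-stereo normal prior.

    n_pred: unit normals derived from the radiance field (e.g., from
    density gradients); n_ps: unit normals from a PS network.
    """
    photometric = np.mean((rgb_pred - rgb_gt) ** 2)
    normal_term = np.mean(1.0 - np.sum(n_pred * n_ps, axis=-1))
    return photometric + w * normal_term

rgb_pred, rgb_gt = np.zeros((8, 3)), np.full((8, 3), 0.1)
n = np.tile([0.0, 0.0, 1.0], (8, 1))
print(blended_loss(rgb_pred, rgb_gt, n, n))   # normal term vanishes: 0.01
```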
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
- D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry [57.5549733585324]
D3VO is a novel framework for monocular visual odometry that exploits deep networks on three levels -- deep depth, pose and uncertainty estimation.
We first propose a novel self-supervised monocular depth estimation network trained on stereo videos without any external supervision.
We model the photometric uncertainties of pixels on the input images, which improves the depth estimation accuracy.
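The standard heteroscedastic form of such an uncertainty-weighted photometric loss is sketched below; D3VO's exact loss may differ in detail, and the residual values here are toy numbers.

```python
import numpy as np

def uncertainty_weighted_loss(residual, log_sigma2):
    """Photometric loss weighted by predicted per-pixel uncertainty.

    Pixels the network marks as unreliable (large sigma) are down-weighted,
    while the log term stops it from inflating sigma everywhere.
    """
    return np.mean(residual ** 2 / np.exp(log_sigma2) + log_sigma2)

res = np.array([0.05, 0.02, 0.9])        # toy photometric residuals
logs = np.array([-4.0, -4.0, 0.0])       # high uncertainty on the outlier
print(uncertainty_weighted_loss(res, logs))
```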
arXiv Detail & Related papers (2020-03-02T17:47:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.