3D sequential image mosaicing for underwater navigation and mapping
- URL: http://arxiv.org/abs/2110.01382v1
- Date: Mon, 4 Oct 2021 12:32:51 GMT
- Title: 3D sequential image mosaicing for underwater navigation and mapping
- Authors: E. Nocerino (LIS), F. Menna (FBK), B. Chemisky (LIS), P. Drap (LIS)
- Abstract summary: We propose a modified image mosaicing algorithm that, coupled with image-based real-time navigation and mapping algorithms, provides two visual navigation aids.
The implemented procedure is detailed, and experiments in different underwater scenarios are presented and discussed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although fully autonomous mapping methods are becoming more and more common and reliable, the human operator is still regularly employed in many 3D surveying missions. In a number of underwater applications, divers or pilots of remotely operated vehicles (ROVs) are still considered irreplaceable, and tools for real-time visualization of the mapped scene are essential to support and maximize the navigation and surveying efforts. For underwater exploration, image mosaicing has proved to be a valid and effective approach to visualize large mapped areas, often employed in conjunction with autonomous underwater vehicles (AUVs) and ROVs. In this work, we propose the use of a modified image mosaicing algorithm that, coupled with image-based real-time navigation and mapping algorithms, provides two visual navigation aids. The first is a classic image mosaic, where the recorded and processed images are incrementally added, named 2D sequential image mosaicing (2DSIM). The second geometrically transforms the images so that they are projected as planar point clouds in 3D space, providing incremental point cloud mosaicing, named 3D sequential image plane projection (3DSIP). In the paper, the implemented procedure is detailed, and experiments in different underwater scenarios are presented and discussed. Technical considerations about computational effort, frame rate capabilities, and scalability to different and more compact architectures (i.e., embedded systems) are also provided.
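To make the 3DSIP idea concrete, the sketch below back-projects one image onto an assumed locally planar scene at a known camera pose and average scene distance, producing a colored planar point cloud in world coordinates. It is a minimal illustration of the projection step only; the function name, the distortion-free pinhole model, and the fixed plane depth are simplifying assumptions, not the authors' exact formulation.

```python
import numpy as np

def image_to_planar_cloud(image, K, R_wc, t_wc, plane_depth, step=4):
    """Project an RGB image onto an assumed fronto-parallel plane at
    `plane_depth` metres along the optical axis and express the resulting
    colored points in world coordinates (a 3DSIP-style planar point cloud)."""
    H, W = image.shape[:2]
    us, vs = np.meshgrid(np.arange(0, W, step), np.arange(0, H, step))
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(us.size)], axis=0)  # 3xN homogeneous pixels
    rays = np.linalg.inv(K) @ pix              # viewing rays in the camera frame (z = 1)
    pts_cam = rays * plane_depth               # intersect rays with the plane z = plane_depth
    pts_world = (R_wc @ pts_cam).T + t_wc      # camera-to-world transform, one row per point
    colors = image[vs.ravel(), us.ravel()]     # per-point RGB sampled from the source image
    return np.hstack([pts_world, colors.astype(np.float64)])  # Nx6 array: X, Y, Z, R, G, B
```

Appending the cloud of each processed keyframe to a running buffer then gives the kind of incremental point cloud mosaic described in the abstract; 2DSIM corresponds to the analogous 2D case, where images are warped and accumulated on a planar canvas.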
Related papers
- Visibility-Uncertainty-guided 3D Gaussian Inpainting via Scene Conceptional Learning [63.94919846010485]
3D Gaussian inpainting (3DGI) is challenging in effectively leveraging complementary visual and semantic cues from multiple input views.
We propose a method that measures the visibility uncertainties of 3D points across different input views and uses them to guide 3DGI.
We build a novel 3DGI framework, VISTA, by integrating VISibility-uncerTainty-guided 3DGI with scene conceptuAl learning.
arXiv Detail & Related papers (2025-04-23T06:21:11Z)
- Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z)
- VFMM3D: Releasing the Potential of Image by Vision Foundation Model for Monocular 3D Object Detection [80.62052650370416]
Monocular 3D object detection holds significant importance across various applications, including autonomous driving and robotics.
In this paper, we present VFMM3D, an innovative framework that leverages the capabilities of Vision Foundation Models (VFMs) to accurately transform single-view images into LiDAR point cloud representations.
arXiv Detail & Related papers (2024-04-15T03:12:12Z)
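A common way to turn a single RGB image into a LiDAR-style representation is to back-project a predicted depth map through the camera intrinsics (the pseudo-LiDAR construction). The sketch below shows that generic lifting step under a simple pinhole model; it is an assumption-laden illustration, not VFMM3D's specific pipeline.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, K):
    """Back-project a per-pixel depth map (HxW, metres) into a 3D point cloud
    in the camera frame -- the usual first step of pseudo-LiDAR style methods."""
    H, W = depth.shape
    us, vs = np.meshgrid(np.arange(W), np.arange(H))
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    z = depth.ravel()
    x = (us.ravel() - cx) * z / fx     # pinhole model: X = (u - cx) * Z / fx
    y = (vs.ravel() - cy) * z / fy     # pinhole model: Y = (v - cy) * Z / fy
    points = np.stack([x, y, z], axis=1)
    return points[z > 0]               # drop pixels with no valid depth
```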
- SimDistill: Simulated Multi-modal Distillation for BEV 3D Object Detection [56.24700754048067]
Multi-view camera-based 3D object detection has become popular due to its low cost, but accurately inferring 3D geometry solely from camera data remains challenging.
We propose a Simulated multi-modal Distillation (SimDistill) method by carefully crafting the model architecture and distillation strategy.
Our SimDistill can learn better feature representations for 3D object detection while maintaining a cost-effective camera-only deployment.
arXiv Detail & Related papers (2023-03-29T16:08:59Z)
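Distillation from a multi-modal teacher to a camera-only student is frequently implemented as a feature-imitation loss on bird's-eye-view (BEV) feature maps. The PyTorch-style sketch below illustrates that general idea; the plain MSE objective and the optional foreground mask are assumptions, not SimDistill's exact losses.

```python
import torch
import torch.nn.functional as F

def bev_imitation_loss(student_bev, teacher_bev, fg_mask=None):
    """MSE imitation loss between camera-only student BEV features (B,C,H,W)
    and frozen multi-modal teacher BEV features; `fg_mask` (B,H,W) optionally
    restricts the loss to foreground cells, a common trick in detection KD."""
    loss = F.mse_loss(student_bev, teacher_bev.detach(), reduction="none")
    if fg_mask is not None:
        loss = loss * fg_mask.unsqueeze(1).float()           # broadcast mask over channels
        return loss.sum() / fg_mask.float().sum().clamp(min=1.0)
    return loss.mean()
```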
- Real Time Incremental Image Mosaicking Without Use of Any Camera Parameter [1.2891210250935146]
This paper proposes a UAV-based system for real-time creation of incremental mosaics.
Inspired by previous approaches, the mosaicking process consists of extracting features from the images, matching similar key points between images, finding a homography matrix to warp and align the images, and blending the images to obtain better-looking mosaics.
arXiv Detail & Related papers (2022-12-05T14:28:54Z)
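The pipeline summarised in the entry above (feature extraction, keypoint matching, homography estimation, warping, blending) maps almost directly onto standard OpenCV primitives. The sketch below shows one incremental step under simplifying assumptions: the mosaic canvas is already large enough, and naive overwrite blending stands in for whatever blending scheme the paper actually uses.

```python
import cv2
import numpy as np

def add_to_mosaic(mosaic, frame, min_matches=10):
    """One incremental mosaicking step: ORB features, brute-force matching,
    RANSAC homography, perspective warp, naive overwrite blending."""
    orb = cv2.ORB_create(2000)
    kp_m, des_m = orb.detectAndCompute(mosaic, None)
    kp_f, des_f = orb.detectAndCompute(frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_f, des_m), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return mosaic                                        # too little overlap: skip frame
    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)     # maps frame -> mosaic coordinates
    warped = cv2.warpPerspective(frame, H, (mosaic.shape[1], mosaic.shape[0]))
    covered = warped.sum(axis=2) > 0                         # pixels written by the new frame
    mosaic[covered] = warped[covered]
    return mosaic
```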
- PointMCD: Boosting Deep Point Cloud Encoders via Multi-view Cross-modal Distillation for 3D Shape Recognition [55.38462937452363]
We propose a unified multi-view cross-modal distillation architecture, including a pretrained deep image encoder as the teacher and a deep point encoder as the student.
By pair-wise aligning multi-view visual and geometric descriptors, we can obtain more powerful deep point encoders without exhausting and complicated network modification.
arXiv Detail & Related papers (2022-07-07T07:23:20Z)
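The pair-wise alignment of multi-view visual and geometric descriptors described above is often realised as a simple similarity loss between each rendered view's (frozen) image-teacher descriptor and the point encoder's descriptor. The sketch below uses cosine similarity purely for illustration and does not reproduce PointMCD's exact objective.

```python
import torch
import torch.nn.functional as F

def pairwise_alignment_loss(point_desc, view_descs):
    """Align the student point-cloud descriptor (B, D) with the frozen image
    teacher's descriptors of V rendered views (B, V, D) by maximising the
    per-view cosine similarity."""
    p = F.normalize(point_desc, dim=-1).unsqueeze(1)   # (B, 1, D)
    v = F.normalize(view_descs.detach(), dim=-1)       # (B, V, D), no gradient into the teacher
    return (1.0 - (p * v).sum(dim=-1)).mean()          # mean of (1 - cos) over batch and views
```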
- TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo [55.30992853477754]
We present TANDEM, a real-time monocular tracking and dense mapping framework.
For pose estimation, TANDEM performs photometric bundle adjustment based on a sliding window of keyframes.
TANDEM shows state-of-the-art real-time 3D reconstruction performance.
arXiv Detail & Related papers (2021-11-14T19:01:02Z)
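Photometric bundle adjustment of the kind mentioned in the entry above minimises intensity differences between keyframes over the window's poses (and depths). The sketch below spells out a single per-pixel photometric residual in a generic direct-alignment form; the helper name and the lack of interpolation or bounds checking are simplifications, not TANDEM's actual cost function.

```python
import numpy as np

def photometric_residual(I_ref, I_cur, u, v, depth, K, T_cur_ref):
    """Per-pixel residual of direct photometric alignment: back-project pixel
    (u, v) of the reference image at `depth`, move it with the relative pose
    T_cur_ref (4x4), project it into the current image, compare intensities."""
    X_ref = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))   # 3D point, reference camera frame
    X_cur = T_cur_ref[:3, :3] @ X_ref + T_cur_ref[:3, 3]         # point in the current camera frame
    proj = K @ X_cur
    u2, v2 = proj[0] / proj[2], proj[1] / proj[2]                # projected pixel (no bounds check)
    return float(I_ref[v, u]) - float(I_cur[int(round(v2)), int(round(u2))])
```

A sliding-window photometric bundle adjustment then sums squared residuals of this form over many pixels and keyframes and optimises the window's poses (and depth parameters) jointly.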
- Hyperspectral 3D Mapping of Underwater Environments [0.7087237546722617]
We present an initial method for creating hyperspectral 3D reconstructions of underwater environments.
By fusing the data gathered by a classical RGB camera, an inertial navigation system and a hyperspectral push-broom camera, we show that the proposed method creates highly accurate 3D reconstructions with hyperspectral textures.
arXiv Detail & Related papers (2021-10-13T08:37:22Z)
- Real-time dense 3D Reconstruction from monocular video data captured by low-cost UAVs [0.3867363075280543]
Real-time 3D reconstruction enables fast dense mapping of the environment which benefits numerous applications, such as navigation or live evaluation of an emergency.
In contrast to most real-time capable approaches, our approach does not need an explicit depth sensor.
By exploiting the self-motion of the unmanned aerial vehicle (UAV) flying with oblique view around buildings, we estimate both camera trajectory and depth for selected images with enough novel content.
arXiv Detail & Related papers (2021-04-21T13:12:17Z)
- Mesh Reconstruction from Aerial Images for Outdoor Terrain Mapping Using Joint 2D-3D Learning [12.741811850885309]
This paper addresses outdoor terrain mapping using overhead images obtained from an unmanned aerial vehicle.
Dense depth estimation from aerial images during flight is challenging.
We develop a joint 2D-3D learning approach to reconstruct local meshes at each camera, which can be assembled into a global environment model.
arXiv Detail & Related papers (2021-01-06T02:09:03Z)
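Local per-camera meshes of the kind described in the entry above are often built by back-projecting a depth map and triangulating the regular pixel grid. The sketch below shows that generic construction only; depth-discontinuity filtering and the paper's joint 2D-3D network are omitted, so this is not the authors' exact method.

```python
import numpy as np

def depth_to_local_mesh(depth, K, step=2):
    """Build a simple per-camera triangle mesh from an HxW depth map:
    vertices from pinhole back-projection, faces from splitting each cell
    of the (subsampled) pixel grid into two triangles."""
    vs, us = np.meshgrid(np.arange(0, depth.shape[0], step),
                         np.arange(0, depth.shape[1], step), indexing="ij")
    z = depth[vs, us]
    x = (us - K[0, 2]) * z / K[0, 0]
    y = (vs - K[1, 2]) * z / K[1, 1]
    verts = np.stack([x, y, z], axis=-1).reshape(-1, 3)          # grid vertices, camera frame
    h, w = vs.shape
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1], idx[:-1, 1:]                           # top-left, top-right of each cell
    c, d = idx[1:, :-1], idx[1:, 1:]                             # bottom-left, bottom-right
    faces = np.concatenate([np.stack([a, b, c], -1).reshape(-1, 3),
                            np.stack([b, d, c], -1).reshape(-1, 3)])
    return verts, faces
```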
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
- DeepURL: Deep Pose Estimation Framework for Underwater Relative Localization [21.096166727043077]
We propose a real-time deep learning approach for determining the 6D relative pose of Autonomous Underwater Vehicles (AUV) from a single image.
An image-to-image translation network is employed to bridge the gap between the rendered and the real images, producing synthetic images for training.
arXiv Detail & Related papers (2020-03-11T21:11:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.