Nonlinear Intensity Sonar Image Matching based on Deep Convolution Features
- URL: http://arxiv.org/abs/2111.08994v1
- Date: Wed, 17 Nov 2021 09:30:43 GMT
- Title: Nonlinear Intensity Sonar Image Matching based on Deep Convolution Features
- Authors: Xiaoteng Zhou, Changli Yu, Xin Yuan, Yi Wu, Haijun Feng, Citong Luo
- Abstract summary: This paper proposes a combined matching method based on phase information and deep convolution features.
It has two outstanding advantages: one is that deep convolution features can be used to measure the similarity of the local and global positions of the sonar image; the other is that local feature matching can be performed at the key target positions of the sonar image.
- Score: 10.068137357857134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of deep-sea exploration, sonar is presently the only
efficient long-distance sensing device. The complicated underwater environment,
including noise interference, low target intensity, and background dynamics,
has many negative effects on sonar imaging. Among these, the problem of
nonlinear intensity is extremely prevalent. It is also known as the anisotropy
of acoustic imaging: when AUVs carry sonar to observe the same target from
different angles, the intensity difference between image pairs can be very
large, which makes traditional matching algorithms almost ineffective. However,
image matching is the basis of comprehensive tasks such as navigation,
positioning, and mapping, so obtaining robust and accurate matching results is
highly valuable. This paper proposes a combined matching method based on phase
information and deep convolution features. It has two outstanding advantages:
one is that deep convolution features can be used to measure the similarity of
the local and global positions of the sonar image; the other is that local
feature matching can be performed at the key target positions of the sonar
image. The method requires no complex manual design and completes the matching
of nonlinear-intensity sonar images in a nearly end-to-end manner. Feature
matching experiments on deep-sea sonar images captured by AUVs show that the
proposed method achieves good matching accuracy and robustness.
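The abstract describes the method only at a high level. The sketch below illustrates, under stated assumptions, how its two named ingredients (phase information and deep convolution features) can be combined for intensity-robust matching. It is not the authors' implementation: the VGG-16 backbone, the 224x224 resize, the patch-level cosine similarity, and all function names are hypothetical choices for illustration only.

```python
# Minimal sketch, not the authors' code: (a) FFT phase correlation, which is largely
# insensitive to nonlinear intensity differences, gives a coarse alignment estimate;
# (b) cosine similarity of deep convolution features from a frozen, ImageNet-pretrained
# VGG-16 backbone scores how well two sonar regions correspond. All design choices here
# are assumptions made for illustration.
import numpy as np
import torch
import torchvision.models as models


def phase_correlation(img_a: np.ndarray, img_b: np.ndarray):
    """Estimate the (dy, dx) translation between two grayscale images from phase only."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-8      # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = img_a.shape
    if dy > h // 2:                                # wrap large shifts to negative offsets
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx), float(corr.max())


# Assumed feature extractor: frozen VGG-16 convolutional layers (torchvision >= 0.13).
_backbone = models.vgg16(weights="DEFAULT").features.eval()
_mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
_std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)


@torch.no_grad()
def deep_feature_similarity(patch_a: np.ndarray, patch_b: np.ndarray) -> float:
    """Cosine similarity between deep convolution descriptors of two sonar patches."""
    descriptors = []
    for patch in (patch_a, patch_b):
        rgb = np.repeat(patch[..., None], 3, axis=-1).astype(np.float32) / 255.0
        x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)            # (1, 3, H, W)
        x = torch.nn.functional.interpolate(x, size=(224, 224), mode="bilinear")
        x = (x - _mean) / _std
        f = _backbone(x).flatten(1)                                        # global descriptor
        descriptors.append(torch.nn.functional.normalize(f, dim=1))
    return float((descriptors[0] * descriptors[1]).sum())
```

A full pipeline in the spirit of the paper would use the deep-feature similarity to rank candidate regions globally and locally, then run local feature matching inside the best-matching key target regions; that final stage is omitted from this sketch.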
Related papers
- Adaptive Stereo Depth Estimation with Multi-Spectral Images Across All Lighting Conditions [58.88917836512819]
We propose a novel framework incorporating stereo depth estimation to enforce accurate geometric constraints.
To mitigate the effects of poor lighting on stereo matching, we introduce Degradation Masking.
Our method achieves state-of-the-art (SOTA) performance on the Multi-Spectral Stereo (MS2) dataset.
arXiv Detail & Related papers (2024-11-06T03:30:46Z)
- A Robust Multisource Remote Sensing Image Matching Method Utilizing Attention and Feature Enhancement Against Noise Interference [15.591520484047914]
We propose a robust multisource remote sensing image matching method utilizing attention and feature enhancement against noise interference.
In the first stage, we combine deep convolution with the attention mechanism of transformer to perform dense feature extraction.
In the second stage, we introduce an outlier removal network based on a binary classification mechanism.
arXiv Detail & Related papers (2024-10-01T03:35:34Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Deep Uncalibrated Photometric Stereo via Inter-Intra Image Feature Fusion [17.686973510425172]
This paper presents a new method for deep uncalibrated photometric stereo.
It efficiently utilizes the inter-image representation to guide the normal estimation.
Our method produces significantly better results than the state-of-the-art methods on both synthetic and real data.
arXiv Detail & Related papers (2022-08-06T03:59:54Z) - Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth
with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning based end-to-end depth prediction network which takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Nonlinear Intensity Underwater Sonar Image Matching Method Based on Phase Information and Deep Convolution Features [6.759506053568929]
This paper proposes a combined matching method based on phase information and deep convolution features.
It has two outstanding advantages: one is that the deep convolution features can be used to measure the similarity of the local and global positions of the sonar image; the other is that local feature matching can be performed at the key target positions of the sonar image.
arXiv Detail & Related papers (2021-11-29T02:36:49Z)
- A Matching Algorithm based on Image Attribute Transfer and Local Features for Underwater Acoustic and Optical Images [6.134248551458372]
This study applies a deep-learning-based image attribute transfer method to solve the problem of acousto-optic image matching.
Experimental results show that our proposed method could preprocess acousto-optic images effectively and obtain accurate matching results.
arXiv Detail & Related papers (2021-08-27T07:50:09Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Monocular Depth Parameterizing Networks [15.791732557395552]
We propose a network structure that provides a parameterization of a set of depth maps with feasible shapes.
This allows us to search the shapes for a photo consistent solution with respect to other images.
Our experimental evaluation shows that our method generates more accurate depth maps and generalizes better than competing state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-21T13:02:41Z)
- Robust Consistent Video Depth Estimation [65.53308117778361]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details.
In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations.
arXiv Detail & Related papers (2020-12-10T18:59:48Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive to the state-of-the-art methods, and has significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the automatically generated information and is not responsible for any consequences of its use.