Neural Network Normal Estimation and Bathymetry Reconstruction from
Sidescan Sonar
- URL: http://arxiv.org/abs/2206.07819v1
- Date: Wed, 15 Jun 2022 21:12:17 GMT
- Title: Neural Network Normal Estimation and Bathymetry Reconstruction from
Sidescan Sonar
- Authors: Yiping Xie, Nils Bore and John Folkesson
- Abstract summary: Implicit neural representation learning was recently proposed to represent the bathymetric map in such an optimization framework.
In this article, we use a neural network to represent the map and optimize it under constraints from altimeter points and the surface normals estimated from sidescan.
We demonstrate the efficiency and scalability of the approach by reconstructing a high-quality bathymetry using sidescan data from a large sidescan survey.
- Score: 3.2872586139884623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sidescan sonar intensity encodes information about the changes of surface
normal of the seabed. However, other factors such as seabed geometry as well as
its material composition also affect the return intensity. One can model these
intensity changes in a forward direction, from the surface normals of the
bathymetric map and the seabed's physical properties to the measured intensity,
or alternatively use an inverse model that starts from the intensities and
models the surface normals. Here we use an inverse model which leverages
deep learning's ability to learn from data; a convolutional neural network is
used to estimate the surface normal from the sidescan. Thus the internal
properties of the seabed are only implicitly learned. Once this information is
estimated, a bathymetric map can be reconstructed through an optimization
framework that also includes altimeter readings to provide a sparse depth
profile as a constraint. Implicit neural representation learning was recently
proposed to represent the bathymetric map in such an optimization framework. In
this article, we use a neural network to represent the map and optimize it
under constraints from altimeter points and the surface normals estimated from
sidescan. By fusing multiple observations from different angles across several
sidescan lines, the estimated results are improved through optimization. We
demonstrate the efficiency and scalability of the approach by reconstructing a
high-quality bathymetry using sidescan data from a large sidescan survey. We
compare the proposed data-driven inverse-model approach to modeling sidescan
against a forward Lambertian model. We assess the quality of each reconstruction
by comparing it with data constructed from a multibeam sensor. We are thus able
to discuss the strengths and weaknesses of each approach.
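To make the optimization framework above concrete, here is a minimal sketch of fitting an implicit height-field network to sparse altimeter depths and externally estimated surface normals. It is only an illustration of the general idea, not the authors' implementation: the architecture, loss weights, learning rate, and tensor names (altim_xy, altim_z, ss_xy, ss_normals) are assumptions, and the map's normals are obtained simply by differentiating the height field with autograd.
```python
import torch
import torch.nn as nn


class BathymetryINR(nn.Module):
    """Implicit representation of the map: (x, y) -> seabed height h(x, y)."""

    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xy):
        return self.net(xy).squeeze(-1)


def surface_normals(model, xy):
    """Normals of the height field via autograd: n ~ (-dh/dx, -dh/dy, 1), normalized."""
    xy = xy.clone().requires_grad_(True)
    h = model(xy)
    grad = torch.autograd.grad(h.sum(), xy, create_graph=True)[0]
    n = torch.cat([-grad, torch.ones_like(h).unsqueeze(-1)], dim=-1)
    return n / n.norm(dim=-1, keepdim=True)


def fit(model, altim_xy, altim_z, ss_xy, ss_normals, iters=2000, w_normal=1.0):
    """Optimize the map under altimeter-depth and sidescan-normal constraints."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(iters):
        opt.zero_grad()
        # Sparse depth term from the altimeter readings along the survey lines.
        loss_depth = (model(altim_xy) - altim_z).pow(2).mean()
        # Normal term: align the map's normals with the sidescan-based estimates.
        cos = (surface_normals(model, ss_xy) * ss_normals).sum(dim=-1)
        loss_normal = (1.0 - cos).mean()
        (loss_depth + w_normal * loss_normal).backward()
        opt.step()
    return model
```
In the paper, the normal estimates come from a convolutional network applied to the sidescan data and the map is fused from many survey lines; a production implicit representation would likely also use positional encodings or sinusoidal activations rather than the plain ReLU MLP assumed here.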
Related papers
- ND-SDF: Learning Normal Deflection Fields for High-Fidelity Indoor Reconstruction [50.07671826433922]
It is non-trivial to simultaneously recover meticulous geometry and preserve smoothness across regions with differing characteristics.
We propose ND-SDF, which learns a Normal Deflection field to represent the angular deviation between the scene normal and the prior normal.
Our method not only obtains smooth weakly textured regions such as walls and floors but also preserves the geometric details of complex structures.
arXiv Detail & Related papers (2024-08-22T17:59:01Z)
- Linear Anchored Gaussian Mixture Model for Location and Width Computations of Objects in Thick Line Shape [1.7205106391379021]
The 3D image gray-level representation is considered as a finite mixture model of a statistical distribution.
An Expectation-Maximization algorithm (Algo1), using the original image as input data, is used to estimate the model parameters.
A modified EM algorithm (Algo2) is also detailed; a generic EM sketch for a Gaussian mixture is given after this list.
arXiv Detail & Related papers (2024-04-03T20:05:00Z)
- Q-SLAM: Quadric Representations for Monocular SLAM [85.82697759049388]
We reimagine volumetric representations through the lens of quadrics.
We use the quadric assumption to rectify noisy depth estimations from RGB inputs.
We introduce a novel quadric-decomposed transformer to aggregate information across quadrics.
arXiv Detail & Related papers (2024-03-12T23:27:30Z)
- Estimation of Physical Parameters of Waveforms With Neural Networks [0.8142555609235358]
The potential of Full Waveform LiDAR goes well beyond height estimation and 3D reconstruction alone.
Existing techniques in the field of LiDAR data analysis include depth estimation through inverse modeling and regression of logarithmic intensity and depth for approximating the attenuation coefficient.
This research proposes a novel solution based on neural networks for parameter estimation in LiDAR data analysis.
arXiv Detail & Related papers (2023-12-05T22:54:32Z)
- Estimating Neural Reflectance Field from Radiance Field using Tree Structures [29.431165709718794]
We present a new method for estimating the Neural Reflectance Field (NReF) of an object from a set of posed multi-view images under unknown lighting.
NReF represents the 3D geometry and appearance of objects in a disentangled manner, and is hard to estimate from images alone.
Our method solves this problem by exploiting the Neural Radiance Field (NeRF) as a proxy representation, from which we perform further decomposition.
arXiv Detail & Related papers (2022-10-09T10:21:31Z)
- DeepWSD: Projecting Degradations in Perceptual Space to Wasserstein Distance in Deep Feature Space [67.07476542850566]
We propose to model the quality degradation in perceptual space from a statistical distribution perspective.
The quality is measured based upon the Wasserstein distance in the deep feature domain.
The deep Wasserstein distance (DeepWSD), computed on features from neural networks, offers better interpretability of the quality contamination.
arXiv Detail & Related papers (2022-08-05T02:46:12Z)
- High-Resolution Bathymetric Reconstruction From Sidescan Sonar With Deep Neural Networks [3.2872586139884623]
We propose a novel data-driven approach for high-resolution bathymetric reconstruction from sidescan.
We use a convolutional network to estimate the depth contour and its aleatoric uncertainty from the sidescan images and sparse depth.
A high-quality bathymetric map can be reconstructed after fusing the depth predictions and the corresponding confidence measures from the neural networks.
arXiv Detail & Related papers (2022-06-15T20:46:22Z)
- Differentiable Diffusion for Dense Depth Estimation from Multi-view Images [31.941861222005603]
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision.
We also develop an efficient optimization routine that can simultaneously optimize the 50k+ points required for complex scene reconstruction.
arXiv Detail & Related papers (2021-06-16T16:17:34Z)
- Virtual Normal: Enforcing Geometric Constraints for Accurate and Robust Depth Prediction [87.08227378010874]
We show the importance of the high-order 3D geometric constraints for depth prediction.
By designing a loss term that enforces a simple geometric constraint, we significantly improve the accuracy and robustness of monocular depth estimation.
We show state-of-the-art results of learning metric depth on NYU Depth-V2 and KITTI.
arXiv Detail & Related papers (2021-03-07T00:08:21Z)
- GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement for Joint Depth and Surface Normal Estimation [204.13451624763735]
We propose a geometric neural network with edge-aware refinement (GeoNet++) to jointly predict both depth and surface normal maps from a single image.
GeoNet++ effectively predicts depth and surface normals with strong 3D consistency and sharp boundaries.
In contrast to current metrics that focus on evaluating pixel-wise error/accuracy, the proposed 3D geometric metric (3DGM) measures whether the predicted depth can reconstruct high-quality 3D surface normals.
arXiv Detail & Related papers (2020-12-13T06:48:01Z)
- DiverseDepth: Affine-invariant Depth Prediction Using Diverse Data [110.29043712400912]
We present a method for depth estimation with monocular images, which can predict high-quality depth on diverse scenes up to an affine transformation.
Experiments show that our method outperforms previous methods on 8 datasets by a large margin under the zero-shot test setting.
arXiv Detail & Related papers (2020-02-03T05:38:33Z)
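The Expectation-Maximization procedure referenced in the Linear Anchored Gaussian Mixture Model entry above follows the standard EM recipe for mixture models. The sketch below is a generic one-dimensional Gaussian-mixture EM under simple assumptions (random initialization, fixed iteration count); it is not that paper's line-anchored model, and its Algo1/Algo2 variants differ in details (e.g., the input data used) that this sketch does not attempt to capture.
```python
import numpy as np


def em_gmm(x, k=2, iters=100, eps=1e-9):
    """Generic EM for a 1D Gaussian mixture: returns weights, means, variances."""
    n = x.shape[0]
    rng = np.random.default_rng(0)
    w = np.full(k, 1.0 / k)                    # mixing weights
    mu = rng.choice(x, size=k, replace=False)  # initial means drawn from the data
    var = np.full(k, x.var() + eps)            # initial variances

    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i).
        log_p = (-0.5 * (x[:, None] - mu) ** 2 / var
                 - 0.5 * np.log(2 * np.pi * var) + np.log(w))
        log_p -= log_p.max(axis=1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from the responsibilities.
        nk = r.sum(axis=0) + eps
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + eps
    return w, mu, var


if __name__ == "__main__":
    # Toy example: recover two intensity modes from synthetic data.
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 0.5, 300)])
    print(em_gmm(x, k=2))
```
Each E-step computes component responsibilities for every sample, and each M-step re-estimates the weights, means, and variances from those responsibilities.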