Set-Transformer BeamsNet for AUV Velocity Forecasting in Complete DVL
Outage Scenarios
- URL: http://arxiv.org/abs/2212.11671v1
- Date: Thu, 22 Dec 2022 13:10:44 GMT
- Authors: Nadav Cohen, Zeev Yampolsky and Itzik Klein
- Abstract summary: We propose a Set-Transformer-based BeamsNet to regress the current AUV velocity in case of a complete DVL outage.
Our approach was evaluated using data from experiments held in the Mediterranean Sea with the Snapir AUV.
- Score: 10.64241024049424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous underwater vehicles (AUVs) are regularly used for deep ocean
applications. Commonly, the autonomous navigation task is carried out by a
fusion between two sensors: the inertial navigation system and the Doppler
velocity log (DVL). The DVL operates by transmitting four acoustic beams to the
sea floor, and once reflected back, the AUV velocity vector can be estimated.
However, in real-life scenarios, such as an uneven seabed, sea creatures
blocking the DVL's view, or roll/pitch maneuvers, the acoustic beams'
reflection may be lost, a situation known as DVL outage. Consequently, no
velocity update is available to bound the inertial solution drift. To cope
with such situations, in this paper, we leverage our BeamsNet framework and
propose a Set-Transformer-based BeamsNet (ST-BeamsNet) that utilizes inertial
data readings and previous DVL velocity measurements to regress the current AUV
velocity in case of a complete DVL outage. The proposed approach was evaluated
using data from experiments held in the Mediterranean Sea with the Snapir AUV
and was compared to a moving average (MA) estimator. Our ST-BeamsNet estimated
the AUV velocity vector with an 8.547% speed error, which is 26% better than
the MA approach.
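The beam-to-velocity relation described in the abstract (four acoustic beams, each measuring the projection of the AUV velocity onto its beam direction) can be sketched with the classical least-squares inverse; the beam geometry and velocity values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Assumed four-beam Janus configuration: beams pitched 20 deg from the
# vertical, spaced 90 deg apart in azimuth (illustrative values, not
# the Snapir AUV's actual DVL geometry).
pitch = np.deg2rad(20.0)
yaws = np.deg2rad([45.0, 135.0, 225.0, 315.0])

# Direction matrix H (4x3): row i projects the body-frame velocity v
# onto beam i, so beams = H @ v.
H = np.stack([[np.cos(y) * np.sin(pitch),
               np.sin(y) * np.sin(pitch),
               np.cos(pitch)] for y in yaws])

def beams_to_velocity(beams: np.ndarray) -> np.ndarray:
    """Least-squares recovery of the 3D velocity from four beam speeds."""
    return np.linalg.pinv(H) @ beams

v_true = np.array([1.0, 0.5, -0.1])    # m/s, body frame (illustrative)
v_est = beams_to_velocity(H @ v_true)  # noiseless beams recover v exactly
```

During a complete outage no beam returns exist, so this inverse has nothing to operate on; ST-BeamsNet instead regresses the velocity from inertial readings and previous DVL velocity measurements.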
Related papers
- DCNet: A Data-Driven Framework for DVL Calibration [2.915868985330569]
We introduce DCNet, a data-driven framework that utilizes a two-dimensional convolution kernel in an innovative way.
We demonstrate an average improvement of 70% in accuracy and 80% improvement in calibration time, compared to the baseline approach.
Our results also open up new applications for marine robotics utilizing low-cost, highly accurate DVLs.
arXiv Detail & Related papers (2024-10-11T13:47:40Z)
- Seamless Underwater Navigation with Limited Doppler Velocity Log Measurements [13.221163846643607]
We propose a hybrid neural coupled (HNC) approach for seamless AUV navigation in situations of limited DVL measurements.
First, we derive an approach to regress two or three missing DVL beams.
Then, those beams, together with the measured beams, are incorporated into the extended Kalman filter (EKF).
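The fusion step can be illustrated with a standard Kalman measurement update in beam space; the measurement matrix, noise values, and prior below are simplified assumptions for illustration, not the paper's filter design:

```python
import numpy as np

# Hypothetical beam-space measurement model: four beam speeds (some
# measured, some regressed) observe a 3D velocity state, z = H @ x + noise.
pitch = np.deg2rad(20.0)
yaws = np.deg2rad([45.0, 135.0, 225.0, 315.0])
H = np.stack([[np.cos(y) * np.sin(pitch),
               np.sin(y) * np.sin(pitch),
               np.cos(pitch)] for y in yaws])

def kalman_velocity_update(x, P, z, R):
    """Standard Kalman measurement update for a linear model z = H @ x."""
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

x = np.zeros(3)                          # prior velocity estimate (m/s)
P = np.eye(3)                            # prior covariance
R = np.diag([0.01, 0.01, 0.04, 0.04])    # regressed beams trusted less
z = H @ np.array([1.0, 0.5, -0.1])       # illustrative beam observations
x_post, P_post = kalman_velocity_update(x, P, z, R)
```

Assigning larger noise variances to the regressed beams is one plausible way to weight them below the directly measured ones.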
arXiv Detail & Related papers (2024-04-21T18:56:54Z)
- BEV-MAE: Bird's Eye View Masked Autoencoders for Point Cloud Pre-training in Autonomous Driving Scenarios [51.285561119993105]
We present BEV-MAE, an efficient masked autoencoder pre-training framework for LiDAR-based 3D object detection in autonomous driving.
Specifically, we propose a bird's eye view (BEV) guided masking strategy to guide the 3D encoder learning feature representation.
We introduce a learnable point token to maintain a consistent receptive field size of the 3D encoder.
arXiv Detail & Related papers (2022-12-12T08:15:03Z)
- LiBeamsNet: AUV Velocity Vector Estimation in Situations of Limited DVL Beam Measurements [12.572597882082054]
AUVs can operate in deep underwater environments beyond human reach.
A standard solution for the autonomous navigation problem can be obtained by fusing the inertial navigation system and the Doppler velocity log sensor.
In this paper we propose a deep learning framework, LiBeamsNet, that utilizes the inertial data and the partial beam velocities to regress the missing beams.
arXiv Detail & Related papers (2022-10-20T20:17:23Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose the velocity-aware streaming AP (VsAP) to jointly evaluate accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- BeamsNet: A data-driven Approach Enhancing Doppler Velocity Log Measurements for Autonomous Underwater Vehicle Navigation [12.572597882082054]
This paper proposes BeamsNet, an end-to-end deep learning framework to regress the estimated DVL velocity vector.
Our results show that the proposed approach achieved an improvement of more than 60% in estimating the DVL velocity vector.
arXiv Detail & Related papers (2022-06-27T19:38:38Z)
- An Empirical Study of Training End-to-End Vision-and-Language Transformers [50.23532518166621]
We present METER (Multimodal End-to-end TransformER), through which we investigate how to design and pre-train a fully transformer-based VL model.
Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion (e.g., merged attention vs. co-
arXiv Detail & Related papers (2021-11-03T17:55:36Z)
- A Deep Learning Approach To Dead-Reckoning Navigation For Autonomous Underwater Vehicles With Limited Sensor Payloads [0.0]
A Recurrent Neural Network (RNN) was developed to predict the relative horizontal velocities of an Autonomous Underwater Vehicle (AUV).
The RNN is trained using experimental data, where a Doppler velocity logger (DVL) provided ground truth velocities.
The predictions of the relative velocities were implemented in a dead-reckoning algorithm to approximate north and east positions.
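The dead-reckoning step can be sketched as a simple integration of heading-rotated body-frame velocities; the trajectory values and time step below are illustrative assumptions:

```python
import numpy as np

def dead_reckon(p0, velocities, headings, dt):
    """Integrate body-frame horizontal velocities to north/east position.

    p0: initial (north, east) position in meters.
    velocities: (N, 2) body-frame (forward, lateral) velocities in m/s.
    headings: (N,) heading angles in radians.
    dt: time step in seconds.
    """
    p = np.asarray(p0, dtype=float)
    track = [p.copy()]
    for (u, v), psi in zip(velocities, headings):
        vn = u * np.cos(psi) - v * np.sin(psi)   # north velocity
        ve = u * np.sin(psi) + v * np.cos(psi)   # east velocity
        p = p + dt * np.array([vn, ve])
        track.append(p.copy())
    return np.array(track)

vel = np.tile([1.0, 0.0], (10, 1))   # 1 m/s straight ahead for 10 s
track = dead_reckon([0.0, 0.0], vel, np.zeros(10), dt=1.0)
# final position: 10 m north, 0 m east
```

Because the position is an open-loop integral of velocity, any velocity error grows into unbounded position drift, which is why accurate velocity prediction during DVL outages matters.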
arXiv Detail & Related papers (2021-10-01T21:40:10Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z) - Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of
Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.