Estimation of Sea State Parameters from Ship Motion Responses Using
Attention-based Neural Networks
- URL: http://arxiv.org/abs/2301.08949v1
- Date: Sat, 21 Jan 2023 13:21:50 GMT
- Title: Estimation of Sea State Parameters from Ship Motion Responses Using
Attention-based Neural Networks
- Authors: Denis Selimović, Franko Hržić, Jasna Prpić-Oršić, Jonatan Lerga
- Abstract summary: We apply the novel attention-based neural network (AT-NN) for estimating sea state parameters from raw time-series data of ship pitch, heave, and roll motions.
It has been successfully demonstrated that the proposed modifications of state-of-the-art techniques reduced estimation MSE by 23% and MAE by 16% compared to the original methods.
- Score: 0.6193838300896448
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: On-site estimation of sea state parameters is crucial for ship navigation
systems' accuracy, stability, and efficiency. Extensive research has been
conducted on model-based estimating methods utilizing only ship motion
responses. Model-free approaches based on machine learning (ML) have recently
gained popularity, and estimation from time-series of ship motion responses
using deep learning (DL) methods has given promising results. Accordingly, in
this study, we apply the novel, attention-based neural network (AT-NN) for
estimating sea state parameters (wave height, zero-crossing period, and
relative wave direction) from raw time-series data of ship pitch, heave, and
roll motions. Despite using reduced input data, it has been successfully
demonstrated that the proposed modifications of state-of-the-art techniques
(convolutional neural networks (CNN) for regression, multivariate long
short-term memory CNN, and the sliding puzzle neural network) reduced
estimation MSE by 23% and MAE by 16% compared to the original methods.
Furthermore, the proposed technique based on AT-NN outperformed all tested
methods (original and enhanced), reducing estimation MSE by up to 94% and MAE
by up to 70%. Finally, we also propose a novel approach, based on the
Monte-Carlo dropout method, for interpreting the uncertainty of neural network
outputs and enhancing the model's trustworthiness.
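As a point of reference, the following is a minimal sketch (in PyTorch, not the authors' published architecture) of how an attention-based regressor can map a window of raw pitch, heave, and roll signals to the three sea state parameters, and how Monte-Carlo dropout can attach an uncertainty estimate to each prediction. The window length, model width, head count, and dropout rate are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ATNNRegressor(nn.Module):
    """Attention-based regressor: ship motion window -> sea state parameters."""

    def __init__(self, n_channels=3, n_outputs=3, d_model=64, n_heads=4,
                 n_layers=2, p_drop=0.2):
        super().__init__()
        self.embed = nn.Linear(n_channels, d_model)      # per-time-step embedding
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128,
            dropout=p_drop, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        self.head = nn.Sequential(                       # regression head with dropout
            nn.Linear(d_model, 64), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(64, n_outputs))                    # wave height, Tz, direction

    def forward(self, x):                                # x: (batch, time, 3)
        h = self.encoder(self.embed(x))                  # self-attention over time
        return self.head(h.mean(dim=1))                  # temporal average pooling


@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Monte-Carlo dropout: keep dropout active and aggregate stochastic passes."""
    model.train()   # re-enables dropout (no batch norm in this sketch, so safe)
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)           # predictive mean and spread


if __name__ == "__main__":
    model = ATNNRegressor()
    x = torch.randn(8, 256, 3)      # 8 windows of 256 samples of pitch, heave, roll
    mean, std = mc_dropout_predict(model, x)
    print(mean.shape, std.shape)    # both (8, 3)
```

The spread across stochastic forward passes can then be read as a per-output trust indicator, in the spirit of the Monte-Carlo dropout interpretation described above.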
Related papers
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Likelihood-Free Parameter Estimation with Neural Bayes Estimators [0.0]
Neural point estimators are neural networks that map data to parameter point estimates.
We aim to raise statisticians' awareness of this relatively new inferential tool, and to facilitate its adoption by providing user-friendly open-source software (a generic sketch of the simulate-then-train idea behind such estimators is given after this list).
arXiv Detail & Related papers (2022-08-27T06:58:16Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- DeepBayes -- an estimator for parameter estimation in stochastic nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators that leverage the power of deep recurrent neural networks in learning an estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-04T18:12:17Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Neural Networks for Parameter Estimation in Intractable Models [0.0]
We show how to estimate parameters from max-stable processes, where inference is exceptionally challenging.
We use data from model simulations as input and train deep neural networks to learn statistical parameters.
arXiv Detail & Related papers (2021-07-29T21:59:48Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- Robust and integrative Bayesian neural networks for likelihood-free parameter inference [0.0]
State-of-the-art neural network-based methods for learning summary statistics have delivered promising results for simulation-based likelihood-free parameter inference.
This work proposes a robust integrated approach that learns summary statistics using Bayesian neural networks, and directly estimates the posterior density using categorical distributions.
arXiv Detail & Related papers (2021-02-12T13:45:23Z)
- Estimation of the Mean Function of Functional Data via Deep Neural Networks [6.230751621285321]
We propose a deep neural network method to perform nonparametric regression for functional data.
The proposed method is applied to analyze positron emission tomography images of patients with Alzheimer disease.
arXiv Detail & Related papers (2020-12-08T17:18:16Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
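Several of the related papers above (Likelihood-Free Parameter Estimation with Neural Bayes Estimators, DeepBayes, and Neural Networks for Parameter Estimation in Intractable Models) share the same simulate-then-train recipe: draw parameters from a prior, simulate data from the model, and fit a network that maps data back to parameters. The sketch below illustrates only that generic idea; the AR(1) model, prior ranges, and network sizes are illustrative assumptions and do not reproduce any of the cited architectures.

```python
import torch
import torch.nn as nn


def simulate_ar1(phi, sigma, T=100):
    """Simulate x_t = phi * x_{t-1} + sigma * eps_t, one series per (phi, sigma) pair."""
    x = torch.zeros(phi.shape[0], T)
    for t in range(1, T):
        x[:, t] = phi * x[:, t - 1] + sigma * torch.randn(phi.shape[0])
    return x


class PointEstimator(nn.Module):
    """Maps a simulated series directly to parameter point estimates (phi, sigma)."""

    def __init__(self, T=100, n_params=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(T, 128), nn.ReLU(),
                                 nn.Linear(128, 64), nn.ReLU(),
                                 nn.Linear(64, n_params))

    def forward(self, x):
        return self.net(x)


if __name__ == "__main__":
    torch.manual_seed(0)
    model = PointEstimator()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(1000):                     # offline training on simulations only
        phi = torch.rand(256) * 1.8 - 0.9        # prior: phi ~ U(-0.9, 0.9)
        sigma = torch.rand(256) * 0.9 + 0.1      # prior: sigma ~ U(0.1, 1.0)
        x = simulate_ar1(phi, sigma)
        loss = nn.functional.mse_loss(model(x), torch.stack([phi, sigma], dim=1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Amortized inference: a single forward pass estimates parameters for new data.
    x_obs = simulate_ar1(torch.tensor([0.7]), torch.tensor([0.5]))
    print(model(x_obs))                          # should move toward (0.7, 0.5)
```

Once trained, such an estimator is amortized: estimating parameters for new data costs a single forward pass, which is the source of the inference-time savings noted in the DeepBayes entry above.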
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.