Nonlinear state-space identification using deep encoder networks
- URL: http://arxiv.org/abs/2012.07697v2
- Date: Wed, 28 Apr 2021 08:18:15 GMT
- Title: Nonlinear state-space identification using deep encoder networks
- Authors: Gerben Beintema, Roland Toth and Maarten Schoukens
- Abstract summary: This paper introduces a method that approximates the simulation loss by splitting the data set into multiple independent sections, similar to the multiple shooting method.
The main contribution is the introduction of an encoder function to estimate the initial state at the start of each section.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Nonlinear state-space identification for dynamical systems is most often
performed by minimizing the simulation error to reduce the effect of model
errors. This optimization problem becomes computationally expensive for large
datasets. Moreover, the problem is also strongly non-convex, often leading to
sub-optimal parameter estimates. This paper introduces a method that
approximates the simulation loss by splitting the data set into multiple
independent sections, similar to the multiple shooting method. This splitting
operation allows for the use of stochastic gradient optimization methods, which
scale well with data set size, and it has a smoothing effect on the non-convex cost
function. The main contribution of this paper is the introduction of an encoder
function to estimate the initial state at the start of each section. The
encoder function estimates the initial states using a feed-forward neural
network starting from historical input and output samples. The efficiency and
performance of the proposed state-space encoder method are illustrated on two
well-known benchmarks where, for instance, the method achieves the lowest known
simulation error on the Wiener--Hammerstein benchmark.
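As a rough illustration of the approach described in the abstract, the following is a minimal PyTorch sketch of the section-splitting and encoder idea: a feed-forward encoder maps a window of past inputs and outputs to an initial state, the model is simulated over a short section, and the truncated simulation error is minimized with a stochastic gradient method. All sizes, architectures, and names (`encoder`, `f_net`, `h_net`, `n_past`, `T`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the state-space encoder idea (not the authors' code).
import torch
import torch.nn as nn

n_past, T, nx, nu, ny = 20, 50, 4, 1, 1   # illustrative sizes (assumed)

# Encoder: window of past inputs/outputs -> initial state of a section.
encoder = nn.Sequential(nn.Linear(n_past * (nu + ny), 64), nn.Tanh(), nn.Linear(64, nx))
f_net = nn.Sequential(nn.Linear(nx + nu, 64), nn.Tanh(), nn.Linear(64, nx))  # state transition
h_net = nn.Sequential(nn.Linear(nx, 64), nn.Tanh(), nn.Linear(64, ny))       # output map

def section_loss(u_past, y_past, u_future, y_future):
    # Truncated simulation loss over a batch of independent sections.
    x = encoder(torch.cat([u_past.flatten(1), y_past.flatten(1)], dim=1))
    loss = 0.0
    for t in range(T):
        y_hat = h_net(x)
        loss = loss + ((y_hat - y_future[:, t]) ** 2).mean()
        x = f_net(torch.cat([x, u_future[:, t]], dim=1))   # simulate one step
    return loss / T

params = list(encoder.parameters()) + list(f_net.parameters()) + list(h_net.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# One stochastic gradient step on a random batch of sections (dummy data).
B = 32
u_p, y_p = torch.randn(B, n_past, nu), torch.randn(B, n_past, ny)
u_f, y_f = torch.randn(B, T, nu), torch.randn(B, T, ny)
opt.zero_grad()
section_loss(u_p, y_p, u_f, y_f).backward()
opt.step()
```

Because each section carries its own encoder-estimated initial state, mini-batches of sections can be sampled independently, which is what makes plain stochastic gradient updates applicable here.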
Related papers
- Efficient Prior Calibration From Indirect Data [5.588334720483076]
This paper is concerned with learning the prior model from data, in particular, learning the prior from multiple realizations of indirect data obtained through the noisy observation process.
An efficient residual-based neural operator approximation of the forward model is proposed and it is shown that this may be learned concurrently with the pushforward map.
arXiv Detail & Related papers (2024-05-28T08:34:41Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of the real data and the artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
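To make the noise-contrastive objective in the entry above concrete, here is a minimal hedged sketch with a toy one-dimensional unnormalized Gaussian model and a known noise distribution; the parameterization (`theta`, `log_c`) and the noise choice are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of the NCE logistic loss for an unnormalized model:
# classify real samples against artificial noise samples.
import torch
import torch.nn.functional as F

theta = torch.randn(1, requires_grad=True)   # toy model parameter (assumed)
log_c = torch.zeros(1, requires_grad=True)   # learned log normalizing constant

def log_phi(x):
    # Unnormalized log-density: log phi(x) = -theta * x^2 / 2 - log_c
    return -theta * x ** 2 / 2 - log_c

noise = torch.distributions.Normal(0.0, 2.0)  # known noise distribution q

def nce_loss(x_data, x_noise):
    # Log-ratio G(x) = log phi(x) - log q(x); logistic loss on data vs noise.
    g_data = log_phi(x_data) - noise.log_prob(x_data)
    g_noise = log_phi(x_noise) - noise.log_prob(x_noise)
    return -(F.logsigmoid(g_data).mean() + F.logsigmoid(-g_noise).mean())

x_data = torch.randn(1024)        # stand-in for real data
x_noise = noise.sample((1024,))
nce_loss(x_data, x_noise).backward()
```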
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
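As a hedged illustration of the fixed-point view in the entry above: instead of running a fixed number of layers, a Deep Equilibrium model iterates a learned update until it reaches an equilibrium z* = f(z*, x). The layer `f`, the dimensions, and the stopping rule below are assumptions for illustration; convergence requires conditions (such as those the paper establishes) that a random `f` does not guarantee.

```python
# Hedged sketch of a Deep Equilibrium forward pass via fixed-point iteration.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(16 + 16, 64), nn.Tanh(), nn.Linear(64, 16))

def forward_fixed_point(x, tol=1e-4, max_iter=100):
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(torch.cat([z, x], dim=-1))
        if (z_next - z).norm() < tol * (1 + z.norm()):
            return z_next  # reached the equilibrium z* = f(z*, x)
        z = z_next
    return z

x = torch.randn(8, 16)
z_star = forward_fixed_point(x)
```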
- Initialization Approach for Nonlinear State-Space Identification via the Subspace Encoder Approach [0.0]
SUBNET has been developed to identify nonlinear state-space models from input-output data.
A state encoder function is introduced to reconstruct the current state from past input-output data.
This paper focuses on an initialisation of the subspace encoder approach using the Best Linear Approximation (BLA).
arXiv Detail & Related papers (2023-04-04T20:57:34Z)
- Deep Subspace Encoders for Nonlinear System Identification [0.0]
We propose a method which uses a truncated prediction loss and a subspace encoder for state estimation.
We show that, under mild conditions, the proposed method is locally consistent, increases optimization stability, and achieves increased data efficiency.
arXiv Detail & Related papers (2022-10-26T16:04:38Z)
- Amortized Bayesian Inference of GISAXS Data with Normalizing Flows [0.10752246796855561]
We propose a simulation-based framework that combines variational auto-encoders and normalizing flows to estimate the posterior distribution of object parameters.
We demonstrate that our method reduces the inference cost by orders of magnitude while producing results consistent with ABC.
arXiv Detail & Related papers (2022-10-04T12:09:57Z)
- An Accelerated Doubly Stochastic Gradient Method with Faster Explicit Model Identification [97.28167655721766]
We propose a novel doubly accelerated gradient descent (ADSGD) method for sparsity regularized loss minimization problems.
We first prove that ADSGD can achieve a linear convergence rate and lower overall computational complexity.
arXiv Detail & Related papers (2022-08-11T22:27:22Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
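A small sketch of the local-versus-global distinction discussed in the entry above, assuming a moving-average filter as the local method and a smoothing spline fitted to the entire record as the global method; the specific smoothers compared in the paper may differ.

```python
# Hedged sketch: a local vs a global a-priori smoother for noisy measurements.
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.interpolate import UnivariateSpline

t = np.linspace(0, 10, 500)
y_true = np.sin(t)
y_noisy = y_true + 0.1 * np.random.randn(t.size)

# Local method: moving average over a small neighborhood of each point.
y_local = uniform_filter1d(y_noisy, size=15)

# Global method: a smoothing spline fitted to the entire measurement record.
y_global = UnivariateSpline(t, y_noisy, s=t.size * 0.1**2)(t)

for name, y in [("local", y_local), ("global", y_global)]:
    print(name, "RMSE:", np.sqrt(np.mean((y - y_true) ** 2)))
```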
- Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in systems describing complex spatiotemporal processes.
The proposed method achieves highly accurate approximations, which ultimately leads to a significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z)
- Data-driven Weight Initialization with Sylvester Solvers [72.11163104763071]
We propose a data-driven scheme to initialize the parameters of a deep neural network.
We show that our proposed method is especially effective in few-shot and fine-tuning settings.
arXiv Detail & Related papers (2021-05-02T07:33:16Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
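As a hedged sketch of the objective named in the entry above, the following evaluates a reconstruction error plus a row-wise $l_{2,p}$ penalty on the projection matrix W; rows of W driven to zero correspond to discarded features. The exact formulation and optimization algorithm in the paper may differ.

```python
# Hedged sketch: reconstruction error + row-wise l_{2,p} penalty for
# unsupervised feature selection (illustrative, not the paper's code).
import numpy as np

def l2p_sparse_pca_objective(X, W, lam=1.0, p=0.5):
    # X: (n, d) data, W: (d, k) projection; smaller p encourages sparser rows.
    recon = np.linalg.norm(X - X @ W @ W.T, "fro") ** 2
    row_norms = np.linalg.norm(W, axis=1)    # l2 norm of each row of W
    penalty = np.sum(row_norms ** p)         # l_{2,p} regularizer
    return recon + lam * penalty

X = np.random.randn(100, 20)
W = np.linalg.qr(np.random.randn(20, 5))[0]  # random orthonormal projection
print(l2p_sparse_pca_objective(X, W))
```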
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.