Deep subspace encoders for continuous-time state-space identification
- URL: http://arxiv.org/abs/2204.09405v1
- Date: Wed, 20 Apr 2022 11:55:17 GMT
- Title: Deep subspace encoders for continuous-time state-space identification
- Authors: Gerben Izaak Beintema, Maarten Schoukens and Roland Tóth
- Abstract summary: Continuous-time (CT) models have shown an improved sample efficiency during learning.
The multifaceted CT state-space model identification problem remains to be solved in full.
This paper presents a novel estimation method that includes these aspects and that is able to obtain state-of-the-art results.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continuous-time (CT) models have shown an improved sample efficiency during
learning and enable ODE analysis methods for enhanced interpretability compared
to discrete-time (DT) models. Even with numerous recent developments, the
multifaceted CT state-space model identification problem remains to be solved
in full, considering common experimental aspects such as the presence of
external inputs, measurement noise, and latent states. This paper presents a
novel estimation method that includes these aspects and that is able to obtain
state-of-the-art results on multiple benchmarks where a small fully connected
neural network describes the CT dynamics. The proposed method, called the
subspace encoder approach, achieves these results by replacing the
well-known simulation loss with a loss computed over short subsections of
the data, and by using an encoder function together with a state-derivative
normalization term to obtain a computationally feasible and stable
optimization problem. The encoder function estimates the initial state of
each considered subsection. Using established properties of ODEs, we prove
that Lipschitz continuity of the state derivative is a necessary condition
for the existence of the encoder function.
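The core idea of the abstract can be sketched in a few lines: instead of simulating the whole record, sample short subsections, let an encoder estimate each subsection's initial state, and accumulate the simulation error. The sketch below is illustrative only, assuming forward-Euler integration and placeholder `encoder`, `f`, and `h` functions; it is not the authors' implementation.

```python
import numpy as np

def truncated_simulation_loss(y, u, encoder, f, h, n_sections, T, dt, tau=1.0):
    """Simulation loss over short subsections instead of the full record.

    encoder(y_t, u_t) -> x0 estimates a subsection's initial state,
    f(x, u) is the learned CT state derivative, h(x) the output map.
    tau plays the role of the state-derivative normalization term:
    x' = tau * f(x, u), which helps stabilize the optimization.
    """
    N = len(y)
    starts = np.random.randint(0, N - T, size=n_sections)
    loss = 0.0
    for t0 in starts:
        # Encoder estimates the subsection's initial state from the data.
        x = encoder(y[t0], u[t0])
        for k in range(T):
            # Accumulate output error, then forward-Euler step the dynamics.
            loss += (h(x) - y[t0 + k]) ** 2
            x = x + dt * tau * f(x, u[t0 + k])
    return loss / (n_sections * T)
```

In practice the encoder would consume a window of past inputs and outputs and `f`, `h` would be small neural networks trained jointly with it.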
Related papers
- DiffuSeq-v2: Bridging Discrete and Continuous Text Spaces for Accelerated Seq2Seq Diffusion Models [58.450152413700586]
We introduce a soft absorbing state that facilitates the diffusion model in learning to reconstruct discrete mutations based on the underlying Gaussian space.
We employ state-of-the-art ODE solvers within the continuous space to expedite the sampling process.
Our proposed method effectively accelerates the training convergence by 4x and generates samples of similar quality 800x faster.
arXiv Detail & Related papers (2023-10-09T15:29:10Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm performs distributed Bayesian filtering for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
arXiv Detail & Related papers (2022-12-05T19:40:17Z) - Validation Diagnostics for SBI algorithms based on Normalizing Flows [55.41644538483948]
This work proposes easy to interpret validation diagnostics for multi-dimensional conditional (posterior) density estimators based on NF.
It also offers theoretical guarantees based on results of local consistency.
This work should help the design of better specified models or drive the development of novel SBI-algorithms.
arXiv Detail & Related papers (2022-11-17T15:48:06Z) - Deep Subspace Encoders for Nonlinear System Identification [0.0]
We propose a method which uses a truncated prediction loss and a subspace encoder for state estimation.
We show that, under mild conditions, the proposed method is locally consistent, increases optimization stability, and achieves increased data efficiency.
arXiv Detail & Related papers (2022-10-26T16:04:38Z) - Online Time Series Anomaly Detection with State Space Gaussian Processes [12.483273106706623]
R-ssGPFA is an unsupervised online anomaly detection model for uni- and multivariate time series.
For high-dimensional time series, we propose an extension of Gaussian process factor analysis to identify the common latent processes of the time series.
Our model's robustness is improved by using a simple heuristic to skip Kalman updates when encountering anomalous observations.
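The skip-update heuristic can be illustrated with a minimal scalar Kalman filter: gate each observation by its normalized innovation and skip the update step when it exceeds a threshold. All names and parameter values below are illustrative assumptions, not the R-ssGPFA implementation.

```python
import numpy as np

def robust_kalman(ys, a=1.0, q=0.1, r=0.5, gate=3.0):
    """Scalar Kalman filter that skips updates on anomalous observations.

    a: state transition, q: process noise, r: measurement noise,
    gate: threshold on the normalized innovation (in standard deviations).
    """
    x, p = 0.0, 1.0
    states, flags = [], []
    for y in ys:
        # Predict step.
        x, p = a * x, a * a * p + q
        s = p + r                       # innovation variance
        nu = y - x                      # innovation
        if abs(nu) / np.sqrt(s) > gate:
            flags.append(True)          # anomalous: skip the update
        else:
            k = p / s
            x, p = x + k * nu, (1 - k) * p
            flags.append(False)
        states.append(x)
    return np.array(states), np.array(flags)
```

Skipping the update leaves the state estimate driven by the prediction alone, so a single outlier cannot drag the filter off course.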
arXiv Detail & Related papers (2022-01-18T06:43:32Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Long-time integration of parametric evolution equations with physics-informed DeepONets [0.0]
We introduce an effective framework for learning infinite-dimensional operators that map random initial conditions to associated PDE solutions within a short time interval.
Global long-time predictions across a range of initial conditions can be then obtained by iteratively evaluating the trained model.
This introduces a new approach to temporal domain decomposition that is shown to be effective in performing accurate long-time simulations.
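The temporal domain decomposition described above amounts to a simple loop: a model trained to advance an initial condition over one short interval is applied repeatedly, feeding each output back in as the next initial condition. The sketch below assumes a generic `short_time_model` callable standing in for a trained (e.g. DeepONet-style) operator.

```python
import numpy as np

def long_time_rollout(short_time_model, u0, n_steps):
    """Iteratively apply a short-time solution operator for long-time prediction.

    short_time_model: maps an initial condition to the solution one short
    interval later. u0: initial condition. Returns the full trajectory.
    """
    traj = [u0]
    u = u0
    for _ in range(n_steps):
        u = short_time_model(u)   # advance one short interval
        traj.append(u)
    return np.stack(traj)
```

For the toy ODE u' = -u, the exact short-time operator is u -> u * exp(-dt), and iterating it recovers the exact long-time decay; a learned operator would be used the same way.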
arXiv Detail & Related papers (2021-06-09T20:46:17Z) - Semi-supervised deep learning for high-dimensional uncertainty quantification [6.910275451003041]
This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis.
An autoencoder is first adopted for mapping the high-dimensional space into a low-dimensional latent space.
A deep feedforward neural network is utilized to learn the mapping relationship and reconstruct the latent space.
arXiv Detail & Related papers (2020-06-01T15:15:42Z) - Convergence and sample complexity of gradient methods for the model-free linear quadratic regulator problem [27.09339991866556]
We consider gradient methods that seek an optimal controller for an unknown dynamical system by directly searching over the corresponding space of controllers.
We take a step towards demystifying the performance and efficiency of such methods by focusing on the gradient-flow dynamics over the set of stabilizing feedback gains; a similar result holds for the forward discretization of the ODE.
arXiv Detail & Related papers (2019-12-26T16:56:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.