Nonlinear Discrete-time Systems' Identification without Persistence of
Excitation: A Finite-time Concurrent Learning
- URL: http://arxiv.org/abs/2112.07765v1
- Date: Tue, 14 Dec 2021 22:19:20 GMT
- Title: Nonlinear Discrete-time Systems' Identification without Persistence of
Excitation: A Finite-time Concurrent Learning
- Authors: Farzaneh Tatari, Christos Panayiotou, Marios Polycarpou
- Abstract summary: A finite-time concurrent learning approach is presented to approximate the uncertainties of the discrete-time nonlinear systems.
Rigorous proofs guarantee the finite-time convergence of the estimated parameters to their optimal values.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of finite-time learning of unknown
discrete-time nonlinear system dynamics without requiring persistence of
excitation. A finite-time concurrent learning approach is presented that
approximates the uncertainties of discrete-time nonlinear systems online,
employing current data together with recorded experience data that satisfy an
easy-to-check rank condition on their richness, a condition less restrictive
than persistence of excitation. Rigorous proofs based on a discrete-time
Lyapunov analysis guarantee finite-time convergence of the estimated
parameters to their optimal values. Simulation results illustrate that,
compared with existing methods in the literature, the proposed approach
approximates the uncertainties promptly and precisely.
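The concurrent-learning idea in the abstract can be sketched in a few lines. The following is a minimal, hypothetical example: the regressor phi, the gain gamma, and the plain gradient-style update are illustrative assumptions, not the paper's finite-time law. It combines the current measurement with a recorded-data stack whose rank condition (in place of persistence of excitation) is checked explicitly.

```python
import numpy as np

# Sketch: estimate unknown parameters W of y_k = W^T phi(x_k) by combining
# the current data point with a stack of recorded past data.

rng = np.random.default_rng(0)
W_true = np.array([1.5, -0.7, 0.3])  # unknown "true" parameters

def phi(x):
    """Illustrative regressor vector for a scalar state x."""
    return np.array([1.0, x, x**2])

# Record past (regressor, output) pairs; the stacked regressors must
# satisfy the rank condition rank(Phi) = dim(W) -- the easy-to-check
# richness condition that replaces persistence of excitation.
xs = rng.uniform(-1.0, 1.0, size=10)
Phi = np.stack([phi(x) for x in xs])            # 10 x 3 recorded stack
Y = Phi @ W_true
assert np.linalg.matrix_rank(Phi) == W_true.size

W_hat = np.zeros(3)
gamma = 0.05                                    # learning gain (illustrative)
for k in range(2000):
    x_k = rng.uniform(-1.0, 1.0)                # current measurement
    y_k = phi(x_k) @ W_true
    e_cur = phi(x_k) * (phi(x_k) @ W_hat - y_k) # current-data term
    e_rec = Phi.T @ (Phi @ W_hat - Y)           # recorded-data term
    W_hat -= gamma * (e_cur + e_rec)

print(np.round(W_hat, 4))
```

Because the recorded stack has full column rank, the recorded-data term alone drives the parameter error to zero even when the current regressor is not persistently exciting; the paper's contribution is a variant of this update with finite-time (rather than asymptotic) convergence guarantees.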
Related papers
- A projected nonlinear state-space model for forecasting time series
signals [0.6537685198688538]
We propose a fast algorithm to learn and forecast nonlinear dynamics from noisy time series data.
A key feature of the proposed model is kernel functions applied to projected lines, enabling fast capture of nonlinearities in the latent dynamics.
arXiv Detail & Related papers (2023-11-22T09:05:37Z) - Probabilistic Learning of Multivariate Time Series with Temporal
Irregularity [25.91078012394032]
We address multivariate time series exhibiting temporal irregularities, including nonuniform time intervals and component misalignment.
We develop a conditional flow representation to non-parametrically represent the data distribution, which is typically non-Gaussian.
The broad applicability and superiority of the proposed solution are confirmed by comparing it with existing approaches through ablation studies and testing on real-world datasets.
arXiv Detail & Related papers (2023-06-15T14:08:48Z) - Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations.
arXiv Detail & Related papers (2023-05-24T20:43:47Z) - Bayesian Spline Learning for Equation Discovery of Nonlinear Dynamics
with Quantified Uncertainty [8.815974147041048]
We develop a novel framework to identify parsimonious governing equations of nonlinear (spatiotemporal) dynamics from sparse, noisy data with quantified uncertainty.
The proposed algorithm is evaluated on multiple nonlinear dynamical systems governed by canonical ordinary and partial differential equations.
arXiv Detail & Related papers (2022-10-14T20:37:36Z) - Continuous-Time Modeling of Counterfactual Outcomes Using Neural
Controlled Differential Equations [84.42837346400151]
Estimating counterfactual outcomes over time has the potential to unlock personalized healthcare.
Existing causal inference approaches consider regular, discrete-time intervals between observations and treatment decisions.
We propose a controllable simulation environment based on a model of tumor growth for a range of scenarios.
arXiv Detail & Related papers (2022-06-16T17:15:15Z) - Pessimistic Q-Learning for Offline Reinforcement Learning: Towards
Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity.
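The pessimism principle behind that paper can be illustrated with a toy tabular example. This is a hedged sketch of a generic lower-confidence-bound Q-update, not the paper's variance-reduced algorithm; the bonus form `c / sqrt(N)` and the constant `c_bonus` are illustrative assumptions.

```python
import numpy as np

# Sketch: offline Q-learning with a count-based pessimism penalty, so
# rarely visited state-action pairs are valued conservatively.

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(1)

# Tiny synthetic MDP and a fixed offline dataset of (s, a, r, s') tuples.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
data = []
for _ in range(5000):
    s, a = rng.integers(n_states), rng.integers(n_actions)
    data.append((s, a, R[s, a], rng.choice(n_states, p=P[s, a])))

Q = np.zeros((n_states, n_actions))
N = np.zeros((n_states, n_actions))             # visit counts
c_bonus = 0.5                                   # illustrative constant
for s, a, r, s2 in data:
    N[s, a] += 1
    lr = 1.0 / N[s, a]
    penalty = c_bonus / np.sqrt(N[s, a])        # pessimistic penalty
    target = r + gamma * Q[s2].max() - penalty
    Q[s, a] += lr * (target - Q[s, a])

print(np.round(Q, 2))
```

Subtracting the penalty inside the target biases every estimate downward by an amount that shrinks with the visit count, which is the mechanism that lets offline Q-learning avoid over-valuing actions the dataset barely covers.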
arXiv Detail & Related papers (2022-02-28T15:39:36Z) - A Priori Denoising Strategies for Sparse Identification of Nonlinear
Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
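The global-versus-local contrast can be demonstrated on synthetic data. This is a toy illustration, not the paper's benchmark: the sine signal, noise level, polynomial degree, and window size are all arbitrary choices, with a whole-record polynomial fit standing in for the global methods and a moving average for the local ones.

```python
import numpy as np

# Sketch: denoise a noisy sine with a global smoother (least-squares
# polynomial over the entire record) vs. a local smoother (centered
# moving average over a 5-sample neighborhood).

rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0 * np.pi, 200)
clean = np.sin(t)
noisy = clean + 0.2 * rng.standard_normal(t.size)

# Global: one polynomial fit uses every measurement at once.
coeffs = np.polyfit(t, noisy, deg=7)
global_fit = np.polyval(coeffs, t)

# Local: each point is smoothed using only its neighbors.
kernel = np.ones(5) / 5
local_fit = np.convolve(noisy, kernel, mode="same")

err_global = np.sqrt(np.mean((global_fit - clean) ** 2))
err_local = np.sqrt(np.mean((local_fit - clean) ** 2))
print(round(err_global, 3), round(err_local, 3))
```

The global fit averages the noise over all 200 samples, while the moving average can only average over its 5-sample window, which is one intuition for why global methods tend to win in the comparison above.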
arXiv Detail & Related papers (2022-01-29T23:31:25Z) - Uncertainty-Aware Multiple Instance Learning from Large-Scale Long Time
Series Data [20.2087807816461]
This paper proposes an uncertainty-aware multiple instance learning (MIL) framework to identify the most relevant period automatically.
We further incorporate another modality to accommodate unreliable predictions by training a separate model and conducting uncertainty-aware fusion.
Empirical results demonstrate that the proposed method can effectively detect the types of vessels based on their trajectories.
arXiv Detail & Related papers (2021-11-16T17:09:02Z) - Concurrent Learning Based Tracking Control of Nonlinear Systems using
Gaussian Process [2.7930955543692817]
This paper demonstrates the applicability of the combination of concurrent learning as a tool for parameter estimation and non-parametric Gaussian Process for online disturbance learning.
A control law is developed by using both techniques sequentially in the context of feedback linearization.
The closed-loop system stability for the nth-order system is proven using the Lyapunov stability theorem.
arXiv Detail & Related papers (2021-06-02T02:59:48Z) - The Connection between Discrete- and Continuous-Time Descriptions of
Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time-series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z) - Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage-Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.