Identification and Adaptation with Binary-Valued Observations under
Non-Persistent Excitation Condition
- URL: http://arxiv.org/abs/2107.03588v1
- Date: Thu, 8 Jul 2021 03:57:50 GMT
- Title: Identification and Adaptation with Binary-Valued Observations under
Non-Persistent Excitation Condition
- Authors: Lantian Zhang, Yanlong Zhao, Lei Guo
- Abstract summary: We propose an online projected Quasi-Newton type algorithm for estimation of parameter estimation of regression models with binary-valued observations.
We establish the strong consistency of the estimation algorithm and provide the convergence rate.
Convergence of adaptive predictors and their applications in adaptive control are also discussed.
- Score: 1.6897716547971817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamical systems with binary-valued observations are widely used in
the information industry, biopharmaceutical technology, and other fields.
Although much effort has been devoted to the identification of such systems,
most previous investigations are based on first-order gradient algorithms,
which usually converge much more slowly than Quasi-Newton algorithms.
Moreover, persistence of excitation (PE) conditions are usually required in
the existing literature to guarantee consistent parameter estimates, but such
conditions are hard to verify or guarantee for feedback control
systems. In this paper, we propose an online projected Quasi-Newton type
algorithm for parameter estimation of stochastic regression models with
binary-valued observations and varying thresholds. By using both the stochastic
Lyapunov function and martingale estimation methods, we establish the strong
consistency of the estimation algorithm and provide the convergence rate, under
a signal condition which is considerably weaker than the traditional PE
condition and coincides with the weakest possible excitation known for the
classical least-squares algorithm for stochastic regression models. Convergence
of adaptive predictors and their applications in adaptive control are also
discussed.
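As a rough illustration of the setup, the sketch below runs a projected quasi-Newton-type recursion on simulated data from a binary-observation model s_t = I{phi_t' theta + d_t <= c_t} with Gaussian noise and varying thresholds c_t. The projection radius, step form, and matrix update here are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

d, T = 3, 20000
theta_true = np.array([1.0, -0.5, 2.0])
sigma = 1.0       # noise standard deviation, assumed known here
radius = 5.0      # projection set {theta : ||theta|| <= radius} (assumption)

def project(theta):
    """Project onto the Euclidean ball of the assumed radius."""
    nrm = np.linalg.norm(theta)
    return theta if nrm <= radius else theta * (radius / nrm)

theta = np.zeros(d)
P = 10.0 * np.eye(d)    # quasi-Newton matrix, updated RLS-style below

for t in range(T):
    phi = rng.normal(size=d)          # regressor
    c = rng.uniform(-2.0, 2.0)        # time-varying threshold
    s = float(phi @ theta_true + sigma * rng.normal() <= c)  # binary observation

    p_hat = norm.cdf((c - phi @ theta) / sigma)  # predicted P(s = 1)

    Pphi = P @ phi                    # rank-one update of P
    P -= np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)

    # projected quasi-Newton step driven by the prediction error
    theta = project(theta - (P @ phi) * (s - p_hat))

print("estimate:", np.round(theta, 2), " true:", theta_true)
```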
Related papers
- PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates [17.777466668123886]
We introduce PROMISE (Preconditioned Optimization Methods by Incorporating Scalable Curvature Estimates), a suite of sketching-based preconditioned gradient algorithms.
PROMISE includes preconditioned versions of SVRG, SAGA, and Katyusha.
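As a hedged aside on the mechanism: a preconditioner built from a subsampled ("sketched") curvature estimate can tame ill-conditioning in stochastic gradient steps. The toy below applies that idea to ridge regression with plain SGD; PROMISE itself preconditions SVRG, SAGA, and Katyusha with randomized sketches, so this is only a caricature, and the step and sketch sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 50
A = rng.normal(size=(n, d)) @ np.diag(np.linspace(1.0, 100.0, d))  # ill-conditioned
x_true = rng.normal(size=d)
b = A @ x_true + 0.01 * rng.normal(size=n)
lam = 1e-2                                   # ridge parameter (assumption)

# "sketched" curvature: Hessian of the ridge loss estimated from a row subsample
idx = rng.choice(n, size=200, replace=False)
H_hat = A[idx].T @ A[idx] / len(idx) + lam * np.eye(d)
P_inv = np.linalg.inv(H_hat)                 # the preconditioner

x = np.zeros(d)
for t in range(2000):
    i = rng.integers(n)                      # per-sample gradient of the ridge loss
    g = A[i] * (A[i] @ x - b[i]) + lam * x
    x -= 0.5 / (1.0 + t / 200.0) * P_inv @ g # preconditioned, decaying step

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```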
arXiv Detail & Related papers (2023-09-05T07:49:10Z)
- Approximate Message Passing for the Matrix Tensor Product Model [8.206394018475708]
We propose and analyze an approximate message passing (AMP) algorithm for the matrix tensor product model.
Building upon a convergence theorem for non-separable functions, we prove a corresponding state evolution.
We leverage this state evolution result to provide necessary and sufficient conditions for recovery of the signal of interest.
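For intuition about AMP and its Onsager correction (though not about the matrix tensor product model itself), here is the classical rank-one spiked-matrix special case with a +-1 signal; the signal strength and denoiser choice are assumptions of this toy, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(2)
n, lam, T = 2000, 2.0, 25        # dimension, signal strength, AMP iterations

v = rng.choice([-1.0, 1.0], size=n)            # +-1 signal to recover
W = rng.normal(size=(n, n)) / np.sqrt(n)
W = (W + W.T) / np.sqrt(2.0)                   # GOE-like symmetric noise
Y = (lam / n) * np.outer(v, v) + W             # rank-one spiked matrix

f = lambda x: np.tanh(lam * x)                 # posterior-mean denoiser, +-1 prior
fp = lambda x: lam * (1.0 - np.tanh(lam * x) ** 2)

x_prev = np.zeros(n)
x = 0.1 * rng.normal(size=n)                   # uninformative initialization
for _ in range(T):
    x_new = Y @ f(x) - fp(x).mean() * f(x_prev)  # AMP step + Onsager correction
    x_prev, x = x, x_new

print("overlap with the signal:", abs(f(x) @ v) / n)  # high overlap for lam = 2
```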
arXiv Detail & Related papers (2023-06-27T16:03:56Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
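A toy numpy sketch of the fixed-point view: one gradient step on data fidelity plus a regularizer defines a map T, and the reconstruction is T's equilibrium. Here the learnable neural regularizer is replaced by a fixed smoothness penalty purely as a stand-in, and a 1-D circular deblurring problem stands in for hyperspectral deconvolution.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
x_true = np.zeros(n); x_true[60:90] = 1.0; x_true[150:170] = 0.5
h = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2); h /= h.sum()  # blur kernel
Hf = np.fft.rfft(h, n)                                          # circular blur
blur = lambda z: np.fft.irfft(np.fft.rfft(z) * Hf, n)
blurT = lambda z: np.fft.irfft(np.fft.rfft(z) * np.conj(Hf), n) # adjoint

y = blur(x_true) + 0.01 * rng.normal(size=n)                    # blurred, noisy data

def reg_grad(z):
    """Gradient of a quadratic smoothness penalty (stand-in for a learned net)."""
    dz = np.roll(z, -1) - z        # circular first difference D z
    return np.roll(dz, 1) - dz     # D^T D z

eta, mu = 0.8, 0.2                 # step size and regularizer weight (assumed)

def T_op(z):
    """One gradient step on data fidelity + regularizer: the equilibrium map."""
    return z - eta * (blurT(blur(z) - y) + mu * reg_grad(z))

x = np.zeros(n)
for k in range(2000):              # fixed-point iteration toward x* = T(x*)
    x_next = T_op(x)
    if np.linalg.norm(x_next - x) < 1e-9:
        break
    x = x_next

print("iterations:", k, " relative error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```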
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Stochastic Natural Thresholding Algorithms [18.131412357510158]
Natural Thresholding (NT) has been proposed for improved computational efficiency.
This paper proposes stochastic natural thresholding algorithms and provides convergence guarantees by extending the deterministic version with linear measurements.
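To illustrate the algorithm family, here is a stochastic iterative-thresholding sketch for sparse recovery from linear measurements, with plain hard thresholding standing in for the NT operator (which selects entries differently); batch size and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 300, 1000, 10                       # measurements, dimension, sparsity
A = rng.normal(size=(n, d)) / np.sqrt(n)      # near-unit-norm columns
x_true = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x_true[support] = rng.normal(size=k)
y = A @ x_true                                # noiseless linear measurements

def hard_threshold(z, k):
    """Keep the k largest-magnitude entries (stand-in for the NT operator)."""
    out = np.zeros_like(z)
    keep = np.argpartition(np.abs(z), -k)[-k:]
    out[keep] = z[keep]
    return out

x = np.zeros(d)
batch = 50
for _ in range(2000):
    rows = rng.choice(n, size=batch, replace=False)        # random row batch
    g = A[rows].T @ (A[rows] @ x - y[rows]) * (n / batch)  # unbiased gradient
    x = hard_threshold(x - g, k)                           # step, then threshold

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```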
arXiv Detail & Related papers (2023-06-07T18:49:19Z)
- Gaussian Processes with State-Dependent Noise for Stochastic Control [2.842794675894731]
The residual model uncertainty of a dynamical system is learned using a Gaussian Process (GP).
The two GPs are interdependent and are thus learned jointly using an iterative algorithm.
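This is not the paper's algorithm, but a sketch of the standard heteroscedastic-GP alternation it resembles: one GP models the signal, a second GP models the state-dependent noise level from the log squared residuals, and the two are refit in turn. The kernels, data, and three-round schedule are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(5)
X = np.sort(rng.uniform(-3.0, 3.0, 120))[:, None]
true_std = 0.05 + 0.20 * (X.ravel() > 0)          # state-dependent noise level
y = np.sin(2.0 * X.ravel()) + true_std * rng.normal(size=len(X))

alpha = np.full(len(X), 0.1 ** 2)                 # initial per-point noise variance
for _ in range(3):                                # alternate between the two GPs
    gp_mean = GaussianProcessRegressor(kernel=RBF(1.0), alpha=alpha).fit(X, y)
    resid2 = (y - gp_mean.predict(X)) ** 2
    # second GP: log squared residuals as a proxy for the log noise variance
    gp_noise = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel()).fit(
        X, np.log(resid2 + 1e-8))
    alpha = np.exp(gp_noise.predict(X))           # updated state-dependent variance

# should recover a larger noise level on the x > 0 side
print("inferred noise std at x=-2, x=+2:",
      np.exp(0.5 * gp_noise.predict(np.array([[-2.0], [2.0]]))))
```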
arXiv Detail & Related papers (2023-05-25T16:36:57Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm addresses a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
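A minimal sketch of the flavor of such schemes: each agent runs a local HMM filter on its own observations, then fuses its neighbors' posteriors by log-linear pooling over a fixed combination matrix. The three-agent chain, Gaussian emissions, and pooling rule are assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(6)
P = np.array([[0.9, 0.1], [0.2, 0.8]])            # hidden-state transitions
means = np.array([-1.0, 1.0])                     # Gaussian emission means
A_net = np.array([[0.50, 0.50, 0.00],             # row-stochastic combination
                  [0.25, 0.50, 0.25],             # weights of a 3-agent chain
                  [0.00, 0.50, 0.50]])

T = 200
state = 0
beliefs = np.full((3, 2), 0.5)                    # each agent's posterior
accuracy = 0.0
for _ in range(T):
    state = rng.choice(2, p=P[state])             # hidden chain evolves
    obs = means[state] + rng.normal(size=3)       # one noisy reading per agent
    # local HMM filter: time update, then measurement update
    lik = np.exp(-0.5 * (obs[:, None] - means[None, :]) ** 2)
    beliefs = beliefs @ P * lik
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    # diffusion: geometric (log-linear) pooling of neighbors' posteriors
    beliefs = np.exp(A_net @ np.log(beliefs + 1e-12))
    beliefs /= beliefs.sum(axis=1, keepdims=True)
    accuracy += (beliefs.argmax(axis=1) == state).mean()

print("average state-decoding accuracy:", accuracy / T)
```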
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
- A Priori Denoising Strategies for Sparse Identification of Nonlinear Dynamical Systems: A Comparative Study [68.8204255655161]
We investigate and compare the performance of several local and global smoothing techniques to a priori denoise the state measurements.
We show that, in general, global methods, which use the entire measurement data set, outperform local methods, which employ a neighboring data subset around a local point.
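A small self-contained comparison in that spirit, using scipy's Savitzky-Golay filter as the local (sliding-window) method and a smoothing spline as the global method; these particular smoothers and tuning values are our choices, not necessarily the ones benchmarked in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(7)
t = np.linspace(0, 4 * np.pi, 400)
x_clean = np.sin(t) * np.exp(-0.1 * t)
x_noisy = x_clean + 0.1 * rng.normal(size=t.size)

# local method: polynomial fit on a sliding window (Savitzky-Golay)
x_local = savgol_filter(x_noisy, window_length=31, polyorder=3)

# global method: smoothing spline fitted to the entire record at once
spline = UnivariateSpline(t, x_noisy, s=t.size * 0.1 ** 2)
x_global = spline(t)

rmse = lambda x: np.sqrt(np.mean((x - x_clean) ** 2))
print(f"noisy: {rmse(x_noisy):.4f}  local SG: {rmse(x_local):.4f}  "
      f"global spline: {rmse(x_global):.4f}")
```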
arXiv Detail & Related papers (2022-01-29T23:31:25Z)
- Riemannian classification of EEG signals with missing values [67.90148548467762]
This paper proposes two strategies to handle missing data for the classification of electroencephalograms.
The first approach estimates the covariance from imputed data with the $k$-nearest neighbors algorithm; the second relies on the observed data by leveraging the observed-data likelihood within an expectation-maximization algorithm.
As the results show, the proposed strategies perform better than classification based on observed data alone and maintain high accuracy even as the missing-data ratio increases.
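A sketch of the first strategy only (k-nearest-neighbor imputation followed by covariance estimation), run on synthetic Gaussian data with entries missing completely at random; the EM-based second strategy is omitted, and the dimensions and missing ratio are assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(8)
n, d = 500, 8
L = rng.normal(size=(d, d))
cov_true = L @ L.T / d + np.eye(d)
X = rng.multivariate_normal(np.zeros(d), cov_true, size=n)

# mask 20% of entries completely at random
mask = rng.uniform(size=X.shape) < 0.2
X_obs = X.copy(); X_obs[mask] = np.nan

# strategy 1: impute with k-nearest neighbors, then estimate the covariance
X_imp = KNNImputer(n_neighbors=5).fit_transform(X_obs)
cov_hat = np.cov(X_imp, rowvar=False)

err = np.linalg.norm(cov_hat - cov_true) / np.linalg.norm(cov_true)
print("relative covariance error after kNN imputation:", round(err, 3))
```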
arXiv Detail & Related papers (2021-10-19T14:24:50Z)
- Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the "complexity" of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Statistical optimality and stability of tangent transform algorithms in logit models [6.9827388859232045]
We provide conditions on the data-generating process to derive non-asymptotic upper bounds on the risk incurred by the logistical optima.
In particular, we establish local variation of the algorithm without any assumptions on the data-generating process.
We explore a special case involving a semi-orthogonal design under which global convergence is obtained.
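The tangent transform referred to here is the classical Jaakkola-Jordan quadratic bound on the logistic likelihood. The sketch below implements the resulting closed-form variational updates with an assumed standard-normal prior, as an illustration of the algorithm family being analyzed rather than of the paper's risk bounds.

```python
import numpy as np

rng = np.random.default_rng(9)
n, d = 400, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)

# Jaakkola-Jordan tangent bound coefficient: lambda(xi) = tanh(xi/2) / (4 xi)
lam = lambda xi: np.tanh(xi / 2.0) / (4.0 * xi)

S0_inv = np.eye(d)                       # standard-normal prior (assumption)
xi = np.ones(n)                          # variational tilt parameters
for _ in range(50):
    S_inv = S0_inv + 2.0 * (X.T * lam(xi)) @ X   # Gaussian posterior precision
    S = np.linalg.inv(S_inv)
    m = S @ (X.T @ (y - 0.5))                    # Gaussian posterior mean
    # xi update: xi_i^2 = x_i^T (S + m m^T) x_i
    xi = np.sqrt(np.sum((X @ (S + np.outer(m, m))) * X, axis=1))

print("variational posterior mean:", np.round(m, 2), " true:", w_true)
```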
arXiv Detail & Related papers (2020-10-25T05:15:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.