Asymptotically efficient adaptive identification under saturated output observation
- URL: http://arxiv.org/abs/2309.09454v3
- Date: Fri, 04 Apr 2025 14:29:51 GMT
- Title: Asymptotically efficient adaptive identification under saturated output observation
- Authors: Lantian Zhang, Lei Guo
- Abstract summary: We introduce a new adaptive Newton-type algorithm on the negative log-likelihood of partially observed samples. We show that the mean square error of the estimates can achieve the Cramer-Rao bound asymptotically without resorting to i.i.d. data assumptions.
- Score: 1.9000124079328826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As saturated output observations are ubiquitous in practice, identifying stochastic systems with such nonlinear observations is a fundamental problem across various fields. This paper investigates the asymptotically efficient identification problem for stochastic dynamical systems with saturated output observations. In contrast to most of the existing results, our results do not need the commonly used but stringent conditions such as periodic or independent assumptions on the system signals, and thus do not exclude applications to stochastic feedback systems. To be specific, we introduce a new adaptive Newton-type algorithm on the negative log-likelihood of the partially observed samples using a two-step design technique. Under some general excitation data conditions, we show that the parameter estimate is strongly consistent and asymptotically normal by employing the stochastic Lyapunov function method and limit theories for martingales. Furthermore, we show that the mean square error of the estimates can achieve the Cramer-Rao bound asymptotically without resorting to i.i.d. data assumptions. This indicates that the performance of the proposed algorithm is the best possible that one can expect in general. A numerical example is provided to illustrate the superiority of our new adaptive algorithm over the existing related ones in the literature.
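To make the setting concrete, here is a minimal sketch of a recursive Newton-type estimator on the negative log-likelihood of censored (saturated) Gaussian observations. It assumes a standard Tobit-type model with known saturation levels and unit noise variance; the function names are hypothetical, and the paper's actual algorithm differs (it uses a two-step design and handles stochastic feedback), so this is only an illustration of the general idea, not the authors' method.

```python
import numpy as np
from scipy.stats import norm

def saturated_nll_grad_hess(theta, phi, y, L, U):
    """Per-sample gradient and scalar Hessian weight of the negative
    log-likelihood for a saturated Gaussian observation
    y = clip(phi @ theta + w, L, U), with w ~ N(0, 1)."""
    m = phi @ theta
    if L < y < U:                        # interior: ordinary Gaussian residual
        return -(y - m) * phi, 1.0
    if y >= U:                           # saturated at the upper limit
        z = m - U
        lam = norm.pdf(z) / norm.cdf(z)  # inverse Mills ratio
        return -lam * phi, lam * (lam + z)
    z = L - m                            # saturated at the lower limit
    lam = norm.pdf(z) / norm.cdf(z)
    return lam * phi, lam * (lam + z)

def adaptive_newton(phis, ys, L, U):
    """Recursive Newton-type estimate theta_{t+1} = theta_t - H_t^{-1} g_t,
    where H_t accumulates the scalar-weighted outer products h_s phi_s phi_s^T."""
    d = phis.shape[1]
    theta = np.zeros(d)
    H = np.eye(d)                        # regularizer; negligible asymptotically
    for phi, y in zip(phis, ys):
        g, h = saturated_nll_grad_hess(theta, phi, y, L, U)
        H += h * np.outer(phi, phi)
        theta = theta - np.linalg.solve(H, g)
    return theta
```

Because the censored log-likelihood is log-concave in the inner product, each per-sample Hessian weight is positive, which keeps the accumulated matrix invertible.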
Related papers
- Asymptotically Optimal Change Detection for Unnormalized Pre- and Post-Change Distributions [65.38208224389027]
This paper addresses the problem of detecting changes when only unnormalized pre- and post-change distributions are accessible.
Our approach is based on estimation of the Cumulative Sum (CUSUM) statistic, which is known to produce optimal performance.
arXiv Detail & Related papers (2024-10-18T17:13:29Z) - Adaptive Linear Estimating Equations [5.985204759362746]
In this paper, we propose a general method for constructing debiased estimators.
It makes use of the idea of adaptive linear estimating equations, and we establish theoretical guarantees of normality.
A salient feature of our estimator is that in the context of multi-armed bandits, our estimator retains the non-asymptotic performance.
arXiv Detail & Related papers (2023-07-14T12:55:47Z) - Instance-Optimality in Interactive Decision Making: Toward a Non-Asymptotic Theory [30.061707627742766]
We aim for instance-optimality, a strong notion of adaptivity which asserts that, on any particular problem instance, the algorithm under consideration outperforms all consistent algorithms.
In this paper, we take the first step toward developing a non-asymptotic theory of instance-optimal decision making with general function approximation.
arXiv Detail & Related papers (2023-04-24T21:51:58Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Asymptotic consistency of the WSINDy algorithm in the limit of continuum data [0.0]
We study the consistency of the weak-form sparse identification of nonlinear dynamics algorithm (WSINDy).
We provide a mathematically rigorous explanation for the observed robustness to noise of weak-form equation learning.
arXiv Detail & Related papers (2022-11-29T07:49:34Z) - Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects such as identifiability and properties of statistical estimation are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
arXiv Detail & Related papers (2022-10-12T06:46:38Z) - Off-policy estimation of linear functionals: Non-asymptotic theory for semi-parametric efficiency [59.48096489854697]
The problem of estimating a linear functional based on observational data is canonical in both the causal inference and bandit literatures.
We prove non-asymptotic upper bounds on the mean-squared error of such procedures.
We establish its instance-dependent optimality in finite samples via matching non-asymptotic local minimax lower bounds.
arXiv Detail & Related papers (2022-09-26T23:50:55Z) - Making Linear MDPs Practical via Contrastive Representation Learning [101.75885788118131]
It is common to address the curse of dimensionality in Markov decision processes (MDPs) by exploiting low-rank representations.
We consider an alternative definition of linear MDPs that automatically ensures normalization while allowing efficient representation learning.
We demonstrate superior performance over existing state-of-the-art model-based and model-free algorithms on several benchmarks.
arXiv Detail & Related papers (2022-07-14T18:18:02Z) - Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity [51.476337785345436]
We study a pessimistic variant of Q-learning in the context of finite-horizon Markov decision processes.
A variance-reduced pessimistic Q-learning algorithm is proposed to achieve near-optimal sample complexity.
arXiv Detail & Related papers (2022-02-28T15:39:36Z) - Explainable Deep Few-shot Anomaly Detection with Deviation Networks [123.46611927225963]
We introduce a novel weakly-supervised anomaly detection framework to train detection models.
The proposed approach learns discriminative normality by leveraging the labeled anomalies and a prior probability.
Our model is substantially more sample-efficient and robust, and performs significantly better than state-of-the-art competing methods in both closed-set and open-set settings.
arXiv Detail & Related papers (2021-08-01T14:33:17Z) - Non-asymptotic estimates for TUSLA algorithm for non-convex learning with applications to neural networks with ReLU activation function [3.5044892799305956]
We provide a non-asymptotic analysis for the tamed un-adjusted Langevin algorithm (TUSLA) introduced in Lovas et al.
In particular, we establish non-asymptotic error bounds for the TUSLA algorithm in the Wasserstein-1 and Wasserstein-2 distances.
We show that the TUSLA algorithm converges rapidly to the optimal solution.
arXiv Detail & Related papers (2021-07-19T07:13:02Z) - Identification and Adaptation with Binary-Valued Observations under Non-Persistent Excitation Condition [1.6897716547971817]
We propose an online projected Quasi-Newton type algorithm for parameter estimation in regression models with binary-valued observations.
We establish the strong consistency of the estimation algorithm and provide the convergence rate.
Convergence of adaptive predictors and their applications in adaptive control are also discussed.
arXiv Detail & Related papers (2021-07-08T03:57:50Z) - Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandit, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the complexity of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method trains SIVI via a rigorous lower bound on the evidence.
arXiv Detail & Related papers (2021-01-15T11:39:09Z) - Benchmarking Simulation-Based Inference [5.3898004059026325]
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods.
We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms.
We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency.
arXiv Detail & Related papers (2021-01-12T18:31:22Z) - Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
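The NLISTA idea builds on the classical ISTA iteration for sparse regression. As a hedged sketch of that baseline (learned variants replace the fixed step size and threshold with trained parameters, which is not shown here; the function name is mine):

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Iterative Shrinkage-Thresholding Algorithm for the LASSO objective
    0.5 * ||A x - b||^2 + lam * ||x||_1, with a fixed step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)                # gradient of the smooth part
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x
```

On a well-conditioned noiseless problem with a small penalty, the iterates converge to a solution close to the true sparse coefficient vector.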
arXiv Detail & Related papers (2020-10-26T11:31:08Z) - Statistical optimality and stability of tangent transform algorithms in logit models [6.9827388859232045]
We provide conditions on the data generating process to derive non-asymptotic upper bounds on the risk incurred by the variational optima.
In particular, we establish stability of the algorithm without any assumptions on the data-generating process.
We explore a special case involving a semi-orthogonal design under which a global convergence is obtained.
arXiv Detail & Related papers (2020-10-25T05:15:13Z) - An Analysis of Constant Step Size SGD in the Non-convex Regime:
Asymptotic Normality and Bias [17.199063087458907]
Structured non-convex learning problems, for which critical points have favorable statistical properties, arise frequently in statistical machine learning.
We analyze the constant step size SGD algorithm, which is widely used in practice.
arXiv Detail & Related papers (2020-06-14T13:58:44Z) - Optimizing for the Future in Non-Stationary MDPs [52.373873622008944]
We present a policy gradient algorithm that maximizes a forecast of future performance.
We show that our algorithm, called Prognosticator, is more robust to non-stationarity than two online adaptation techniques.
arXiv Detail & Related papers (2020-05-17T03:41:19Z) - Gaussian Variational State Estimation for Nonlinear State-Space Models [0.3222802562733786]
We consider the problem of state estimation, in the context of both filtering and smoothing, for nonlinear state-space models.
We develop an assumed Gaussian solution based on variational inference, which offers the key advantage of a flexible, but principled, mechanism for approximating the required distributions.
arXiv Detail & Related papers (2020-02-07T04:46:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.