Regret-Optimal Filtering
- URL: http://arxiv.org/abs/2101.10357v1
- Date: Mon, 25 Jan 2021 19:06:52 GMT
- Title: Regret-Optimal Filtering
- Authors: Oron Sabag, Babak Hassibi
- Abstract summary: We consider the problem of filtering in linear state-space models through the lens of regret optimization.
We formulate a novel criterion for filter design based on the concept of regret between the estimation error energy of a clairvoyant estimator and that of a causal one.
We show that the regret-optimal estimator can be easily implemented by solving three Riccati equations and a single Lyapunov equation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of filtering in linear state-space models (e.g., the
Kalman filter setting) through the lens of regret optimization. Different
assumptions on the driving disturbance and the observation noise sequences give
rise to different estimators: in the stochastic setting to the celebrated
Kalman filter, and in the deterministic setting of bounded energy disturbances
to $H_\infty$ estimators. In this work, we formulate a novel criterion for
filter design based on the concept of regret between the estimation error
energy of a clairvoyant estimator that has access to all future observations (a
so-called smoother) and a causal one that only has access to current and past
observations. The regret-optimal estimator is chosen to minimize this
worst-case difference across all bounded-energy noise sequences. The resulting
estimator is adaptive in the sense that it aims to mimic the behavior of the
clairvoyant estimator, irrespective of what the realization of the noise will
be and thus interpolates between the stochastic and deterministic approaches.
We provide a solution for the regret-optimal estimation problem at two different
levels. First, we provide a solution at the operator level by reducing it to
the Nehari problem. Second, for state-space models, we explicitly find the
estimator that achieves the optimal regret. From a computational perspective,
the regret-optimal estimator can be easily implemented by solving three Riccati
equations and a single Lyapunov equation. For a state-space model of dimension
$n$, the regret-optimal estimator has a state-space structure of dimension
$3n$. We demonstrate the applicability and efficacy of the estimator in a
variety of problems and observe that the estimator has average and worst-case
performances that are simultaneously close to their optimal values. We
therefore argue that regret-optimality is a viable approach to estimator
design.
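Read as a formula, the regret criterion above compares a causal filter against the clairvoyant smoother on the same noise realization. One way to write it, paraphrasing the abstract (the paper's exact normalization may differ), is

$$\mathrm{regret} = \sup_{w \neq 0} \frac{\|e_{\mathrm{causal}}(w)\|^2 - \|e_{\mathrm{smoother}}(w)\|^2}{\|w\|^2},$$

where $w$ stacks the driving disturbance and observation noise, and $e_{\mathrm{causal}}(w)$, $e_{\mathrm{smoother}}(w)$ are the estimation errors incurred by the causal filter and the noncausal smoother under that realization.

On the computational side, discrete-time Riccati and Lyapunov equations have off-the-shelf solvers, so the recipe in the abstract maps onto standard routines. The sketch below shows only these building blocks in SciPy on toy placeholder matrices `A`, `C`, `Q`, `R`; which three Riccati equations are solved and how their solutions are assembled into the $3n$-dimensional estimator follows the construction in the paper and is not reproduced here.

```python
# Building blocks for a Riccati/Lyapunov-based filter, as a minimal sketch.
# NOTE: all matrices are illustrative; the specific equations and their
# assembly into the regret-optimal estimator are given in the paper.
import numpy as np
from scipy.linalg import solve_discrete_are, solve_discrete_lyapunov

n, m = 2, 1                      # state and measurement dimensions (toy values)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])       # state-transition matrix
C = np.array([[1.0, 0.0]])       # measurement matrix
Q = np.eye(n)                    # process-noise weight
R = np.eye(m)                    # measurement-noise weight

# Filtering-form discrete algebraic Riccati equation,
#   P = A P A^T - A P C^T (C P C^T + R)^{-1} C P A^T + Q,
# obtained from SciPy's control-form DARE by duality (hence the transposes):
P = solve_discrete_are(A.T, C.T, Q, R)

# Discrete Lyapunov equation X = A X A^T + Q:
X = solve_discrete_lyapunov(A, Q)

# The estimator gain associated with P (Kalman-style), one typical ingredient:
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
print(P.shape, X.shape, K.shape)  # (2, 2) (2, 2) (2, 1)
```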
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime (arXiv 2024-11-21)
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise considered.
- Stochastic variance-reduced Gaussian variational inference on the Bures-Wasserstein manifold (arXiv 2024-10-03)
We propose a novel variance-reduced estimator for optimization in the Bures-Wasserstein space.
We show that this estimator has a smaller variance than the Monte Carlo estimator in scenarios of interest.
We demonstrate that the proposed estimator achieves order-of-magnitude improvements over previous Bures-Wasserstein methods.
- Distributed Estimation and Inference for Semi-parametric Binary Response Models (arXiv 2022-10-15)
This paper studies the maximum score estimator of a semi-parametric binary choice model under a distributed computing environment.
An intuitive divide-and-conquer estimator is computationally expensive and restricted by a non-regular constraint on the number of machines.
- Wasserstein Distributionally Robust Estimation in High Dimensions: Performance Analysis and Optimal Hyperparameter Tuning (arXiv 2022-06-27)
We propose a Wasserstein distributionally robust estimation framework to estimate an unknown parameter from noisy linear measurements.
We focus on the task of analyzing the squared error performance of such estimators.
We show that the squared error can be recovered as the solution of a convex-concave optimization problem.
- Deep Equilibrium Optical Flow Estimation (arXiv 2022-04-18)
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to perform stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
- Regret-optimal Estimation and Control (arXiv 2021-06-22)
We show that the regret-optimal estimator and regret-optimal controller can be derived in state-space form.
We propose regret-optimal analogs of Model-Predictive Control (MPC) and the Extended Kalman Filter (EKF) for systems with nonlinear dynamics.
- Optimal Algorithms for Stochastic Multi-Armed Bandits with Heavy Tailed Rewards (arXiv 2020-10-24)
We consider multi-armed bandits with heavy-tailed rewards, whose $p$-th moment is bounded by a constant $\nu_p$ for $1 < p \leq 2$.
We propose a novel robust estimator which does not require $\nu_p$ as prior information.
We show that the error probability of the proposed estimator decays exponentially fast.
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models (arXiv 2020-04-01)
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost (a toy sketch of the randomized-truncation idea appears after this list).
- Support recovery and sup-norm convergence rates for sparse pivotal estimation (arXiv 2020-01-15)
In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.
We show minimax sup-norm convergence rates for non-smoothed and smoothed, single-task and multi-task square-root Lasso-type estimators.
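As a concrete illustration of the randomized-truncation idea behind the SUMO entry above: if a quantity of interest is an infinite series $\sum_{k \geq 0} \Delta_k$, truncating at a random index and reweighting each kept term by its probability of being kept yields an unbiased estimate from finitely many terms. The sketch below is a toy version under our own assumptions (a geometric stopping rule and a synthetic series), not SUMO's estimator itself, which applies this idea to a series converging to the log marginal likelihood.

```python
# Toy "Russian roulette" estimator: an unbiased estimate of an infinite series
# sum_{k>=0} delta(k) using a random truncation index. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def randomized_truncation_estimate(delta, stop_prob=0.5):
    """Unbiased estimate of sum_{k>=0} delta(k).

    Term k survives truncation with probability (1 - stop_prob)**k, so
    weighting it by the inverse of that probability preserves the expectation.
    """
    total, k = 0.0, 0
    while True:
        total += delta(k) / (1.0 - stop_prob) ** k  # weight = 1 / P(keep term k)
        if rng.random() < stop_prob:                # stop after this term
            return total
        k += 1

# Example: sum_{k>=0} 0.5**k = 2; the average of many estimates approaches it.
estimates = [randomized_truncation_estimate(lambda k: 0.5 ** k) for _ in range(20000)]
print(np.mean(estimates))  # ~2.0
```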