SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory
- URL: http://arxiv.org/abs/2010.05899v1
- Date: Mon, 12 Oct 2020 17:50:21 GMT
- Title: SLIP: Learning to Predict in Unknown Dynamical Systems with Long-Term Memory
- Authors: Paria Rashidinejad, Jiantao Jiao, Stuart Russell
- Abstract summary: We present an efficient and practical (polynomial time) algorithm for online prediction in unknown and partially observed linear dynamical systems.
Our algorithm competes with the Kalman filter in hindsight with only logarithmic regret.
Our theoretical and experimental results shed light on the conditions required for efficient probably approximately correct (PAC) learning of the Kalman filter from partially observed data.
- Score: 21.09861411069719
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an efficient and practical (polynomial time) algorithm for online
prediction in unknown and partially observed linear dynamical systems (LDS)
under stochastic noise. When the system parameters are known, the optimal
linear predictor is the Kalman filter. However, the performance of existing
predictive models is poor in important classes of LDS that are only marginally
stable and exhibit long-term forecast memory. We tackle this problem through
bounding the generalized Kolmogorov width of the Kalman filter model by
spectral methods and conducting tight convex relaxation. We provide a
finite-sample analysis, showing that our algorithm competes with the Kalman
filter in hindsight with only logarithmic regret. Our regret analysis relies on
Mendelson's small-ball method, providing sharp error bounds without
concentration, boundedness, or exponential forgetting assumptions. We also give
experimental results demonstrating that our algorithm outperforms
state-of-the-art methods. Our theoretical and experimental results shed light
on the conditions required for efficient probably approximately correct (PAC)
learning of the Kalman filter from partially observed data.
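The abstract takes as its baseline the fact that, when the system parameters are known, the optimal linear predictor is the Kalman filter. For reference, a minimal sketch of one-step-ahead Kalman prediction for a known LDS (this is the textbook recursion, not the paper's SLIP algorithm; the toy parameters in the usage below are illustrative assumptions):

```python
import numpy as np

def kalman_predict(A, C, Q, R, ys):
    """One-step-ahead predictions y_hat[t] = C @ x_pred[t] for a known LDS
    x[t+1] = A x[t] + w,  y[t] = C x[t] + v, with noise covariances Q, R."""
    n = A.shape[0]
    x = np.zeros(n)          # predicted state mean
    P = np.eye(n)            # predicted state covariance
    preds = []
    for y in ys:
        preds.append(C @ x)                  # predict the next observation
        S = C @ P @ C.T + R                  # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (y - C @ x)              # measurement update
        P = P - K @ C @ P
        x = A @ x                            # time update
        P = A @ P @ A.T + Q
    return np.array(preds)
```

SLIP's setting is precisely the case where `A`, `C`, `Q`, `R` above are *unknown* and only the observations `ys` are available.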
Related papers
- Selective Inference for Time-Varying Effect Moderation [3.8233569758620063]
Causal effect moderation investigates how the effect of interventions (or treatments) on outcome variables changes based on observed characteristics of individuals.
High-dimensional analyses often lack interpretability, with important moderators masked by noise.
We propose a two-step method for selective inference on time-varying causal effect moderation.
arXiv Detail & Related papers (2024-11-24T16:37:48Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Deep Learning for the Benes Filter [91.3755431537592]
We present a new numerical method based on the mesh-free neural network representation of the density of the solution of the Benes model.
We discuss the role of nonlinearity in the filtering model equations for the choice of the domain of the neural network.
arXiv Detail & Related papers (2022-03-09T14:08:38Z)
- On the Benefits of Large Learning Rates for Kernel Methods [110.03020563291788]
We show that a phenomenon can be precisely characterized in the context of kernel methods.
We consider the minimization of a quadratic objective in a separable Hilbert space, and show that with early stopping, the choice of learning rate influences the spectral decomposition of the obtained solution.
arXiv Detail & Related papers (2022-02-28T13:01:04Z)
- Predicting Flat-Fading Channels via Meta-Learned Closed-Form Linear Filters and Equilibrium Propagation [38.42468500092177]
Predicting fading channels is a classical problem with a vast array of applications.
In practice, the Doppler spectrum is unknown, and the predictor has only access to a limited time series of estimated channels.
This paper proposes to leverage meta-learning in order to mitigate the requirements in terms of training data for channel fading prediction.
arXiv Detail & Related papers (2021-10-01T14:00:23Z)
- KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
arXiv Detail & Related papers (2021-07-21T12:26:46Z)
- Deep learning: a statistical viewpoint [120.94133818355645]
Deep learning has revealed some major surprises from a theoretical perspective.
In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems.
We conjecture that specific principles underlie these phenomena.
arXiv Detail & Related papers (2021-03-16T16:26:36Z)
- Online Learning of the Kalman Filter with Logarithmic Regret [2.0305676256390934]
We show that it is possible to achieve a regret of the order of $\mathrm{polylog}(N)$ with high probability, where $N$ is the number of observations collected.
This is achieved using an online least-squares algorithm, which exploits the approximately linear relation between future observations and past observations.
arXiv Detail & Related papers (2020-02-12T18:31:31Z)
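The online least-squares idea in the entry above, regressing future observations on past observations via an approximately linear relation, can be sketched with a recursive least-squares predictor. The window length `d`, regularization `lam`, and the sinusoidal signal in the usage note are hypothetical illustration choices, not taken from the paper:

```python
import numpy as np

def rls_predict(ys, d=4, lam=1e-3):
    """Online least squares: fit y[t] ~ w . y[t-d:t] recursively,
    emitting a prediction before seeing each new observation."""
    Pinv = np.eye(d) / lam   # inverse of the regularized Gram matrix
    w = np.zeros(d)          # current regression weights
    preds = []
    for t in range(d, len(ys)):
        phi = ys[t - d:t]            # window of past observations
        preds.append(w @ phi)        # predict before updating
        # Sherman-Morrison rank-1 update of the inverse Gram matrix
        Pphi = Pinv @ phi
        Pinv -= np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)
        w += Pinv @ phi * (ys[t] - w @ phi)
    return np.array(preds)
```

For example, on a noiseless sinusoid (which satisfies a short linear recurrence) the prediction error shrinks toward zero as observations accumulate, which is the in-sample analogue of the logarithmic-regret behavior the entry describes.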
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.