Iterated Block Particle Filter for High-dimensional Parameter Learning:
Beating the Curse of Dimensionality
- URL: http://arxiv.org/abs/2110.10745v4
- Date: Tue, 4 Apr 2023 14:36:09 GMT
- Title: Iterated Block Particle Filter for High-dimensional Parameter Learning: Beating the Curse of Dimensionality
- Authors: Ning Ning and Edward L. Ionides
- Abstract summary: Parameter learning for high-dimensional, partially observed, and nonlinear stochastic processes is a methodological challenge.
We propose the iterated block particle filter (IBPF) for learning high-dimensional parameters over graphical state space models.
- Score: 0.6599344783327054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter learning for high-dimensional, partially observed, and nonlinear
stochastic processes is a methodological challenge. Spatiotemporal disease
transmission systems provide examples of such processes giving rise to open
inference problems. We propose the iterated block particle filter (IBPF)
algorithm for learning high-dimensional parameters over graphical state space
models with general state spaces, measures, transition densities and graph
structure. Theoretical performance guarantees are obtained on beating the curse
of dimensionality (COD), algorithm convergence, and likelihood maximization.
Experiments on a highly nonlinear and non-Gaussian spatiotemporal model for
measles transmission reveal that the iterated ensemble Kalman filter algorithm
(Li et al. (2020)) is ineffective and the iterated filtering algorithm (Ionides
et al. (2015)) suffers from the COD, while our IBPF algorithm beats the COD
consistently across various experiments with different metrics.
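The block particle filter at the core of IBPF admits a compact sketch. Below is a minimal NumPy illustration of one assimilation step under simplifying assumptions: the D-dimensional state is partitioned into blocks of coordinates, each block is weighted using only its own observations, and each block is resampled independently, which is the mechanism that keeps Monte Carlo variance from growing with dimension. The callable obs_density and the toy Gaussian density in the usage lines are hypothetical placeholders, not the paper's interface.

```python
import numpy as np

def block_particle_filter_step(particles, y, blocks, obs_density, rng):
    """One assimilation step of a block particle filter (illustrative sketch).

    particles   : (J, D) array of J particles over a D-dimensional state.
    y           : (D,) observation vector, one coordinate per site.
    blocks      : list of index arrays partitioning range(D).
    obs_density : assumed callable; obs_density(y_b, x_b) returns (J,)
                  measurement likelihoods for block b.
    """
    J = particles.shape[0]
    new_particles = particles.copy()
    for b in blocks:
        # Weight each particle using only this block's observations.
        w = obs_density(y[b], particles[:, b])
        w = w / w.sum()
        # Resample this block independently of all other blocks; this
        # blockwise resampling avoids the weight degeneracy that defeats
        # a global particle filter in high dimensions.
        idx = rng.choice(J, size=J, p=w)
        new_particles[:, b] = particles[idx][:, b]
    return new_particles

# Toy usage with a Gaussian measurement density and two blocks of 3 sites.
rng = np.random.default_rng(0)
J, D = 100, 6
particles = rng.standard_normal((J, D))
y = rng.standard_normal(D)
gauss = lambda y_b, x_b: np.exp(-0.5 * ((x_b - y_b) ** 2).sum(axis=1))
blocks = [np.arange(0, 3), np.arange(3, 6)]
particles = block_particle_filter_step(particles, y, blocks, gauss, rng)
```

IBPF wraps steps like this in an iterated-filtering loop: parameters receive small random-walk perturbations, are filtered alongside the state, and the perturbation scale is cooled over iterations so the parameter swarm contracts toward a likelihood maximizer. The sketch above is illustrative of the general scheme, not the authors' reference implementation.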
Related papers
- Permutation Invariant Learning with High-Dimensional Particle Filters [8.878254892409005]
Sequential learning in deep models often suffers from challenges such as catastrophic forgetting and loss of plasticity.
We introduce a novel permutation-invariant learning framework based on high-dimensional particle filters.
arXiv Detail & Related papers (2024-10-30T05:06:55Z)
- Massive Dimensions Reduction and Hybridization with Meta-heuristics in Deep Learning [0.24578723416255746]
Histogram-based Differential Evolution (HBDE) hybridizes gradient-based and gradient-free algorithms to optimize parameters.
HBDE outperforms baseline gradient-based and parent gradient-free DE algorithms evaluated on CIFAR-10 and CIFAR-100 datasets.
arXiv Detail & Related papers (2024-08-13T20:28:20Z)
- A Mirror Descent-Based Algorithm for Corruption-Tolerant Distributed Gradient Descent [57.64826450787237]
We show how to analyze the behavior of distributed gradient descent algorithms in the presence of adversarial corruptions.
We show how to use ideas from (lazy) mirror descent to design a corruption-tolerant distributed optimization algorithm.
Experiments based on linear regression, support vector classification, and softmax classification on the MNIST dataset corroborate our theoretical findings.
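A minimal mirror-descent sketch appears after this list.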
arXiv Detail & Related papers (2024-07-19T08:29:12Z)
- Accelerated Inference for Partially Observed Markov Processes using Automatic Differentiation [4.872049174955585]
Automatic differentiation (AD) has driven recent advances in machine learning.
We show how to embed two existing AD particle filter methods in a theoretical framework that provides an extension to a new class of algorithms.
We develop likelihood algorithms suited to the Monte Carlo properties of the AD gradient estimate.
arXiv Detail & Related papers (2024-07-03T13:06:46Z)
- Closed-form Filtering for Non-linear Systems [83.91296397912218]
We propose a new class of filters based on Gaussian PSD Models, which offer several advantages in terms of density approximation and computational efficiency.
We show that filtering can be efficiently performed in closed form when transitions and observations are Gaussian PSD Models.
Our proposed estimator enjoys strong theoretical guarantees, with estimation error that depends on the quality of the approximation and is adaptive to the regularity of the transition probabilities.
arXiv Detail & Related papers (2024-02-15T08:51:49Z)
- Nonlinear Filtering with Brenier Optimal Transport Maps [4.745059103971596]
This paper is concerned with the problem of nonlinear filtering, i.e., computing the conditional distribution of the state of a dynamical system.
Conventional sequential importance resampling (SIR) particle filters suffer from fundamental limitations in scenarios involving degenerate likelihoods or high-dimensional states.
In this paper, we explore an alternative method, which is based on estimating the Brenier optimal transport (OT) map from the current prior distribution of the state to the posterior distribution at the next time step.
arXiv Detail & Related papers (2023-10-21T01:34:30Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior precision matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
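A minimal Monte Carlo sketch of this probabilistic representation appears after this list.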
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Computational Doob's h-transforms for Online Filtering of Discretely Observed Diffusions [65.74069050283998]
We propose a computational framework to approximate Doob's $h$-transforms.
The proposed approach can be orders of magnitude more efficient than state-of-the-art particle filters.
arXiv Detail & Related papers (2022-06-07T15:03:05Z)
- Learning to Guide Random Search [111.71167792453473]
We consider derivative-free optimization of a high-dimensional function that lies on a latent low-dimensional manifold.
We develop an online learning approach that learns this manifold while performing the optimization.
We empirically evaluate the method on continuous optimization benchmarks and high-dimensional continuous control problems.
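A minimal sketch of subspace-restricted random search appears after this list.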
arXiv Detail & Related papers (2020-04-25T19:21:14Z)
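On the mirror-descent paper above: mirror descent replaces the Euclidean gradient step with one adapted to the geometry of the feasible set. A minimal self-contained sketch using the entropic mirror map on the probability simplex (the exponentiated-gradient update); the loss, step size, and single-worker setting are illustrative assumptions, and the corruption-tolerant aggregation that is the paper's contribution is not modeled here.

```python
import numpy as np

def mirror_descent_step(x, grad, eta):
    """Entropic mirror descent on the simplex: a multiplicative update
    followed by renormalization (exponentiated gradient)."""
    y = x * np.exp(-eta * grad)
    return y / y.sum()

# Toy usage: minimize the linear loss <c, x> over the probability simplex.
rng = np.random.default_rng(0)
c = rng.normal(size=5)
x = np.full(5, 1 / 5)             # start at the uniform distribution
for _ in range(200):
    x = mirror_descent_step(x, c, eta=0.1)
print(x.round(3))                 # mass concentrates on the argmin of c
```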
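On the Monte Carlo Neural PDE Solver entry above: the probabilistic representation in question is the Feynman-Kac view, in which the PDE solution at a point is an expectation over random particles started there. A minimal sketch for the 1-D heat equation u_t = u_xx, where u(x, t) = E[u0(x + sqrt(2t) Z)] with Z standard normal; the choice of equation and initial condition is illustrative, and the cited method trains a neural solver on such estimates rather than evaluating them pointwise.

```python
import numpy as np

def heat_solution_mc(u0, x, t, n_samples=100_000, rng=None):
    """Monte Carlo estimate of u(x, t) for u_t = u_xx with u(., 0) = u0,
    via the probabilistic representation u(x, t) = E[u0(x + sqrt(2t) Z)]."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(n_samples)
    return u0(x + np.sqrt(2 * t) * z).mean()

# Check against the exact solution for u0(x) = sin(x), which is
# u(x, t) = exp(-t) * sin(x).
x, t = 0.7, 0.25
print(heat_solution_mc(np.sin, x, t))   # Monte Carlo estimate
print(np.exp(-t) * np.sin(x))           # exact value
```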
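On the Learning to Guide Random Search entry above: when the objective varies only along a low-dimensional latent manifold, random search restricted to a low-dimensional subspace of the parameters can make progress that full-dimensional random search cannot. A minimal sketch with a fixed random subspace; the cited method instead learns the manifold online during optimization, which is omitted here, and all names and constants are illustrative.

```python
import numpy as np

def guided_random_search(f, theta0, d_latent, steps=500, sigma=0.3, rng=None):
    """Derivative-free minimization of f by greedy random search restricted
    to a fixed d_latent-dimensional subspace of the parameters (sketch)."""
    rng = rng or np.random.default_rng(0)
    D = theta0.size
    A = rng.standard_normal((D, d_latent)) / np.sqrt(d_latent)  # fixed subspace
    theta, best = theta0.copy(), f(theta0)
    for _ in range(steps):
        cand = theta + sigma * A @ rng.standard_normal(d_latent)
        val = f(cand)
        if val < best:               # greedy accept-if-better rule
            theta, best = cand, val
    return theta, best

# Toy objective varying only along 2 latent directions of a 100-D space.
rng = np.random.default_rng(1)
P = rng.standard_normal((2, 100))
f = lambda th: np.sum((P @ th - 1.0) ** 2)
theta, best = guided_random_search(f, np.zeros(100), d_latent=2, rng=rng)
print(best)                          # decreases far below f(0)
```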
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.