Parameter Estimation for the SEIR Model Using Recurrent Nets
- URL: http://arxiv.org/abs/2105.14524v1
- Date: Sun, 30 May 2021 12:51:45 GMT
- Title: Parameter Estimation for the SEIR Model Using Recurrent Nets
- Authors: Chun Fan, Yuxian Meng, Xiaofei Sun, Fei Wu, Tianwei Zhang, Jiwei Li
- Abstract summary: We find the optimal $\Theta_\text{SEIR}$ based on the differentiable objective.
We observe that the proposed strategy leads to significantly better parameter estimations with a smaller number of simulations.
- Score: 28.065980818744208
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The standard way to estimate the parameters $\Theta_\text{SEIR}$ (e.g., the
transmission rate $\beta$) of an SEIR model is to use grid search, where
simulations are performed on each set of parameters, and the parameter set
leading to the least $L_2$ distance between predicted number of infections and
observed infections is selected. This brute-force strategy is not only
time-consuming, as simulations are slow when the population is large, but also
inaccurate, since it is impossible to enumerate all parameter combinations. To
address these issues, in this paper, we propose to transform the
non-differentiable problem of finding optimal $\Theta_\text{SEIR}$ to a
differentiable one, where we first train a recurrent net to fit a small number
of simulation data. Next, based on this recurrent net, which generalizes across
SEIR simulations, we transform the objective into one that is differentiable
with respect to $\Theta_\text{SEIR}$ and directly obtain its optimal value. The
proposed strategy is both time-efficient, as it relies on only a small number of
SEIR simulations, and accurate, as we find the optimal $\Theta_\text{SEIR}$ from
the differentiable objective.
On two COVID-19 datasets, we observe that the proposed strategy leads to
significantly better parameter estimations with a smaller number of
simulations.
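
A minimal sketch of the two-step pipeline, assuming a deterministic Euler-discretised SEIR integrator and a GRU surrogate; the names (`simulate_seir`, `Surrogate`), hyperparameters, and parameter ranges are illustrative and not taken from the authors' code.

```python
import torch
import torch.nn as nn

def simulate_seir(beta, sigma, gamma, T=100, N=1e6, I0=10.0):
    """Euler-discretised SEIR model; returns the daily infected fraction I/N."""
    S, E, I, R = N - I0, 0.0, I0, 0.0
    out = []
    for _ in range(T):
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S, E, I, R = S + dS, E + dE, I + dI, R + dR
        out.append(I / N)
    return torch.tensor(out, dtype=torch.float32)

class Surrogate(nn.Module):
    """GRU mapping Theta_SEIR = (beta, sigma, gamma) to an infection curve."""
    def __init__(self, T=100, hidden=64):
        super().__init__()
        self.T = T
        self.gru = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, theta):                         # theta: (B, 3)
        x = theta.unsqueeze(1).repeat(1, self.T, 1)   # feed theta at every step
        h, _ = self.gru(x)
        return self.head(h).squeeze(-1)               # (B, T) predicted curve

# Step 1: fit the surrogate on a small number of true simulations.
thetas = torch.rand(200, 3) * torch.tensor([0.5, 0.4, 0.4]) + 0.05
curves = torch.stack([simulate_seir(*t.tolist()) for t in thetas])
net = Surrogate()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    ((net(thetas) - curves) ** 2).mean().backward()
    opt.step()

# Step 2: the L2 objective is now differentiable in theta, so estimate
# Theta_SEIR by gradient descent against the observed infection curve.
for p in net.parameters():
    p.requires_grad_(False)                  # freeze the surrogate
observed = simulate_seir(0.30, 0.20, 0.10)   # stand-in for real case counts
theta_hat = torch.tensor([0.2, 0.2, 0.2], requires_grad=True)
opt2 = torch.optim.Adam([theta_hat], lr=1e-2)
for _ in range(1000):
    opt2.zero_grad()
    ((net(theta_hat.unsqueeze(0))[0] - observed) ** 2).sum().backward()
    opt2.step()
print("estimated Theta_SEIR:", theta_hat.detach().tolist())
```

Once the surrogate is frozen, step 2 is plain gradient descent on three scalars, which is why the approach needs far fewer true simulations than grid search.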
Related papers
- Fast and scalable Wasserstein-1 neural optimal transport solver for single-cell perturbation prediction [55.89763969583124]
Optimal transport theory provides a principled framework for constructing such mappings.
We propose a novel optimal transport solver based on Wasserstein-1.
Our experiments demonstrate that the proposed solver can mimic the $W_2$ OT solvers in finding a unique and "monotonic" map on 2D datasets.
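
For orientation only (this is unrelated to the paper's neural solver, and the Gaussian samples are invented): the Wasserstein-1 distance between two empirical 1-D distributions can be computed directly with SciPy, and for two equal-variance Gaussians it recovers the mean shift.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
source = rng.normal(loc=0.0, scale=1.0, size=1000)  # e.g. control cells
target = rng.normal(loc=2.0, scale=1.0, size=1000)  # e.g. perturbed cells
print(wasserstein_distance(source, target))         # roughly 2.0, the mean shift
```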
arXiv Detail & Related papers (2024-11-01T14:23:19Z)
- Truncating Trajectories in Monte Carlo Policy Evaluation: an Adaptive Approach [51.76826149868971]
Policy evaluation via Monte Carlo simulation is at the core of many MC Reinforcement Learning (RL) algorithms.
We propose as a quality index a surrogate of the mean squared error of a return estimator that uses trajectories of different lengths.
We present an adaptive algorithm called Robust and Iterative Data collection strategy Optimization (RIDO).
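
Not RIDO itself, just a sketch of the quantity its quality index reasons about: the Monte Carlo return computed from a trajectory truncated at horizon H. Shorter trajectories bias the estimate but free up interaction budget for collecting more of them; the reward model below is invented for illustration.

```python
import numpy as np

def truncated_return(rewards, gamma=0.99, H=None):
    """Discounted Monte Carlo return from one trajectory, truncated at H steps."""
    H = len(rewards) if H is None else min(H, len(rewards))
    return sum(gamma ** t * rewards[t] for t in range(H))

rng = np.random.default_rng(0)
trajectories = [rng.normal(1.0, 0.5, size=200) for _ in range(100)]
for H in (10, 50, 200):
    est = np.mean([truncated_return(r, H=H) for r in trajectories])
    print(f"H={H:3d}  mean return estimate = {est:.2f}")
```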
arXiv Detail & Related papers (2024-10-17T11:47:56Z)
- A Specialized Semismooth Newton Method for Kernel-Based Optimal Transport [92.96250725599958]
Kernel-based optimal transport (OT) estimators offer an alternative, functional estimation procedure to address OT problems from samples.
We show that our SSN method achieves a global convergence rate of $O(1/\sqrt{k})$, and a local quadratic convergence rate under standard regularity conditions.
arXiv Detail & Related papers (2023-10-21T18:48:45Z)
- Globally Convergent Accelerated Algorithms for Multilinear Sparse Logistic Regression with $\ell_0$-constraints [2.323238724742687]
Multilinear logistic regression serves as a powerful tool for the analysis of multidimensional data.
We propose an Accelerated Proximal Alternating Linearized Minimization method (APALM$+$) to solve the $\ell_0$-MLSR model.
We also demonstrate that APALM$+$ is globally convergent to a first-order critical point and establish its convergence using the Kurdyka-Łojasiewicz property.
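
The authors' updates are tensor-valued and more involved, but the step that makes $\ell_0$-constrained problems tractable inside proximal alternating schemes is simple to state: the projection onto $\{w : \|w\|_0 \le s\}$ is hard thresholding, which keeps the $s$ largest-magnitude entries and zeroes the rest. A minimal sketch:

```python
import numpy as np

def hard_threshold(w, s):
    """Projection onto {w : ||w||_0 <= s}: keep the s largest |w_i|."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]  # indices of the s largest magnitudes
    out[keep] = w[keep]
    return out

w = np.array([0.1, -3.0, 0.02, 2.1, -0.4])
print(hard_threshold(w, s=2))          # [ 0.  -3.   0.   2.1  0. ]
```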
arXiv Detail & Related papers (2023-09-17T11:05:08Z)
- KL-Entropy-Regularized RL with a Generative Model is Minimax Optimal [70.15267479220691]
We consider and analyze the sample complexity of model-free reinforcement learning with a generative model.
Our analysis shows that it is nearly minimax-optimal for finding an $\varepsilon$-optimal policy when $\varepsilon$ is sufficiently small.
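
As a generic illustration of the kind of operator such sample-complexity analyses concern (this is not the paper's algorithm, and the tabular MDP is made up), soft value iteration replaces the hard max in the Bellman backup with a temperature-scaled log-sum-exp, the fixed-point operator of entropy-regularized RL:

```python
import numpy as np
from scipy.special import logsumexp

nS, nA, gamma, tau = 3, 2, 0.9, 0.1            # tau = entropy temperature
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))  # P[s, a] is a dist over s'
R = rng.uniform(0, 1, size=(nS, nA))

V = np.zeros(nS)
for _ in range(500):                       # soft value iteration
    Q = R + gamma * P @ V                  # (nS, nA) state-action values
    V = tau * logsumexp(Q / tau, axis=1)   # soft max over actions
print("soft optimal values:", V)
```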
arXiv Detail & Related papers (2022-05-27T19:39:24Z)
- Bayesian Target-Vector Optimization for Efficient Parameter Reconstruction [0.0]
We introduce a target-vector optimization scheme that considers all $K$ contributions of the model function and that is specifically suited for parameter reconstruction problems.
It also makes it possible to determine accurate uncertainty estimates with very few observations of the actual model function.
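
A much-simplified sketch of the target-vector idea: fit a surrogate to all $K$ output channels jointly, then reconstruct parameters by least squares on the surrogate. The sklearn Gaussian process, the forward model, and the dimensions are invented placeholders; the authors' Bayesian scheme and its uncertainty estimates are considerably more refined.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.gaussian_process import GaussianProcessRegressor

def model(p):  # hypothetical expensive forward model with K = 3 outputs
    return np.array([np.sin(p[0]), p[0] * p[1], np.cos(p[1])])

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(30, 2))       # a few expensive evaluations
Y = np.array([model(p) for p in X])        # (30, K) target-vector samples
gp = GaussianProcessRegressor().fit(X, Y)  # one surrogate for all K channels

target = model(np.array([0.5, 1.0]))       # "measured" target vector
obj = lambda p: np.sum((gp.predict(p.reshape(1, -1))[0] - target) ** 2)
print("reconstructed parameters:", minimize(obj, x0=np.zeros(2)).x)
```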
arXiv Detail & Related papers (2022-02-23T15:13:32Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
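
The paper applies modern neural LFI; for orientation, the simplest member of that family is rejection ABC, sketched here with an invented one-parameter Gaussian simulator standing in for the dMRI forward model: sample parameters from the prior, simulate, and keep those whose summary statistic lands close to the observation.

```python
import numpy as np

def simulator(theta, rng, n=50):
    """Hypothetical forward model: data ~ Normal(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

rng = np.random.default_rng(0)
observed = simulator(1.5, rng)              # stand-in for measured data
prior_draws = rng.uniform(-5, 5, size=20000)
accepted = [th for th in prior_draws
            if abs(simulator(th, rng).mean() - observed.mean()) < 0.1]
print("approximate posterior mean:", np.mean(accepted))
```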
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Improved Prediction and Network Estimation Using the Monotone Single Index Multi-variate Autoregressive Model [34.529641317832024]
We develop a semi-parametric approach based on the monotone single-index multi-variate autoregressive model (SIMAM).
We provide theoretical guarantees for dependent data and an alternating projected gradient descent algorithm.
We demonstrate the superior performance both on simulated data and two real data examples.
arXiv Detail & Related papers (2021-06-28T12:32:29Z)
- Minimum discrepancy principle strategy for choosing $k$ in $k$-NN regression [2.0411082897313984]
We present a novel data-driven strategy for choosing the hyperparameter $k$ in the $k$-NN regression estimator without using any hold-out data.
We propose a strategy that is easily implemented in practice, based on the idea of early stopping and the minimum discrepancy principle.
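
A toy version of that idea, under the simplifying assumption that the noise level $\sigma$ is known (the actual strategy avoids this): increase $k$, which smooths more and more, and stop the first time the training residual reaches the noise floor.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
sigma = 0.3
X = np.sort(rng.uniform(0, 1, 200)).reshape(-1, 1)
y = np.sin(4 * X.ravel()) + rng.normal(0, sigma, 200)

for k in range(1, 100):
    pred = KNeighborsRegressor(n_neighbors=k).fit(X, y).predict(X)
    if np.mean((y - pred) ** 2) >= sigma ** 2:  # residual hits the noise level
        print("chosen k:", k)
        break
```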
arXiv Detail & Related papers (2020-08-20T00:13:19Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)