DeepEpiSolver: Unravelling Inverse problems in Covid, HIV, Ebola and
Disease Transmission
- URL: http://arxiv.org/abs/2303.14194v1
- Date: Fri, 24 Mar 2023 11:47:16 GMT
- Title: DeepEpiSolver: Unravelling Inverse problems in Covid, HIV, Ebola and
Disease Transmission
- Authors: Ritam Majumdar, Shirish Karande, Lovekesh Vig
- Abstract summary: We use a neural network to learn the mapping between spread trajectories and coefficients of SIDR in an offline manner.
We observe a speed-up of 3-4 orders of magnitude with accuracy comparable to that of PINNs for 11 highly infectious diseases.
- Score: 15.199209463685706
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The spread of many infectious diseases is modeled using variants of the SIR
compartmental model, a system of coupled differential equations. The coefficients
of the SIR model determine the spread trajectories of disease, on whose basis
proactive measures can be taken. Hence, the coefficient estimates must be both
fast and accurate. Shaier et al. in the paper "Disease Informed Neural
Networks" used Physics Informed Neural Networks (PINNs) to estimate the
parameters of the SIR model. There are two drawbacks to this approach. First,
the training time for PINNs is high, with certain diseases taking close to 90
hrs to train. Second, PINNs don't generalize to a new SIDR (Susceptible-Infected-Dead-Recovered) trajectory, and
learning its corresponding SIR parameters requires retraining the PINN from
scratch. In this work, we aim to eliminate both of these drawbacks. We generate
a dataset pairing the ODE parameters with their spread trajectories by solving
the forward problem for a large distribution of parameters using the LSODA
algorithm. We then use a neural network to learn the mapping between spread
trajectories and coefficients of SIDR in an offline manner. This allows us to
learn the parameters of a new spread trajectory without having to retrain,
enabling generalization at test time. We observe a speed-up of 3-4 orders of
magnitude with accuracy comparable to that of PINNs for 11 highly infectious
diseases. Further fine-tuning of the neural-network-inferred ODE coefficients
with a PINN yields an additional 2-3 orders of magnitude improvement in the
estimated coefficients.
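The offline pipeline described in the abstract is straightforward to prototype. The sketch below is a minimal, illustrative version under stated assumptions: the SIDR right-hand side, parameter ranges, and network size are stand-ins rather than the paper's exact setup. SciPy's odeint (which wraps the LSODA solver) handles the forward problem, and a small MLP learns the inverse map offline.

```python
# Minimal sketch of the offline inverse-problem pipeline (illustrative only).
import numpy as np
from scipy.integrate import odeint  # odeint wraps the LSODA solver
from sklearn.neural_network import MLPRegressor

def sidr_rhs(y, t, beta, gamma, mu):
    # Assumed SIDR dynamics: infection, recovery, and death terms.
    S, I, D, R = y
    N = S + I + D + R
    dS = -beta * S * I / N
    dI = beta * S * I / N - (gamma + mu) * I
    dD = mu * I
    dR = gamma * I
    return [dS, dI, dD, dR]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 50)   # observation grid (days)
y0 = [0.99, 0.01, 0.0, 0.0]       # normalized initial S, I, D, R

# Forward problem: solve the ODE for many sampled parameter triples, offline.
params = rng.uniform([0.1, 0.02, 0.001], [0.8, 0.3, 0.05], size=(5000, 3))
X = np.stack([odeint(sidr_rhs, y0, t, args=tuple(p)).ravel() for p in params])

# Inverse map: regress (beta, gamma, mu) from the flattened trajectory.
net = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=500)
net.fit(X, params)

# At test time a new trajectory yields coefficients in a single forward pass,
# with no PINN retraining.
beta_hat, gamma_hat, mu_hat = net.predict(X[:1])[0]
```

Because the expensive ODE solves all happen offline, inference reduces to one network evaluation, which is where the reported 3-4 orders-of-magnitude speed-up comes from.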
Related papers
- Discovering Long-Term Effects on Parameter Efficient Fine-tuning [36.83255498301937]
Pre-trained Artificial Neural Networks (ANNs) exhibit robust pattern recognition capabilities.
ANNs share extensive similarities with the human brain, specifically with Biological Neural Networks (BNNs).
ANNs can acquire new knowledge through fine-tuning.
arXiv Detail & Related papers (2024-08-24T03:27:29Z)
- Sparsifying Bayesian neural networks with latent binary variables and normalizing flows [10.865434331546126]
We consider two extensions to the latent binary Bayesian neural network (LBBNN) method.
First, by using the local reparametrization trick (LRT) to sample the hidden units directly, we obtain a more computationally efficient algorithm.
More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean-field Gaussian.
arXiv Detail & Related papers (2023-05-05T09:40:28Z)
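As a rough illustration of the LRT idea mentioned above, the sketch below samples the pre-activations of a single mean-field Bayesian linear layer directly instead of sampling a full weight matrix; the LBBNN binary gates and normalizing-flow posterior from the paper are omitted.

```python
# Minimal sketch of the local reparametrization trick (LRT) for one
# mean-field Bayesian linear layer; not the full LBBNN method.
import torch
import torch.nn as nn

class LRTLinear(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(n_in, n_out) * 0.1)    # weight means
        self.rho = nn.Parameter(torch.full((n_in, n_out), -3.0))  # softplus -> std

    def forward(self, x):
        sigma = torch.nn.functional.softplus(self.rho)
        # Sample activations directly: one Gaussian draw per pre-activation,
        # which is cheaper and lower-variance than sampling weight matrices.
        act_mu = x @ self.mu
        act_var = (x ** 2) @ (sigma ** 2)
        return act_mu + act_var.sqrt() * torch.randn_like(act_mu)
```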
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z)
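The sketch below is a generic stand-in for handling irregular observation times with a recurrent model: it feeds the inter-observation time gap as an extra input feature. This is a common simplification, not the specific CTRNN family the paper surveys.

```python
# Generic sketch: a GRU that consumes irregularly sampled observations by
# appending the time since the previous observation as an input feature.
import torch
import torch.nn as nn

class IrregularGRU(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        # +1 input channel carries the inter-observation time gap
        self.gru = nn.GRU(n_features + 1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, values, times):
        # values: (batch, seq, n_features); times: (batch, seq)
        deltas = torch.diff(times, prepend=times[:, :1], dim=1)
        x = torch.cat([values, deltas.unsqueeze(-1)], dim=-1)
        h, _ = self.gru(x)
        return self.head(h)  # one-step-ahead forecast per observation
```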
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
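As a rough sketch of the POD reduction step described above: compress a snapshot matrix to a few modes via SVD, leaving low-dimensional temporal coefficients for a downstream deep-learning predictor. The random snapshot matrix here is a stand-in for the paper's reacting-flow fields.

```python
# Minimal POD sketch: SVD of mean-centered snapshots gives spatial modes and
# temporal coefficients; a learned model then forecasts the coefficients.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.random((2000, 400))       # (spatial dofs, temporal points)
mean = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)

r = 10                                    # keep a handful of POD modes
modes = U[:, :r]                          # spatial basis
coeffs = np.diag(s[:r]) @ Vt[:r]          # (r, temporal points) coefficients

# A deep-learning model would predict future columns of `coeffs`;
# the full field is then recovered from the reduced representation.
recon = mean + modes @ coeffs
```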
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
A Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Efficiently training SNNs is a challenge due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
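For context on the non-differentiability mentioned above, here is a generic surrogate-gradient sketch: a hard threshold in the forward pass, with a smooth derivative substituted in the backward pass. This illustrates the obstacle DSR tackles; it is not the DSR method itself, which instead differentiates on the spike representation.

```python
# Generic surrogate-gradient sketch for the spiking non-linearity (not DSR).
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()            # hard threshold: emit a spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Replace the zero-almost-everywhere derivative with a smooth bump.
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2

v = torch.randn(8, requires_grad=True)    # membrane potentials
spikes = SurrogateSpike.apply(v)          # forward: binary spikes
spikes.sum().backward()                   # backward: surrogate gradient flows
```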
- Study of Drug Assimilation in Human System using Physics Informed Neural Networks [0.0]
We study two mathematical models of drug assimilation in the human system using Physics Informed Neural Networks (PINNs).
The resulting differential equations are solved using PINNs, where we employ a feed-forward multilayer perceptron as the function approximator and tune the network parameters for minimum error.
We have employed DeepXDE, a Python library for PINNs, to solve the simultaneous first-order differential equations describing the two models of drug assimilation.
arXiv Detail & Related papers (2021-10-08T07:46:46Z)
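A minimal DeepXDE sketch in the spirit of the above, for a generic two-compartment (gut-to-blood) drug-assimilation ODE; the rate constants k1 and k2, the initial dose, and the network size are illustrative assumptions, not the paper's models.

```python
# Illustrative DeepXDE sketch for a generic two-compartment drug model:
# dG/dt = -k1*G (gut), dB/dt = k1*G - k2*B (blood).
import deepxde as dde

k1, k2 = 1.0, 0.5  # assumed absorption and elimination rates

def ode_system(t, y):
    g, b = y[:, 0:1], y[:, 1:2]           # gut and blood concentrations
    dg_dt = dde.grad.jacobian(y, t, i=0)
    db_dt = dde.grad.jacobian(y, t, i=1)
    return [dg_dt + k1 * g, db_dt - k1 * g + k2 * b]

geom = dde.geometry.TimeDomain(0, 10)
ic_g = dde.icbc.IC(geom, lambda t: 1.0, lambda _, on_initial: on_initial, component=0)
ic_b = dde.icbc.IC(geom, lambda t: 0.0, lambda _, on_initial: on_initial, component=1)

data = dde.data.PDE(geom, ode_system, [ic_g, ic_b], num_domain=200, num_boundary=2)
net = dde.nn.FNN([1] + [50] * 3 + [2], "tanh", "Glorot uniform")  # feed-forward MLP

model = dde.Model(data, net)
model.compile("adam", lr=1e-3)
model.train(iterations=20000)
```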
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
arXiv Detail & Related papers (2020-12-19T02:03:53Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
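As a generic template for the min-max formulation described above, the sketch below alternates a gradient-descent step for the minimizing network with a gradient-ascent step for the maximizing network; the toy saddle objective stands in for the paper's SEM operator equation.

```python
# Generic descent-ascent sketch for a two-player min-max game between
# neural networks; the objective is a toy stand-in, not the SEM operator.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # minimizer
g = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # maximizer
opt_f = torch.optim.SGD(f.parameters(), lr=1e-2)
opt_g = torch.optim.SGD(g.parameters(), lr=1e-2)

x = torch.randn(256, 1)
for step in range(1000):
    # Descent step for the minimizing player f.
    loss = (f(x) * g(x)).mean() - 0.5 * (g(x) ** 2).mean()
    opt_f.zero_grad()
    loss.backward()
    opt_f.step()
    # Ascent step for the maximizing player g (minimize the negated loss).
    loss_g = -((f(x) * g(x)).mean() - 0.5 * (g(x) ** 2).mean())
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```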