EINNs: Epidemiologically-Informed Neural Networks
- URL: http://arxiv.org/abs/2202.10446v1
- Date: Mon, 21 Feb 2022 18:59:03 GMT
- Title: EINNs: Epidemiologically-Informed Neural Networks
- Authors: Alexander Rodríguez, Jiaming Cui, Naren Ramakrishnan, Bijaya
Adhikari, B. Aditya Prakash
- Abstract summary: We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
- Score: 75.34199997857341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a new class of physics-informed neural networks, EINNs, crafted for
epidemic forecasting. We investigate how to leverage both the theoretical
flexibility provided by mechanistic models as well as the data-driven
expressivity afforded by AI models, to ingest heterogeneous information.
Although neural forecasting models have been successful in multiple tasks,
long-term predictions and anticipating trend changes remain open challenges.
Epidemiological ODE models contain mechanisms that can guide us in these two
tasks; however, they have limited capability of ingesting data sources and
modeling composite signals. Thus we propose to supervise neural networks with
epidemic mechanistic models while simultaneously learning their hidden
dynamics. Our method, EINN, gives neural models the flexibility to learn
the disease-spread dynamics and to use auxiliary features in a general framework.
In contrast with previous work, we do not assume observability of the complete
dynamics and do not need to numerically solve the ODE equations during
training. Our thorough experiments showcase the clear benefits of our approach
over other non-trivial alternatives.
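The core idea of supervising a neural model with mechanistic dynamics can be illustrated with a minimal toy sketch (an assumption-laden illustration, not the authors' EINN code): penalize a predicted (S, I, R) trajectory by its residual against the SIR equations dS/dt = -βSI, dI/dt = βSI - γI, dR/dt = γI.

```python
# Toy sketch of mechanistic supervision (illustrative only): score a
# predicted (S, I, R) trajectory by its residual against the SIR ODEs.

def sir_residual_loss(S, I, R, beta, gamma, dt):
    """Mean squared mismatch between finite-difference derivatives of the
    trajectory and the SIR right-hand side."""
    loss, n = 0.0, len(S) - 1
    for t in range(n):
        dS = (S[t + 1] - S[t]) / dt
        dI = (I[t + 1] - I[t]) / dt
        dR = (R[t + 1] - R[t]) / dt
        loss += (dS + beta * S[t] * I[t]) ** 2                  # dS/dt = -beta*S*I
        loss += (dI - beta * S[t] * I[t] + gamma * I[t]) ** 2   # dI/dt = beta*S*I - gamma*I
        loss += (dR - gamma * I[t]) ** 2                        # dR/dt = gamma*I
    return loss / n

# A forward-Euler trajectory generated under the same dynamics should have
# near-zero residual; a physics-informed loss term drives a neural
# prediction toward this property.
beta, gamma, dt = 0.3, 0.1, 1.0
S, I, R = [0.99], [0.01], [0.0]
for _ in range(50):
    s, i, r = S[-1], I[-1], R[-1]
    S.append(s - beta * s * i * dt)
    I.append(i + (beta * s * i - gamma * i) * dt)
    R.append(r + gamma * i * dt)

print(sir_residual_loss(S, I, R, beta, gamma, dt))  # ~0.0
```

In a real EINN-style setup the trajectory would come from a neural network and this residual would be one term of the training loss, added to the usual data-fitting term.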
Related papers
- BLEND: Behavior-guided Neural Population Dynamics Modeling via Privileged Knowledge Distillation [6.3559178227943764]
We propose BLEND, a behavior-guided neural population dynamics modeling framework via privileged knowledge distillation.
By considering behavior as privileged information, we train a teacher model that takes both behavior observations (privileged features) and neural activities (regular features) as inputs.
A student model is then distilled using only neural activity.
arXiv Detail & Related papers (2024-10-02T12:45:59Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Approaching epidemiological dynamics of COVID-19 with physics-informed
neural networks [23.95944607153291]
A physics-informed neural network (PINN) embedded with the susceptible-infected-removed (SIR) model is devised to understand the temporal evolution dynamics of infectious diseases.
The method is applied to COVID-19 data reported for Germany and shows that it can accurately identify and predict virus spread trends.
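As a toy illustration of identifying spread parameters from an epidemic curve (an assumed setup, not the paper's PINN), one can recover the SIR transmission rate by grid search against a simulated infection series:

```python
# Toy parameter identification (assumed setup, not the paper's PINN):
# recover the SIR transmission rate beta from an "observed" infection curve
# by grid search over forward-Euler simulations.

def simulate_infections(beta, gamma=0.1, dt=1.0, steps=60):
    s, i = 0.99, 0.01
    curve = [i]
    for _ in range(steps):
        s, i = s - beta * s * i * dt, i + (beta * s * i - gamma * i) * dt
        curve.append(i)
    return curve

observed = simulate_infections(0.25)  # synthetic data, true beta = 0.25

# Pick the beta whose simulated curve best matches the observations.
sse, best_beta = min(
    (sum((a - b) ** 2 for a, b in zip(simulate_infections(c), observed)), c)
    for c in [x / 100 for x in range(10, 51)]
)
print(best_beta)  # recovers 0.25
```

A PINN replaces this brute-force search with gradient-based learning, treating the epidemiological parameters as trainable variables constrained by the ODE residual.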
arXiv Detail & Related papers (2023-02-17T10:36:58Z) - Stretched and measured neural predictions of complex network dynamics [2.1024950052120417]
Data-driven approximations of differential equations present a promising alternative to traditional methods for uncovering a model of dynamical systems.
Neural networks are a recently employed machine learning tool for studying dynamics; they can be used for data-driven solution finding or discovery of differential equations.
We show that extending the model's generalizability beyond traditional statistical learning theory limits is feasible.
arXiv Detail & Related papers (2023-01-12T09:44:59Z) - Bayesian Physics-Informed Neural Networks for real-world nonlinear
dynamical systems [0.0]
We integrate data, physics, and uncertainties by combining neural networks, physics-informed modeling, and Bayesian inference.
Our study reveals the inherent advantages and disadvantages of Neural Networks, Bayesian Inference, and a combination of both.
We anticipate that the underlying concepts and trends generalize to more complex disease conditions.
arXiv Detail & Related papers (2022-05-12T19:04:31Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
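A hypothetical sketch of the simplest pruning flavor, magnitude pruning (the paper's actual criterion may differ), zeroes the smallest-magnitude weights of a layer and measures the resulting sparsity:

```python
# Hypothetical magnitude-pruning sketch (the paper's criterion may differ):
# zero the fraction `sparsity` of weights with the smallest absolute value.

def prune_by_magnitude(weights, sparsity):
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)              # number of entries to prune
    threshold = flat[k - 1] if k > 0 else -1.0
    return [[0.0 if abs(w) <= threshold else w for w in row]
            for row in weights]

W = [[0.01, -0.8, 0.002], [1.2, -0.03, 0.5]]
pruned = prune_by_magnitude(W, 0.5)
zeros = sum(w == 0.0 for row in pruned for w in row)
print(zeros)  # half of the six weights are zeroed
```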
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z) - Learning identifiable and interpretable latent models of
high-dimensional neural activity using pi-VAE [10.529943544385585]
We propose a method that integrates key ingredients from latent models and traditional neural encoding models.
Our method, pi-VAE, is inspired by recent progress on identifiable variational auto-encoder.
We validate pi-VAE using synthetic data, and apply it to analyze neurophysiological datasets from rat hippocampus and macaque motor cortex.
arXiv Detail & Related papers (2020-11-09T22:00:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.