An Improved Mathematical Model of Sepsis: Modeling, Bifurcation Analysis, and Optimal Control Study for Complex Nonlinear Infectious Disease System
- URL: http://arxiv.org/abs/2201.02702v1
- Date: Fri, 7 Jan 2022 22:51:11 GMT
- Title: An Improved Mathematical Model of Sepsis: Modeling, Bifurcation Analysis, and Optimal Control Study for Complex Nonlinear Infectious Disease System
- Authors: Yuyang Chen, Kaiming Bi, Chih-Hang J. Wu, David Ben-Arieh, Ashesh Sinha
- Abstract summary: Sepsis is a life-threatening medical emergency, which is a major cause of death worldwide and the second highest cause of mortality in the United States.
Researching the optimal control treatment or intervention strategy on the comprehensive sepsis system is key in reducing mortality.
- Score: 1.5119440099674917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sepsis is a life-threatening medical emergency, which is a major cause of
death worldwide and the second highest cause of mortality in the United States.
Researching the optimal control treatment or intervention strategy on the
comprehensive sepsis system is key in reducing mortality. For this purpose,
first, this paper improves a complex nonlinear sepsis model proposed in our
previous work. Then, bifurcation analyses are conducted for each sepsis
subsystem to study the model behaviors under certain system parameters. The
bifurcation analysis results further indicate the necessity of control
treatment and intervention therapy: if no control is added to the sepsis
system under certain parameter and initial system value settings, the system
exhibits persistent inflammation outcomes over time. Therefore, we develop
our improved complex nonlinear sepsis model into a sepsis optimal control
model, and use effective biomarkers recommended in existing clinical practice
as the optimization objective function to measure the development of sepsis.
In addition, a Bayesian optimization algorithm combined with a recurrent
neural network (the RNN-BO algorithm) is introduced to predict the optimal
control strategy for the studied sepsis optimal control system. What
distinguishes the RNN-BO algorithm from other optimization algorithms is
that, given any new initial system value setting (the initial value is
associated with the initial conditions of a patient), the RNN-BO algorithm
can quickly predict a corresponding time-series optimal control based on
historical optimal control data for any new sepsis patient. To demonstrate
the effectiveness and efficiency of the RNN-BO algorithm in solving the
optimal control problem for the complex nonlinear sepsis system, numerical
simulations comparing it with other optimization algorithms are presented in
this paper.
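The qualitative claim above, that without control the system settles into persistent inflammation while a sufficiently strong control drives it down, can be sketched with a toy model. The logistic infection model and all parameter values below are illustrative assumptions, not the paper's actual sepsis model:

```python
def simulate(u, p0=0.1, growth=0.5, t_end=200.0, dt=0.1):
    """Forward-Euler simulation of a hypothetical toy inflammation model:

        dp/dt = growth * p * (1 - p) - u * p

    `p` is an inflammation level and `u` a constant treatment intensity.
    This is a stand-in chosen only to illustrate the qualitative behavior,
    not the sepsis subsystem studied in the paper.
    """
    p = p0
    for _ in range(int(t_end / dt)):
        p += dt * (growth * p * (1.0 - p) - u * p)
    return p

# Without control the state settles near the inflamed equilibrium p = 1
# (persistent inflammation); with a control stronger than the growth rate
# (u > growth) the inflammation decays toward 0.
uncontrolled = simulate(u=0.0)
controlled = simulate(u=0.6)
```

In this toy model the uncontrolled equilibrium loses its benign fixed point at `u = growth`, which mirrors, in miniature, the kind of parameter-dependent regime change a bifurcation analysis detects.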
Related papers
- Sub-linear Regret in Adaptive Model Predictive Control [56.705978425244496]
We present STT-MPC (Self-Tuning Tube-based Model Predictive Control), an online algorithm that combines the certainty-equivalence principle with polytopic tubes.
We analyze the regret of the algorithm compared to an algorithm with prior knowledge of the system dynamics.
arXiv Detail & Related papers (2023-10-07T15:07:10Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm, where the suboptimality of the learned policies is comparable to the rate achieved as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z)
- High-dimensional Bayesian Optimization Algorithm with Recurrent Neural Network for Disease Control Models in Time Series [1.9371782627708491]
We propose a new high-dimensional Bayesian optimization algorithm that combines recurrent neural networks.
The proposed RNN-BO algorithm can solve the optimal control problems in a lower-dimensional space.
We also discuss the impacts of different numbers of the RNN layers and training epochs on the trade-off between solution quality and related computational efforts.
arXiv Detail & Related papers (2022-01-01T08:40:17Z)
- Neural-iLQR: A Learning-Aided Shooting Method for Trajectory Optimization [17.25824905485415]
We present Neural-iLQR, a learning-aided shooting method over the unconstrained control space.
It is shown to outperform the conventional iLQR significantly in the presence of inaccuracies in system models.
arXiv Detail & Related papers (2020-11-21T07:17:28Z)
- Iterative Surrogate Model Optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks [14.380314061763508]
We present a novel active learning algorithm, termed iterative surrogate model optimization (ISMO).
This algorithm is based on deep neural networks and its key feature is the iterative selection of training data through a feedback loop between deep neural networks and any underlying standard optimization algorithm.
arXiv Detail & Related papers (2020-08-13T07:31:07Z) - An Asymptotically Optimal Multi-Armed Bandit Algorithm and
Hyperparameter Optimization [48.5614138038673]
We propose an efficient and robust bandit-based algorithm called Sub-Sampling (SS) for the scenario of hyperparameter search evaluation.
We also develop a novel hyperparameter optimization algorithm called BOSS.
Empirical studies validate our theoretical arguments for SS and demonstrate the superior performance of BOSS on a number of applications.
arXiv Detail & Related papers (2020-07-11T03:15:21Z) - Single-step deep reinforcement learning for open-loop control of laminar
and turbulent flows [0.0]
This research gauges the ability of deep reinforcement learning (DRL) techniques to assist the optimization and control of fluid mechanical systems.
It uses a novel, "degenerate" version of the proximal policy optimization (PPO) algorithm that trains a neural network to optimize the system only once per learning episode.
arXiv Detail & Related papers (2020-06-04T16:11:26Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study large-scale distributed stochastic AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds in theory.
Experiments on several datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Adaptive Control and Regret Minimization in Linear Quadratic Gaussian
(LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z) - Loss landscapes and optimization in over-parameterized non-linear
systems and neural networks [20.44438519046223]
We show that wide neural networks satisfy the PL$*$ condition, which explains the (S)GD convergence to a global minimum.
arXiv Detail & Related papers (2020-02-29T17:18:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.