Online Control in Population Dynamics
- URL: http://arxiv.org/abs/2406.01799v2
- Date: Thu, 6 Jun 2024 13:29:48 GMT
- Title: Online Control in Population Dynamics
- Authors: Noah Golowich, Elad Hazan, Zhou Lu, Dhruv Rohatgi, Y. Jennifer Sun
- Abstract summary: We propose a new framework based on the paradigm of online control.
We first characterize a set of linear dynamical systems that can naturally model evolving populations.
We then give an efficient gradient-based controller for these systems, with near-optimal regret bounds.
- Score: 32.09385328027713
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of population dynamics originated with early sociological works but has since extended into many fields, including biology, epidemiology, evolutionary game theory, and economics. Most studies on population dynamics focus on the problem of prediction rather than control. Existing mathematical models for control in population dynamics are often restricted to specific, noise-free dynamics, while real-world population changes can be complex and adversarial. To address this gap, we propose a new framework based on the paradigm of online control. We first characterize a set of linear dynamical systems that can naturally model evolving populations. We then give an efficient gradient-based controller for these systems, with near-optimal regret bounds with respect to a broad class of linear policies. Our empirical evaluations demonstrate the effectiveness of the proposed algorithm for control in population dynamics even for non-linear models such as SIR and replicator dynamics.
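To make the online-control setting concrete, below is a minimal sketch of a gradient-based linear controller on a small linear population model x_{t+1} = A x_t + B u_t + w_t. This illustrates the general paradigm only; it is not the paper's exact controller or its regret-optimal parameterization. The matrices A and B, the quadratic cost, the one-step look-ahead gradient, and the step size are all assumed values chosen for demonstration.

```python
import numpy as np

# Minimal sketch of online gradient-based control of a linear
# population model x_{t+1} = A x_t + B u_t + w_t. NOT the paper's
# exact algorithm: A, B, the cost weights, and eta are assumptions.

rng = np.random.default_rng(0)
n, m, T = 3, 3, 200                  # state dim, control dim, horizon
A = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.1, 0.9]])      # hypothetical population transition matrix
B = 0.1 * np.eye(m)                  # hypothetical control influence
x_target = np.ones(n)                # desired population profile
eta, lam = 0.05, 0.01                # step size and control penalty (assumed)

K = np.zeros((m, n))                 # linear policy u_t = -K x_t
x = np.array([2.0, 0.5, 1.5])        # initial population
total_cost = 0.0

for t in range(T):
    u = -K @ x
    total_cost += np.sum((x - x_target) ** 2) + lam * np.sum(u ** 2)
    # Gradient of the one-step look-ahead cost ||A x + B u - x*||^2
    # + lam ||u||^2 with respect to K, via the chain rule through u = -K x.
    x_next_pred = A @ x + B @ u
    grad_u = 2.0 * B.T @ (x_next_pred - x_target) + 2.0 * lam * u
    grad_K = -np.outer(grad_u, x)
    K -= eta * grad_K                # online gradient update of the policy
    w = 0.01 * rng.standard_normal(n)  # bounded noise (adversarial in the paper)
    x = A @ x + B @ u + w

print("average cost:", total_cost / T)
print("final state:", np.round(x, 3))
```

In the paper's actual setting the perturbations may be adversarial and regret is measured against a broad class of linear policies; swapping the linear update for SIR or replicator dynamics yields the non-linear case the authors evaluate empirically.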
Related papers
- Logarithmic Regret for Nonlinear Control [5.473636587010879]
We address the problem of learning to control an unknown nonlinear dynamical system through sequential interactions.
Motivated by high-stakes applications in which mistakes can be catastrophic, we study situations where it is possible for fast sequential learning to occur.
arXiv Detail & Related papers (2025-01-17T15:42:42Z)
- Reinforcement Learning for Control of Non-Markovian Cellular Population Dynamics [1.03590082373586]
We apply reinforcement learning to identify informed dosing strategies to control cell populations evolving under novel non-Markovian dynamics.
We find that model-free deep RL is able to recover exact solutions and control cell populations even in the presence of long-range temporal dynamics.
arXiv Detail & Related papers (2024-10-11T01:02:30Z)
- Learning System Dynamics without Forgetting [60.08612207170659]
Predicting trajectories of systems with unknown dynamics is crucial in various research fields, including physics and biology.
We present a novel framework of Mode-switching Graph ODE (MS-GODE), which can continually learn varying dynamics.
We construct a novel benchmark of biological dynamic systems, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z)
- Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- Constructing Neural Network-Based Models for Simulating Dynamical Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z)
- Sparsity in Partially Controllable Linear Systems [56.142264865866636]
We study partially controllable linear dynamical systems specified by an underlying sparsity pattern.
Our results characterize those state variables which are irrelevant for optimal control.
arXiv Detail & Related papers (2021-10-12T16:41:47Z)
- GEM: Group Enhanced Model for Learning Dynamical Control Systems [78.56159072162103]
We build effective dynamical models that are amenable to sample-based learning.
We show that learning the dynamics on a Lie algebra vector space is more effective than learning a direct state transition model.
This work sheds light on a connection between learning of dynamics and Lie group properties, which opens doors for new research directions.
arXiv Detail & Related papers (2021-04-07T01:08:18Z)
- Hierarchical Decomposition of Nonlinear Dynamics and Control for System Identification and Policy Distillation [39.83837705993256]
Current trends in reinforcement learning (RL) focus on complex representations of dynamics and policies.
We take inspiration from the control community and apply the principles of hybrid switching systems in order to break down complex dynamics into simpler components.
arXiv Detail & Related papers (2020-05-04T12:40:59Z)
- Learning Stable Deep Dynamics Models [91.90131512825504]
We propose an approach for learning dynamical systems that are guaranteed to be stable over the entire state space.
We show that such learning systems are able to model simple dynamical systems and can be combined with additional deep generative models to learn complex dynamics.
arXiv Detail & Related papers (2020-01-17T00:04:45Z)
- Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction [13.860437051795419]
We exploit strong connections between classical adaptive nonlinear control techniques and recent progress in machine learning.
We show that there exists considerable untapped potential in algorithm development for both adaptive nonlinear control and adaptive dynamics prediction.
arXiv Detail & Related papers (2019-12-31T03:13:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.