Deep Reinforcement Learning for Controlled Traversing of the Attractor
Landscape of Boolean Models in the Context of Cellular Reprogramming
- URL: http://arxiv.org/abs/2402.08491v2
- Date: Tue, 20 Feb 2024 14:40:23 GMT
- Title: Deep Reinforcement Learning for Controlled Traversing of the Attractor
Landscape of Boolean Models in the Context of Cellular Reprogramming
- Authors: Andrzej Mizera, Jakub Zarzycki
- Abstract summary: We develop a novel computational framework based on deep reinforcement learning that facilitates the identification of reprogramming strategies.
We formulate a control problem in the context of cellular reprogramming for the frameworks of Boolean networks (BNs) and probabilistic Boolean networks (PBNs) under the asynchronous update mode.
We also introduce the notion of a pseudo-attractor and a procedure for the identification of pseudo-attractor states during training.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cellular reprogramming can be used for both the prevention and cure of
different diseases. However, the efficiency of discovering reprogramming
strategies with classical wet-lab experiments is hindered by lengthy time
commitments and high costs. In this study, we develop a novel computational
framework based on deep reinforcement learning that facilitates the
identification of reprogramming strategies. To this end, we formulate a
control problem in the context of cellular reprogramming for the frameworks of
BNs and PBNs under the asynchronous update mode. Furthermore, we introduce the
notion of a pseudo-attractor and a procedure for the identification of
pseudo-attractor states during training. Finally, we devise a computational
framework for solving the control problem, which we test on a number of
different models.
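For concreteness, the following is a minimal sketch of the setting described above: a Boolean network under the asynchronous update mode, together with a crude frequency-based stand-in for pseudo-attractor identification. The node functions, trajectory length, and threshold are illustrative assumptions, not the authors' procedure.

```python
import random
from collections import Counter

# Asynchronous update mode: at each step a single node, chosen uniformly at
# random, is updated according to its Boolean function (a minimal sketch,
# not the authors' implementation).
def async_step(state, functions, rng):
    i = rng.randrange(len(state))      # pick one node to update
    new = list(state)
    new[i] = functions[i](state)       # recompute only that node
    return tuple(new)

# Toy 3-node network; these functions are illustrative, not from the paper.
functions = [
    lambda s: s[1] & s[2],             # x0 <- x1 AND x2
    lambda s: s[0] | s[2],             # x1 <- x0 OR x2
    lambda s: 1 - s[0],                # x2 <- NOT x0
]

# Crude pseudo-attractor heuristic in the spirit of the abstract: states
# revisited often along a long trajectory are flagged as pseudo-attractor
# states (the paper's actual identification procedure may differ).
rng = random.Random(0)
state, visits, steps = (1, 0, 1), Counter(), 10_000
for _ in range(steps):
    state = async_step(state, functions, rng)
    visits[state] += 1
pseudo_attractor_states = {s for s, c in visits.items() if c > 0.05 * steps}
print(sorted(pseudo_attractor_states))
```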
Related papers
- Let LLMs Break Free from Overthinking via Self-Braking Tuning [60.08396797526657]
Large reasoning models (LRMs) have significantly enhanced their reasoning capabilities by generating longer chains of thought. This performance gain comes at the cost of a substantial increase in redundant reasoning during generation. We propose a novel framework, Self-Braking Tuning (SBT), which tackles overthinking by allowing the model to regulate its own reasoning process.
arXiv Detail & Related papers (2025-05-20T16:53:40Z)
- Graph Neural Network-Based Reinforcement Learning for Controlling Biological Networks: The GATTACA Framework [0.0]
We explore the use of deep reinforcement learning (DRL) to control network models of complex biological systems. We formulate a novel control problem for Boolean network models under the asynchronous update mode in the context of cellular reprogramming. To leverage the structure of biological systems, we incorporate graph neural networks with graph convolutions into the artificial neural network approximator for the action-value function learned by the DRL agent.
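As a rough illustration of the architecture sketched in this summary, below is a hedged sketch of a graph-convolutional action-value network. The layer sizes, the normalized-adjacency convolution, and the one-Q-value-per-node-intervention output head are assumptions, not the GATTACA design itself.

```python
import torch
import torch.nn as nn

# Hedged sketch: a Q-network whose hidden layers are graph convolutions
# (h <- relu(A_hat @ h @ W)), so the gene-network wiring shapes the features.
class GraphQNet(nn.Module):
    def __init__(self, n_nodes, hidden=32):
        super().__init__()
        self.w1 = nn.Linear(1, hidden)                   # per-node input: its 0/1 state
        self.w2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(n_nodes * hidden, n_nodes)  # one Q-value per node flip

    def forward(self, x, a_hat):
        # x: (n_nodes, 1) node states; a_hat: normalized adjacency, (n, n)
        h = torch.relu(a_hat @ self.w1(x))               # first graph convolution
        h = torch.relu(a_hat @ self.w2(h))               # second graph convolution
        return self.out(h.flatten())                     # Q(s, a) per intervention

n = 4
a_hat = torch.eye(n)                                     # placeholder wiring
q = GraphQNet(n)(torch.randint(0, 2, (n, 1)).float(), a_hat)
print(q.shape)                                           # torch.Size([4])
```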
arXiv Detail & Related papers (2025-05-05T15:07:20Z)
- Self-Evaluation for Job-Shop Scheduling [1.3927943269211593]
Combinatorial optimization problems, such as scheduling and route planning, are crucial in various industries but are computationally intractable due to their NP-hard nature.
We propose a novel framework that generates and evaluates subsets of assignments, moving beyond traditional stepwise approaches.
arXiv Detail & Related papers (2025-02-12T11:22:33Z)
- Iterative Deepening Sampling as Efficient Test-Time Scaling [27.807695570974644]
Recent reasoning models, such as OpenAI's O1 series, have demonstrated exceptional performance on complex reasoning tasks. We propose a novel iterative deepening sampling framework designed to enhance self-correction and generate higher-quality samples.
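A hedged sketch of the iterative-deepening idea, under the assumption that "deepening" means retrying with progressively larger sampling budgets until a self-check passes; `generate` and `self_check` are hypothetical stand-ins, not the paper's interface.

```python
import random

def iterative_deepening_sample(generate, self_check, budgets=(1, 4, 16)):
    """Draw small batches first; deepen only when no sample passes the check."""
    for budget in budgets:
        candidates = [generate() for _ in range(budget)]
        passing = [c for c in candidates if self_check(c)]
        if passing:                 # stop at the shallowest budget that succeeds
            return passing[0]
    return None                     # budget exhausted without a verified sample

# Toy usage: keep sampling until we happen to draw a 7.
print(iterative_deepening_sample(lambda: random.randint(0, 9), lambda c: c == 7))
```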
arXiv Detail & Related papers (2025-02-08T04:39:51Z)
- Temporal-Difference Variational Continual Learning [89.32940051152782]
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations. Our approach effectively mitigates Catastrophic Forgetting, outperforming strong Variational CL methods.
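Taken at face value, "integrating the regularization effects of multiple previous posterior estimations" suggests a variational objective of roughly the following shape, with weights \lambda_i over the last k posteriors; this is a hedged reconstruction, not the paper's exact formula.

```latex
\mathcal{L}_t(q_t) \;=\;
\mathbb{E}_{q_t(\theta)}\!\left[\log p(\mathcal{D}_t \mid \theta)\right]
\;-\; \sum_{i=1}^{k} \lambda_i \,
\mathrm{KL}\!\left(q_t(\theta) \,\middle\|\, q_{t-i}(\theta)\right)
```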
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) are open to novel perspectives.
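As a hedged illustration of learning "over time" via differential equations (not the paper's actual system), note that plain gradient-based learning already admits a continuous-time form:

```latex
\dot{w}(t) \;=\; -\,\eta \, \nabla_{w}\, \ell\big(w(t), x(t)\big)
```

whose forward-Euler discretization recovers standard gradient descent, w_{k+1} = w_k - \eta \nabla_w \ell(w_k, x_k); the framework above generalizes this picture rather than calling an external ODE solver.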
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Model-Based Reinforcement Learning Control of Reaction-Diffusion Problems [0.0]
Reinforcement learning has been applied to decision-making in several applications, most notably in games.
We introduce two novel reward functions to drive the flow of the transported field.
Results show that certain controls can be implemented successfully in these applications.
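The summary does not spell out the two reward functions; below is a minimal sketch of one plausible shape, rewarding the mass of the transported field inside a target region. The weighting and the target mask are assumptions, not the paper's rewards.

```python
import numpy as np

def region_reward(field, target_mask, leak_penalty=0.1):
    """Reward field mass inside the target region, penalize mass outside
    (an illustrative reward shape, not the paper's)."""
    inside = field[target_mask].sum()
    outside = field[~target_mask].sum()
    return float(inside - leak_penalty * outside)

field = np.random.rand(32, 32)            # discretized scalar field u(x, y)
mask = np.zeros((32, 32), dtype=bool)
mask[12:20, 12:20] = True                 # where we want the field transported
print(region_reward(field, mask))
```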
arXiv Detail & Related papers (2024-02-22T11:06:07Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Accelerated Policy Learning with Parallel Differentiable Simulation [59.665651562534755]
We present a differentiable simulator and a new policy learning algorithm (SHAC).
Our algorithm alleviates problems with local minima through a smooth critic function.
We show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms.
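A hedged sketch of the core mechanic: backpropagating a short-horizon return through a differentiable dynamics step. The toy linear dynamics stand in for the differentiable simulator, and the smooth critic that SHAC adds on top is omitted.

```python
import torch

theta = torch.zeros(2, requires_grad=True)        # linear policy parameters
opt = torch.optim.Adam([theta], lr=1e-2)

for _ in range(100):
    s, ret = torch.tensor([1.0, 0.0]), 0.0
    for _t in range(10):                           # short rollout horizon
        a = theta @ s                              # differentiable policy
        s = s + 0.1 * torch.stack([s[1], a])       # differentiable dynamics step
        ret = ret - s @ s                          # negative quadratic cost
    opt.zero_grad()
    (-ret).backward()                              # gradient flows through the rollout
    opt.step()
print(theta.detach())
```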
arXiv Detail & Related papers (2022-04-14T17:46:26Z)
- Learning Algorithms for Regenerative Stopping Problems with Applications to Shipping Consolidation in Logistics [8.111251824291244]
We study regenerative stopping problems in which the system starts anew whenever the controller decides to stop and the long-term average cost is to be minimized.
Traditional model-based solutions involve estimating the underlying process from data and computing strategies for the estimated model.
We compare such solutions to deep reinforcement learning and imitation learning, which involve learning a neural network policy from simulations.
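For orientation, the long-term average cost in such regenerative problems decomposes per cycle via the renewal-reward theorem (a standard identity, sketched here rather than quoted from the paper):

```latex
\lim_{T \to \infty} \frac{1}{T}\, \mathbb{E}\!\left[\int_0^T c(s_t)\, dt\right]
\;=\; \frac{\mathbb{E}[\text{cost per cycle}]}{\mathbb{E}[\text{cycle length}]}
```

so a stopping policy is judged by the cost and length of the cycles it induces.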
arXiv Detail & Related papers (2021-05-05T20:45:46Z)
- A Regret Minimization Approach to Iterative Learning Control [61.37088759497583]
We propose a new performance metric, planning regret, which replaces the standard uncertainty assumptions with worst case regret.
We provide theoretical and empirical evidence that the proposed algorithm outperforms existing methods on several benchmarks.
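The summary leaves planning regret undefined; on the usual pattern of regret measures, one would expect a worst-case comparison against the best plan in hindsight, roughly of the form below (a hedged guess, not the paper's definition):

```latex
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} c_t(x_t, u_t)
\;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\big(x_t^{\pi}, u_t^{\pi}\big)
```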
arXiv Detail & Related papers (2021-02-26T13:48:49Z)
- Continuous-Time Model-Based Reinforcement Learning [4.427447378048202]
We propose a continuous-time MBRL framework based on a novel actor-critic method.
We implement and test our method on a new ODE-RL suite that explicitly solves continuous-time control systems.
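A minimal sketch of the continuous-time ingredient: integrating a dynamics function between decisions with a classical Runge-Kutta (RK4) step. Here `f` is a toy placeholder; in the paper the dynamics model would be learned.

```python
import numpy as np

def f(s, a):
    """Toy double-integrator dynamics ds/dt = f(s, a); a learned model in MBRL."""
    return np.array([s[1], a - 0.1 * s[1]])

def rk4_step(s, a, dt=0.05):
    k1 = f(s, a)
    k2 = f(s + dt / 2 * k1, a)
    k3 = f(s + dt / 2 * k2, a)
    k4 = f(s + dt * k3, a)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([0.0, 0.0])
for _ in range(20):                  # hold one action between decision points
    s = rk4_step(s, a=1.0)
print(s)
```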
arXiv Detail & Related papers (2021-02-09T11:30:19Z)
- Deep RL With Information Constrained Policies: Generalization in Continuous Control [21.46148507577606]
We show that a natural constraint on information flow may confer generalization benefits onto artificial agents in continuous control tasks.
We implement a novel Capacity-Limited Actor-Critic (CLAC) algorithm.
Our experiments show that compared to alternative approaches, CLAC offers improvements in generalization between training and modified test environments.
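The summary does not give CLAC's objective; a common surrogate for limiting the information a policy carries about the state is to penalize the KL divergence from a state-independent marginal, sketched below as an assumption about the general form (not necessarily CLAC's exact loss).

```python
import torch

def capacity_limited_loss(logits, q_values, marginal_logits, beta=0.1):
    """Actor loss with a mutual-information-style capacity penalty (a sketch)."""
    pi = torch.softmax(logits, dim=-1)                  # pi(a | s), batch of states
    log_pi = torch.log_softmax(logits, dim=-1)
    log_m = torch.log_softmax(marginal_logits, dim=-1)  # state-independent marginal
    kl = (pi * (log_pi - log_m)).sum(-1)                # KL(pi(.|s) || m) per state
    return -(pi * q_values).sum(-1).mean() + beta * kl.mean()

loss = capacity_limited_loss(torch.randn(8, 4), torch.randn(8, 4), torch.zeros(4))
print(loss.item())
```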
arXiv Detail & Related papers (2020-10-09T15:42:21Z)
- Managing caching strategies for stream reasoning with reinforcement learning [18.998260813058305]
Stream reasoning allows efficient decision-making over continuously changing data.
We suggest a novel approach that uses Conflict-Driven Constraint Learning (CDCL) to efficiently update legacy solutions.
In particular, we study the applicability of reinforcement learning to continuously assess the utility of learned constraints.
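A hedged sketch of what continuously assessing constraint utility could look like, using a bandit-style running estimate; the update rule and eviction threshold are assumptions, not the paper's method.

```python
def update_utility(utility, was_useful, alpha=0.1):
    """Exponential moving average of how often a learned constraint helps."""
    return (1 - alpha) * utility + alpha * (1.0 if was_useful else 0.0)

cache = {"c1": 0.5, "c2": 0.5}                       # learned constraints and utilities
for constraint, useful in [("c1", True), ("c2", False), ("c1", True)]:
    cache[constraint] = update_utility(cache[constraint], useful)
cache = {c: u for c, u in cache.items() if u > 0.2}  # evict low-utility constraints
print(cache)
```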
arXiv Detail & Related papers (2020-08-07T15:01:41Z)
- Control as Hybrid Inference [62.997667081978825]
We present an implementation of control as hybrid inference (CHI), which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
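The nested system can be sketched as a pair of coupled flows, with the parameterization left abstract; keeping b_\psi(t) skew-symmetric is what holds W(t) on the orthogonal group:

```latex
\dot{x}(t) = f\big(W(t)\,x(t)\big), \qquad
\dot{W}(t) = W(t)\, b_{\psi}(t), \qquad W(t) \in O(d)
```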
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.