Double-Ended Synthesis Planning with Goal-Constrained Bidirectional Search
- URL: http://arxiv.org/abs/2407.06334v2
- Date: Fri, 1 Nov 2024 16:45:48 GMT
- Title: Double-Ended Synthesis Planning with Goal-Constrained Bidirectional Search
- Authors: Kevin Yu, Jihye Roh, Ziang Li, Wenhao Gao, Runzhong Wang, Connor W. Coley
- Abstract summary: We present a formulation of synthesis planning with starting material constraints.
We propose Double-Ended Synthesis Planning (DESP), a novel CASP algorithm under a bidirectional graph search scheme.
DESP can make use of existing one-step retrosynthesis models, and we anticipate its performance to scale as these one-step model capabilities improve.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer-aided synthesis planning (CASP) algorithms have demonstrated expert-level abilities in planning retrosynthetic routes to molecules of low to moderate complexity. However, current search methods assume the sufficiency of reaching arbitrary building blocks, failing to address the common real-world constraint where using specific molecules is desired. To this end, we present a formulation of synthesis planning with starting material constraints. Under this formulation, we propose Double-Ended Synthesis Planning (DESP), a novel CASP algorithm under a bidirectional graph search scheme that interleaves expansions from the target and from the goal starting materials to ensure constraint satisfiability. The search algorithm is guided by a goal-conditioned cost network learned offline from a partially observed hypergraph of valid chemical reactions. We demonstrate the utility of DESP in improving solve rates and reducing the number of search expansions by biasing synthesis planning towards expert goals on multiple new benchmarks. DESP can make use of existing one-step retrosynthesis models, and we anticipate its performance to scale as these one-step model capabilities improve.
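The paper's actual algorithm operates on reaction hypergraphs with a learned goal-conditioned cost network; as a minimal, hypothetical illustration of the underlying idea only (interleaving expansions from the target and from the goal starting material until the two frontiers meet), a toy bidirectional BFS on an ordinary graph might look like this:

```python
from collections import deque

def bidirectional_search(graph, target, start_material):
    """Toy bidirectional BFS: expand from the target (retro direction)
    and from the goal starting material (forward direction) in turns,
    stopping when the two frontiers meet. `graph` maps a node to its
    neighbors (treated as undirected for simplicity)."""
    if target == start_material:
        return [target]
    fronts = [deque([target]), deque([start_material])]
    parents = [{target: None}, {start_material: None}]
    while fronts[0] and fronts[1]:
        for side in (0, 1):                      # interleave expansions
            node = fronts[side].popleft()
            for nxt in graph.get(node, ()):
                if nxt in parents[side]:
                    continue
                parents[side][nxt] = node
                if nxt in parents[1 - side]:     # frontiers met
                    return _join(parents, nxt)
                fronts[side].append(nxt)
    return None                                   # constraint unsatisfiable

def _join(parents, meet):
    # walk back to the target, then forward to the starting material
    path, n = [], meet
    while n is not None:
        path.append(n)
        n = parents[0][n]
    path.reverse()                                # target ... meet
    n = parents[1][meet]
    while n is not None:
        path.append(n)
        n = parents[1][n]
    return path                                   # target -> ... -> start material
```

This sketch omits everything that makes DESP work in practice (one-step retrosynthesis models, hypergraph AND-OR structure, and the learned cost guidance); it only shows why meeting in the middle guarantees the route ends at the designated starting material.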
Related papers
- Directed Exploration in Reinforcement Learning from Linear Temporal Logic [59.707408697394534]
Linear temporal logic (LTL) is a powerful language for task specification in reinforcement learning.
We show that the synthesized reward signal remains fundamentally sparse, making exploration challenging.
We show how better exploration can be achieved by further leveraging the specification and casting its corresponding Limit Deterministic Büchi Automaton (LDBA) as a Markov reward process.
arXiv Detail & Related papers (2024-08-18T14:25:44Z) - Re-evaluating Retrosynthesis Algorithms with Syntheseus [13.384695742156152]
We present a synthesis planning library with an extensive benchmarking framework, called syntheseus.
We demonstrate the capabilities of syntheseus by re-evaluating several previous retrosynthesis algorithms.
We end with guidance for future works in this area, and call the community to engage in the discussion on how to improve benchmarks for synthesis planning.
arXiv Detail & Related papers (2023-10-30T17:59:04Z) - Models Matter: The Impact of Single-Step Retrosynthesis on Synthesis Planning [0.8620335948752805]
Retrosynthesis consists of breaking down a chemical compound step-by-step into molecular precursors.
Its two primary research directions, single-step retrosynthesis prediction and multi-step synthesis planning, are inherently intertwined.
We show that the choice of the single-step model can improve the overall success rate of synthesis planning by up to +28%.
arXiv Detail & Related papers (2023-08-10T12:04:47Z) - Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z) - Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration [87.53543137162488]
We propose an easy-to-implement online reinforcement learning (online RL) framework called MEX.
MEX integrates estimation and planning components while balancing exploration and exploitation automatically.
It can outperform baselines by a stable margin in various MuJoCo environments with sparse rewards.
arXiv Detail & Related papers (2023-05-29T17:25:26Z) - Forward LTLf Synthesis: DPLL At Work [1.370633147306388]
We propose a new AND-OR graph search framework for the synthesis of Linear Temporal Logic on finite traces (LTLf).
Within such framework, we devise a procedure inspired by the Davis-Putnam-Logemann-Loveland (DPLL) algorithm to generate the next available agent-environment moves.
We also propose a novel equivalence check for search nodes based on syntactic equivalence of state formulas.
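The paper's procedure operates over LTLf state formulas and agent-environment moves; purely as a generic reminder of the DPLL pattern it draws on (unit propagation plus branching), here is a minimal, hypothetical DPLL SAT solver on CNF clauses, not the paper's synthesis procedure:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL on CNF. Clauses are frozensets of int literals
    (positive = variable true, negative = false). Returns a satisfying
    assignment dict, or None if the formula is unsatisfiable."""
    if assignment is None:
        assignment = {}
    clauses = list(clauses)
    changed = True
    while changed:                              # unit propagation
        changed = False
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        for lit in units:
            assignment[abs(lit)] = lit > 0
            new = []
            for c in clauses:
                if lit in c:
                    continue                    # clause satisfied
                if -lit in c:
                    c = c - {-lit}              # literal falsified
                    if not c:
                        return None             # empty clause: conflict
                new.append(c)
            clauses = new
            changed = True
            break
    if not clauses:
        return assignment                       # all clauses satisfied
    lit = next(iter(clauses[0]))                # branch: the "split" rule
    for choice in (lit, -lit):
        res = dpll(clauses + [frozenset([choice])], dict(assignment))
        if res is not None:
            return res
    return None
```

In the paper, the analogue of the split rule chooses the next available agent-environment moves rather than a Boolean literal, but the propagate-then-branch skeleton is the same.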
arXiv Detail & Related papers (2023-02-27T14:33:50Z) - Semantic Probabilistic Layers for Neuro-Symbolic Learning [83.25785999205932]
We design a predictive layer for structured-output prediction (SOP).
It can be plugged into any neural network, guaranteeing that its predictions are consistent with a set of predefined symbolic constraints.
Our Semantic Probabilistic Layer (SPL) can model intricate correlations and hard constraints over a structured output space.
arXiv Detail & Related papers (2022-06-01T12:02:38Z) - Amortized Tree Generation for Bottom-up Synthesis Planning and Synthesizable Molecular Design [2.17167311150369]
We report an amortized approach to generate synthetic pathways as a Markov decision process conditioned on a target molecular embedding.
This approach allows us to conduct synthesis planning in a bottom-up manner and design synthesizable molecules by decoding from optimized conditional codes.
arXiv Detail & Related papers (2021-10-12T22:43:25Z) - Quantum Embedding Search for Quantum Machine Learning [2.7612093695074456]
We introduce a novel quantum embedding search algorithm (QES), pronounced "quest".
We establish the connection between the structures of quantum embedding and the representations of directed multi-graphs, enabling a well-defined search space.
We demonstrate the feasibility of our proposed approach on synthetic and Iris datasets, empirically showing that the quantum embedding architectures found by QES outperform manual designs.
arXiv Detail & Related papers (2021-05-25T11:50:57Z) - BUSTLE: Bottom-Up Program Synthesis Through Learning-Guided Exploration [72.88493072196094]
We present a new synthesis approach that leverages learning to guide a bottom-up search over programs.
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a set of input-output examples.
We show that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches.
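BUSTLE's key ingredient is a learned model that prioritizes intermediate values; as a hypothetical baseline sketch of the bottom-up enumeration it builds on (with the learned prioritization omitted and plain size-ordered search in its place), consider synthesizing small arithmetic expressions from input-output examples:

```python
import itertools

def bottom_up_synthesize(inputs, outputs, max_size=4):
    """Plain bottom-up enumerative synthesis over +, -, * expressions
    in one variable x. `inputs` holds one input value per example and
    `outputs` the desired results. BUSTLE additionally scores the
    intermediate value tuples with a learned model; here we simply
    enumerate expressions in order of size."""
    # map each tuple of values (one per example) to a smallest expression
    by_size = {1: {tuple(inputs): "x", tuple([1] * len(inputs)): "1"}}
    target = tuple(outputs)
    if target in by_size[1]:
        return by_size[1][target]
    ops = {"+": lambda a, b: a + b,
           "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}
    for size in range(2, max_size + 1):
        by_size[size] = {}
        for ls in range(1, size):               # split size between subtrees
            rs = size - ls
            for (lv, le), (rv, re_) in itertools.product(
                    by_size[ls].items(), by_size[rs].items()):
                for sym, fn in ops.items():
                    vals = tuple(fn(a, b) for a, b in zip(lv, rv))
                    expr = f"({le} {sym} {re_})"
                    if vals == target:
                        return expr
                    # keep only one (smallest) expression per value tuple
                    if all(vals not in by_size[s] for s in by_size):
                        by_size[size][vals] = expr
    return None                                  # not found within the budget
```

Deduplicating by observed value tuple (observational equivalence) is what keeps bottom-up search tractable; the learned model in the paper then decides which of the surviving values to compose first.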
arXiv Detail & Related papers (2020-07-28T17:46:18Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of APR within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.