Efficient Transition State Searches by Freezing String Method with Graph Neural Network Potentials
- URL: http://arxiv.org/abs/2501.06159v2
- Date: Fri, 19 Sep 2025 18:42:33 GMT
- Title: Efficient Transition State Searches by Freezing String Method with Graph Neural Network Potentials
- Authors: Jonah Marks, Joseph Gomes
- Abstract summary: We develop and fine-tune a graph neural network (GNN) PES to accelerate transition state searches for organic reactions. Our model achieves a 100% success rate, locating the reference TSs in all cases. Fine-tuning reduces GNN-FT errors by orders of magnitude for out-of-distribution cases.
- Score: 0.10742675209112619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transition state (TS) searches are a critical bottleneck in computational studies of chemical reactivity, as accurately capturing complex phenomena like bond breaking and formation events requires repeated evaluations of expensive ab-initio potential energy surfaces (PESs). While numerous algorithms have been developed to locate TSs efficiently, the computational cost of PES evaluations remains a key limitation. In this work, we develop and fine-tune a graph neural network (GNN) PES to accelerate TS searches for organic reactions. Our GNN of choice, SchNet, is first pre-trained on the ANI-1 dataset and subsequently fine-tuned on a small dataset of reactant, product, and TS structures. We integrate this GNN PES into the Freezing String Method (FSM), enabling rapid generation of TS guess geometries. Across a benchmark suite of chemically diverse reactions, our fine-tuned model (GNN-FT) achieves a 100% success rate, locating the reference TSs in all cases while reducing the number of ab-initio calculations by 72% on average compared to conventional DFT-based FSM searches. Fine-tuning reduces GNN-FT errors by orders of magnitude for out-of-distribution cases such as non-covalent interactions, and improves TS-region predictions with comparatively little data. Analysis of transition state geometries and energy errors shows that GNN-FT captures PES along the reaction coordinate with sufficient accuracy to serve as a reliable DFT surrogate. These results demonstrate that modern GNN potentials, when properly trained, can significantly reduce the cost of TS searches and broaden the scope and size of systems considered in chemical reactivity studies.
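For intuition, here is a minimal sketch of how a surrogate PES plugs into a freezing-string search. It is not the authors' implementation: `gnn_energy_and_grad` is a hypothetical callable standing in for the fine-tuned SchNet model, the interpolation is plain Cartesian rather than the LST or internal-coordinate interpolation used by production FSM codes, and the node relaxation is a few bare gradient steps instead of conjugate-gradient optimization.

```python
import numpy as np

def fsm_string(reactant, product, gnn_energy_and_grad,
               n_nodes=10, relax_steps=3, step_size=0.05):
    """Grow a string of geometries inward from both ends, relaxing each
    new node on the cheap GNN surface perpendicular to the path.
    Hypothetical sketch: gnn_energy_and_grad(x) -> (E, dE/dx)."""
    left = [np.asarray(reactant, dtype=float)]
    right = [np.asarray(product, dtype=float)]
    while len(left) + len(right) < n_nodes:
        frac = 1.0 / (n_nodes - len(left) - len(right) + 1)
        gap = right[-1] - left[-1]
        tau = gap / np.linalg.norm(gap)        # local path tangent
        for new, ends in ((left[-1] + frac * gap, left),
                          (right[-1] - frac * gap, right)):
            node = new.copy()
            for _ in range(relax_steps):       # frozen ends, free node
                _, g = gnn_energy_and_grad(node)
                g_perp = g - np.dot(g.ravel(), tau.ravel()) * tau
                node -= step_size * g_perp     # step only perpendicular
            ends.append(node)
    path = left + right[::-1]
    energies = [gnn_energy_and_grad(x)[0] for x in path]
    return path[int(np.argmax(energies))]      # highest node = TS guess
```

The highest-energy node of the finished string is the TS guess handed off to an exact saddle-point refinement; because string growth runs on the cheap surrogate, most of the ab-initio gradient calls of a conventional DFT-driven FSM are avoided, consistent with the 72% reduction reported above.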
Related papers
- Data-Driven Deep MIMO Detection: Network Architectures and Generalization Analysis [50.20709408241935]
This paper proposes interpreting fully data-driven DeepSIC detection as a network-of-MLPs architecture. Within such an architecture, DeepSIC can be upgraded to a graph-based message-passing process using graph neural networks (GNNs). The resulting GNNSIC achieves expressivity comparable to DeepSIC with substantially fewer trainable parameters.
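The key move, recasting iterative interference cancellation as message passing on a graph, can be illustrated with a generic mean-aggregation layer. This is a hypothetical sketch of message passing in general, not the GNNSIC detector:

```python
import numpy as np

def message_passing_step(h, adj, w_msg, w_upd):
    """One generic message-passing step: every node aggregates its
    neighbors' transformed features (mean over the adjacency) and
    updates its own state. h: (nodes, d); adj: binary (nodes, nodes)."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    messages = (adj @ (h @ w_msg)) / deg      # mean neighbor message
    return np.tanh(h @ w_upd + messages)      # updated node states
```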
arXiv Detail & Related papers (2026-02-13T04:38:51Z) - Renormalization Group Guided Tensor Network Structure Search [58.0378300612202]
Tensor network structure search (TN-SS) aims to automatically discover optimal network topologies and ranks for efficient tensor decomposition in high-dimensional data representation. We propose RGTN (Renormalization Group guided Tensor Network search), a physics-inspired framework that transforms TN-SS via multi-scale renormalization group flows.
arXiv Detail & Related papers (2025-12-31T06:31:43Z) - Generating transition states of chemical reactions via distance-geometry-based flow matching [45.432950944168205]
We propose TS-DFM, a flow matching framework that predicts transition states from reactants and products. On the benchmark dataset Transition1x, TS-DFM outperforms the previous state-of-the-art method React-OT by 30% in structural accuracy.
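For orientation, the generic conditional flow-matching objective that frameworks like TS-DFM build on looks as follows in plain PyTorch; `v_theta` is a hypothetical velocity-field network, and the paper's distance-geometry parametrization of the geometries is deliberately omitted:

```python
import torch

def flow_matching_loss(v_theta, x0, x1):
    """Conditional flow matching with a straight-line path: sample a
    time, form the interpolant, and regress the model's velocity onto
    the path's constant velocity (x1 - x0). Generic sketch only."""
    t = torch.rand(x0.shape[0], 1)        # one time per batch element
    xt = (1 - t) * x0 + t * x1            # point on the x0 -> x1 path
    target = x1 - x0                      # d(xt)/dt along that path
    return ((v_theta(xt, t) - target) ** 2).mean()
```

At sampling time one integrates the learned velocity field from t=0 to t=1 to produce the predicted structure.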
arXiv Detail & Related papers (2025-11-21T13:15:25Z) - Quantization Meets Spikes: Lossless Conversion in the First Timestep via Polarity Multi-Spike Mapping [8.32624081553367]
Spiking neural networks (SNNs) offer advantages in computational efficiency via event-driven computing. Traditional artificial neural networks (ANNs) often suffer from high computational and energy costs during training. The ANN-to-SNN conversion approach therefore remains a valuable and practical alternative.
arXiv Detail & Related papers (2025-08-20T08:30:30Z) - Accurate Ab-initio Neural-network Solutions to Large-Scale Electronic Structure Problems [52.19558333652367]
We present finite-range embeddings (FiRE) for accurate large-scale ab-initio electronic structure calculations.
FiRE reduces the complexity of neural-network variational Monte Carlo (NN-VMC) by a factor of $\sim n_\text{el}$, the number of electrons.
We validate our method's accuracy on various challenging systems, including biochemical compounds and organometallic compounds.
arXiv Detail & Related papers (2025-04-08T14:28:54Z) - Thermodynamic Bound on Energy and Negentropy Costs of Inference in Deep Neural Networks [0.0]
The fundamental thermodynamic bound is derived for the energy cost of inference in Deep Neural Networks (DNNs).
We show that the linear operations in DNNs can, in principle, be performed reversibly, whereas the non-linear activation functions impose an unavoidable energy cost.
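The unavoidable cost referenced here is presumably Landauer's bound (our assumption; the summary does not name it): erasing one bit of information dissipates at least

$E_{\text{erase}} \ge k_B T \ln 2$,

so a many-to-one activation such as ReLU, which destroys information about its input, carries an irreducible thermodynamic price, while an invertible linear map can in principle be computed reversibly.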
arXiv Detail & Related papers (2025-03-13T02:35:07Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - Neural Network Emulator for Atmospheric Chemical ODE [6.84242299603086]
We propose a Neural Network Emulator for fast chemical concentration modeling.
To extract the hidden correlations between initial states and future time evolution, we propose ChemNNE.
Our approach achieves state-of-the-art performance in modeling accuracy and computational speed.
arXiv Detail & Related papers (2024-08-03T17:43:10Z) - Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations [58.130170155147205]
Neural wave functions have achieved unprecedented accuracy in approximating the ground state of many-electron systems, though at a high computational cost.
Recent works proposed amortizing the cost by learning generalized wave functions across different structures and compounds instead of solving each problem independently.
This work tackles the problem by defining overparametrized, fully learnable neural wave functions suitable for generalization across molecules.
arXiv Detail & Related papers (2024-05-23T16:30:51Z) - Accurate Computation of Quantum Excited States with Neural Networks [4.99320937849508]
We present a variational Monte Carlo algorithm for estimating the lowest excited states of a quantum system.
Our method is the first deep learning approach to achieve accurate vertical excitation energies on benzene-scale molecules.
We expect this technique will be of great interest for applications to atomic, nuclear and condensed matter physics.
arXiv Detail & Related papers (2023-08-31T16:27:08Z) - Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Energy Transformer [64.22957136952725]
Our work combines aspects of three promising paradigms in machine learning: the attention mechanism, energy-based models, and associative memory.
We propose a novel architecture, called the Energy Transformer (or ET for short), that uses a sequence of attention layers that are purposely designed to minimize a specifically engineered energy function.
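The defining computation, token states relaxing toward a minimum of an engineered energy, can be sketched generically; `energy` here is a hypothetical scalar function standing in for ET's attention-plus-Hopfield energy, which is not reproduced:

```python
import torch

def energy_descent(tokens, energy, n_steps=12, lr=0.1):
    """Iterated gradient descent of token states on a scalar energy,
    the core recurrence of energy-based transformer blocks (generic
    sketch, not the published ET update rule)."""
    x = tokens.detach().clone().requires_grad_(True)
    for _ in range(n_steps):
        (g,) = torch.autograd.grad(energy(x), x)
        x = (x - lr * g).detach().requires_grad_(True)
    return x.detach()
```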
arXiv Detail & Related papers (2023-02-14T18:51:22Z) - Graph Neural Networks with Trainable Adjacency Matrices for Fault Diagnosis on Multivariate Sensor Data [69.25738064847175]
It is necessary to consider the behavior of the signals from each sensor separately while taking into account their correlations and hidden relationships with one another. The graph nodes can represent data from the different sensors, and the edges can represent the influence of these data on each other. We propose constructing the graph during training of the graph neural network, which allows models to be trained on data where the dependencies between sensors are not known in advance.
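A minimal, hypothetical rendering of the trainable-adjacency idea in PyTorch (names and layer sizes are ours, not the paper's): the sensor graph is a parameter matrix normalized on every forward pass, so it is learned jointly with the rest of the network.

```python
import torch
import torch.nn as nn

class TrainableAdjGNN(nn.Module):
    """Graph layer whose adjacency is itself a learned parameter,
    so sensor dependencies need not be known in advance."""
    def __init__(self, n_sensors, d_in, d_out):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.zeros(n_sensors, n_sensors))
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x):                     # x: (batch, sensors, d_in)
        adj = torch.softmax(self.adj_logits, dim=-1)  # learned graph
        return torch.relu(adj @ self.lin(x))  # propagate over the graph
```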
arXiv Detail & Related papers (2022-10-20T11:03:21Z) - Learned Force Fields Are Ready For Ground State Catalyst Discovery [60.41853574951094]
We present evidence that learned density functional theory (DFT) force fields are ready for ground state catalyst discovery.
The key finding is that relaxation using forces from a learned potential yields structures with energy similar to or lower than those relaxed using the RPBE functional in over 50% of evaluated systems.
We show that a force field trained on a locally harmonic energy surface with the same minima as a target DFT energy is also able to find lower or similar energy structures in over 50% of cases.
arXiv Detail & Related papers (2022-09-26T07:16:43Z) - NeuralNEB -- Neural Networks can find Reaction Paths Fast [7.7365628406567675]
Quantum mechanical methods like Density Functional Theory (DFT) are used with great success alongside efficient search algorithms for studying kinetics of reactive systems.
Machine Learning (ML) models have turned out to be excellent emulators of small molecule DFT calculations and could possibly replace DFT in such tasks.
In this paper we train state-of-the-art equivariant graph neural network (GNN)-based models on around 10,000 elementary reactions from the Transition1x dataset.
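For reference, the quantity the GNN must reproduce inside a nudged-elastic-band search is the band force on each interior image; a standard simplified form (central-difference tangent, plain spring term) is sketched below, with `grad` standing in for the trained model's gradient:

```python
import numpy as np

def neb_force(prev_img, img, next_img, grad, k=0.1):
    """Nudged-elastic-band force on one interior image: project out
    the PES force along the path tangent, then add a spring term along
    the tangent to keep images evenly spaced. Simplified sketch."""
    tau = next_img - prev_img
    tau /= np.linalg.norm(tau)                 # path tangent estimate
    g = grad(img)                              # PES gradient at image
    f_perp = -(g - np.dot(g.ravel(), tau.ravel()) * tau)
    spring = k * (np.linalg.norm(next_img - img)
                  - np.linalg.norm(img - prev_img)) * tau
    return f_perp + spring
```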
arXiv Detail & Related papers (2022-07-20T15:29:45Z) - Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z) - Quantum activation functions for quantum neural networks [0.0]
We show how to approximate any analytic function to any required accuracy without the need to measure the states encoding the information.
Our results recast the science of artificial neural networks in the architecture of gate-model quantum computers.
arXiv Detail & Related papers (2022-01-10T23:55:49Z) - Gaussian Moments as Physically Inspired Molecular Descriptors for Accurate and Scalable Machine Learning Potentials [0.0]
We propose a machine learning method for constructing high-dimensional potential energy surfaces based on feed-forward neural networks.
The accuracy of the developed approach in representing both chemical and configurational spaces is comparable to that of several established machine learning models.
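The architecture family described here maps a per-atom descriptor vector to an atomic energy with a small feed-forward network and sums the contributions; a skeletal version (descriptor computation omitted, names and sizes hypothetical):

```python
import torch
import torch.nn as nn

class AtomwisePotential(nn.Module):
    """High-dimensional NN potential: each atom's descriptor is mapped
    to an atomic energy, and the total energy is their sum. Generic
    sketch; the Gaussian-moment descriptors themselves are omitted."""
    def __init__(self, d_desc, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_desc, hidden), nn.SiLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, descriptors):           # (n_atoms, d_desc)
        return self.net(descriptors).sum()    # total energy (scalar)
```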
arXiv Detail & Related papers (2021-09-15T16:46:46Z) - Supervised Learning and the Finite-Temperature String Method for Computing Committor Functions and Reaction Rates [0.0]
A central object in computational studies of rare events is the committor function. We show that additional modifications are needed to improve the accuracy of the algorithm.
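For context, the committor $q(x)$ is the probability that a trajectory started at $x$ reaches the product basin $B$ before the reactant basin $A$; it solves the backward Kolmogorov equation

$\mathcal{L}\,q = 0$ outside $A \cup B$, with $q|_{\partial A} = 0$ and $q|_{\partial B} = 1$,

where $\mathcal{L}$ is the generator of the dynamics. Supervised-learning approaches like the one above fit a neural network to $q$ subject to these boundary conditions.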
arXiv Detail & Related papers (2021-07-28T17:44:00Z) - A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that adds some degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z) - Does the brain function as a quantum phase computer using phase ternary computation? [0.0]
We provide evidence that the fundamental basis of nervous communication is derived from a pressure pulse/soliton capable of computation.
We demonstrate that the contemporary theory of nerve conduction based on cable theory cannot account for the short computational times required.
Deconstruction of the brain neural network suggests that it is a member of a group of Quantum phase computers of which the Turing machine is the simplest.
arXiv Detail & Related papers (2020-12-04T08:00:23Z)