Implicit Neural Representations for Chemical Reaction Paths
- URL: http://arxiv.org/abs/2502.15843v1
- Date: Thu, 20 Feb 2025 19:44:21 GMT
- Title: Implicit Neural Representations for Chemical Reaction Paths
- Authors: Kalyan Ramakrishnan, Lars L. Schaaf, Chen Lin, Guangrun Wang, Philip Torr
- Abstract summary: We show that neural networks can be optimized to represent minimum energy paths as continuous functions. In a low-dimensional setting, we demonstrate that a single neural network can learn from existing paths and generalize to unseen systems.
- Score: 20.776507172563697
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We show that neural networks can be optimized to represent minimum energy paths as continuous functions, offering a flexible alternative to discrete path-search methods like Nudged Elastic Band (NEB). Our approach parameterizes reaction paths with a network trained on a loss function that discards tangential energy gradients and enables instant estimation of the transition state. We first validate the method on two-dimensional potentials and then demonstrate its advantages over NEB on challenging atomistic systems where (i) poor initial guesses yield unphysical paths, (ii) multiple competing paths exist, or (iii) the reaction follows a complex multi-step mechanism. Results highlight the versatility of the method -- for instance, a simple adjustment to the sampling strategy during optimization can help escape local-minimum solutions. Finally, in a low-dimensional setting, we demonstrate that a single neural network can learn from existing paths and generalize to unseen systems, showing promise for a universal reaction path representation.
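To make the recipe concrete, here is a minimal PyTorch sketch of the idea in the abstract (a hedged illustration, not the authors' implementation): a small network maps s in [0, 1] to configuration space with the endpoints pinned to reactant and product, and the loss drives to zero only the component of the energy gradient perpendicular to the path tangent, the tangential component being discarded. The double-well potential, architecture, and hyperparameters are all illustrative assumptions.
```python
import torch

def potential(x):
    # Toy 2D double-well standing in for a potential energy surface (assumption).
    return (x[..., 0] ** 2 - 1.0) ** 2 + 2.0 * x[..., 1] ** 2

R = torch.tensor([-1.0, 0.0])    # reactant minimum
P = torch.tensor([1.0, 0.0])     # product minimum

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 2))

def path(s):
    # Endpoints are hard-wired: x(0) = R and x(1) = P for any network output.
    return (1 - s) * R + s * P + s * (1 - s) * net(s)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    s = torch.rand(128, 1, requires_grad=True)   # random samples along the path
    x = path(s)
    # Energy gradient at each path point.
    g = torch.autograd.grad(potential(x).sum(), x, create_graph=True)[0]
    # Path tangent dx/ds, one output dimension at a time.
    t0 = torch.autograd.grad(x[:, 0].sum(), s, create_graph=True)[0]
    t1 = torch.autograd.grad(x[:, 1].sum(), s, create_graph=True)[0]
    t_hat = torch.cat([t0, t1], dim=1)
    t_hat = t_hat / (t_hat.norm(dim=1, keepdim=True) + 1e-8)
    # Discard the tangential component; penalize only the perpendicular gradient.
    g_perp = g - (g * t_hat).sum(dim=1, keepdim=True) * t_hat
    loss = g_perp.pow(2).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# "Instant" transition-state estimate: the energy maximum along the learned path.
with torch.no_grad():
    s_grid = torch.linspace(0, 1, 501).unsqueeze(1)
    x_grid = path(s_grid)
    ts_estimate = x_grid[potential(x_grid).argmax()]
```
Because the path is a continuous function of s, the transition-state estimate at the end comes essentially for free, mirroring the abstract's "instant estimation" claim.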
Related papers
- Joint Optimal Transport and Embedding for Network Alignment [66.49765320358361]
We propose a joint optimal transport and embedding framework for network alignment named JOENA.
With a unified objective, the mutual benefits of both methods can be achieved by an alternating optimization scheme with guaranteed convergence.
Experiments on real-world networks validate the effectiveness and scalability of JOENA, achieving up to 16% improvement in MRR and 20x speedup.
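A hedged, generic sketch of the alternating schema the JOENA entry describes (not the paper's objective): alternate between computing an entropic optimal transport plan from the current embedding distances and updating the embeddings under the fixed plan. All sizes, the Sinkhorn solver, and the fixed-versus-trainable split are illustrative assumptions.
```python
import torch

def sinkhorn(cost, eps=0.1, iters=200):
    # Entropic OT plan between uniform marginals for the given cost matrix.
    n, m = cost.shape
    K = torch.exp(-cost / eps)
    a, b = torch.full((n,), 1.0 / n), torch.full((m,), 1.0 / m)
    v = torch.ones(m)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.t() @ u)
    return u[:, None] * K * v[None, :]          # transport plan with marginals a, b

emb1 = torch.randn(50, 16, requires_grad=True)  # trainable embeddings, network 1
emb2 = torch.randn(60, 16)                      # fixed embeddings, network 2
opt = torch.optim.Adam([emb1], lr=1e-2)

for outer in range(20):
    # (1) OT step: recompute the plan from the current embedding distances.
    C = torch.cdist(emb1, emb2).detach()
    plan = sinkhorn(C / C.mean())
    # (2) Embedding step: pull strongly-transported pairs together, plan fixed.
    for _ in range(50):
        loss = (plan * torch.cdist(emb1, emb2) ** 2).sum()
        opt.zero_grad(); loss.backward(); opt.step()

matching = plan.argmax(dim=1)  # hard correspondence: network-1 node -> network-2 node
```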
arXiv Detail & Related papers (2025-02-26T17:28:08Z) - Gradients can train reward models: An Empirical Risk Minimization Approach for Offline Inverse RL and Dynamic Discrete Choice Model [9.531082746970286]
We study the problem of estimating Dynamic Discrete Choice (DDC) models, also known as offline Maximum Entropy-Regularized Inverse Reinforcement Learning (offline MaxEnt-IRL) in machine learning.
The objective is to recover reward or $Q^*$ functions that govern agent behavior from offline behavior data.
We propose a globally convergent gradient-based method for solving these problems without the restrictive assumption of linearly parameterized rewards.
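As a hedged illustration of the general MaxEnt-IRL recipe (not the paper's estimator): fit $Q$ by maximum likelihood under the policy $\pi(a|s) \propto \exp Q(s,a)$, then read off a reward via the soft Bellman identity $r(s,a) = Q(s,a) - \gamma \log\sum_{a'} \exp Q(s',a')$. Dimensions and data below are placeholders.
```python
import torch

n_actions, gamma = 4, 0.95
Q = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n_actions))
opt = torch.optim.Adam(Q.parameters(), lr=1e-3)

def mle_step(states, actions):
    # Maximum-likelihood fit of Q under pi(a|s) = softmax(Q(s, .)).
    nll = torch.nn.functional.cross_entropy(Q(states), actions)
    opt.zero_grad(); nll.backward(); opt.step()
    return nll.item()

def recover_reward(s, a, s_next):
    # Soft Bellman identity: r(s,a) = Q(s,a) - gamma * logsumexp_a' Q(s',a').
    with torch.no_grad():
        v_next = torch.logsumexp(Q(s_next), dim=-1)
        return Q(s).gather(1, a[:, None]).squeeze(1) - gamma * v_next

# Illustrative offline behavior data (placeholder).
states = torch.randn(1024, 8)
actions = torch.randint(0, n_actions, (1024,))
for _ in range(500):
    mle_step(states, actions)
```
Note that the likelihood alone identifies $Q$ only up to a per-state constant; the paper's ERM objective addresses this properly, which this sketch does not attempt.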
arXiv Detail & Related papers (2025-02-19T22:22:20Z) - A neural network approach for solving the Monge-Ampère equation with transport boundary condition [0.0]
This paper introduces a novel neural network-based approach to solving the Monge-Ampère equation with the transport boundary condition.
We leverage multilayer perceptron networks to learn approximate solutions by minimizing a loss function that encompasses the equation's residual, boundary conditions, and convexity constraints.
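A hedged PyTorch sketch of that composite loss (not the paper's code): the residual penalizes det(D²u) − f, a convexity term penalizes negative Hessian eigenvalues, and, as a simplification, a Dirichlet placeholder stands in for the transport boundary condition; f ≡ 1, the crude boundary sampling, and all hyperparameters are assumptions.
```python
import torch

u = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 1))
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

def hessians(x):
    # Batched 2x2 Hessians of u at x via double backprop.
    x = x.requires_grad_(True)
    g = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    rows = [torch.autograd.grad(g[:, i].sum(), x, create_graph=True)[0]
            for i in range(2)]
    return torch.stack(rows, dim=1)

for step in range(5000):
    x_in = torch.rand(256, 2) * 2 - 1                 # interior collocation points
    H = hessians(x_in)
    residual = (torch.det(H) - 1.0).pow(2).mean()     # det(D^2 u) = f, with f = 1
    eigs = torch.linalg.eigvalsh(H)                   # convexity: eigenvalues >= 0
    convexity = torch.relu(-eigs).pow(2).mean()
    x_bd = torch.rand(64, 2) * 2 - 1
    x_bd[:, 0] = torch.sign(x_bd[:, 0])               # crude boundary sampling
    boundary = u(x_bd).pow(2).mean()                  # Dirichlet placeholder term
    loss = residual + boundary + convexity
    opt.zero_grad(); loss.backward(); opt.step()
```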
arXiv Detail & Related papers (2024-10-25T11:54:00Z) - Generalized Flow Matching for Transition Dynamics Modeling [14.76793118877456]
We propose a data-driven approach to warm up the simulation by learning nonlinearities from local dynamics.
Specifically, we infer a potential energy function from local dynamics data to find plausible paths between two metastable states.
We validate the effectiveness of the proposed method to sample probable paths on both synthetic and real-world molecular systems.
arXiv Detail & Related papers (2024-10-19T15:03:39Z) - Transition Path Sampling with Improved Off-Policy Training of Diffusion Path Samplers [10.210248065533133]
We introduce a novel approach that trains diffusion path samplers (DPS) to address the transition path sampling problem.
We reformulate the problem as amortized sampling from the transition path distribution by minimizing the log-variance divergence between the path distribution induced by the DPS and the transition path distribution.
We extensively evaluate our approach, termed TPS-DPS, on a synthetic system, a small peptide, and challenging fast-folding proteins, demonstrating that it produces more realistic and diverse transition pathways than existing baselines.
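A hedged 1D toy version of the log-variance idea (far simpler than TPS-DPS itself): a learned drift steers a double-well SDE from reactant toward product, and the loss is the batch variance of the log-ratio between a soft target path measure and the sampler's path measure. The potential, the soft terminal weight, and the off-policy detachment scheme are illustrative assumptions.
```python
import torch

sigma, dt, T = 1.0, 0.01, 100                   # noise scale, time step, path length
drift = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 1))
opt = torch.optim.Adam(drift.parameters(), lr=1e-3)

def force(x):
    # -grad of the double-well potential V(x) = (x^2 - 1)^2.
    return -4.0 * x * (x ** 2 - 1.0)

for step in range(2000):
    x = torch.full((256, 1), -1.0)              # all paths start in the reactant well
    log_rnd = torch.zeros(256, 1)               # log dQ_u / dP_ref along each path
    for _ in range(T):
        eps = torch.randn_like(x)
        u = drift(x.detach())                   # off-policy: states treated as data
        # Discrete Girsanov increment for controlled vs. reference dynamics.
        log_rnd = log_rnd + u ** 2 * dt / (2 * sigma ** 2) \
                          + u * eps * dt ** 0.5 / sigma
        x = (x + (force(x) + u.detach()) * dt + sigma * dt ** 0.5 * eps).detach()
    log_target = -5.0 * (x - 1.0) ** 2          # soft "reached the product" weight
    # Log-variance divergence; it is invariant to the unknown normalizer.
    loss = (log_target - log_rnd).var()
    opt.zero_grad(); loss.backward(); opt.step()
```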
arXiv Detail & Related papers (2024-05-30T11:32:42Z) - Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that, from partway through training, optimizing only the scalar batch-norm parameters matches the performance of training the entire network.
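A hedged sketch of that experiment's mechanics in PyTorch (the model and schedule are assumptions): train normally for some epochs, then freeze everything except the batch-norm affine parameters.
```python
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, 3, padding=1), torch.nn.BatchNorm2d(32), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 64, 3, padding=1), torch.nn.BatchNorm2d(64), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(64, 10),
)

def freeze_all_but_batchnorm(model):
    # Keep only the scalar per-channel gamma/beta of BatchNorm layers trainable.
    for module in model.modules():
        is_bn = isinstance(module, (torch.nn.BatchNorm1d, torch.nn.BatchNorm2d))
        for p in module.parameters(recurse=False):
            p.requires_grad = is_bn

# ... ordinary training for the first epochs, then switch:
freeze_all_but_batchnorm(model)
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                            lr=0.01, momentum=0.9)
```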
arXiv Detail & Related papers (2024-03-12T07:32:47Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs suffer from training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
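A hedged sketch of one implicit (proximal) SGD step, not the paper's implementation: each outer update solves the proximal subproblem min_theta L(theta) + ||theta - theta_k||^2 / (2*eta) with a few inner gradient steps, which at its optimum is equivalent to the implicit update theta_{k+1} = theta_k - eta * grad L(theta_{k+1}); `loss_fn` would be the PINN residual loss.
```python
import torch

def isgd_step(model, loss_fn, batch, eta=0.1, inner_steps=10, inner_lr=0.01):
    # Solve theta_{k+1} = argmin_theta L(theta) + ||theta - theta_k||^2 / (2 eta),
    # whose optimality condition is the implicit update
    #   theta_{k+1} = theta_k - eta * grad L(theta_{k+1}).
    anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
    inner_opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        loss = loss_fn(model, batch)   # e.g. the PINN residual loss on this batch
        prox = sum((p - anchor[n]).pow(2).sum()
                   for n, p in model.named_parameters()) / (2 * eta)
        inner_opt.zero_grad()
        (loss + prox).backward()
        inner_opt.step()
```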
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Entropic Neural Optimal Transport via Diffusion Processes [105.34822201378763]
We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between continuous probability distributions.
Our algorithm is based on the saddle point reformulation of the dynamic version of EOT, known as the Schrödinger Bridge problem.
In contrast to the prior methods for large-scale EOT, our algorithm is end-to-end and consists of a single learning step.
arXiv Detail & Related papers (2022-11-02T14:35:13Z) - Least-Squares ReLU Neural Network (LSNN) Method For Linear Advection-Reaction Equation [3.6525914200522656]
This paper studies the least-squares ReLU neural network (LSNN) method for solving the linear advection-reaction problem with a discontinuous solution.
The method automatically captures the discontinuous interface of the underlying problem through the free hyperplanes of the ReLU neural network.
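A hedged sketch of the least-squares formulation for beta . grad(u) + gamma * u = f with a ReLU network (the coefficients, right-hand side, and the jump in the inflow data are illustrative assumptions, not the paper's test problems):
```python
import torch

beta, gamma = torch.tensor([1.0, 0.5]), 1.0     # advection field and reaction rate
u = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 1))
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(5000):
    # Least-squares residual of beta . grad(u) + gamma * u = f (f = 1) in [0,1]^2.
    x = torch.rand(512, 2).requires_grad_(True)
    grad_u = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
    residual = ((grad_u @ beta).unsqueeze(1) + gamma * u(x) - 1.0).pow(2).mean()
    # Inflow boundary {x1 = 0} and {x2 = 0} with discontinuous data g0.
    xb = torch.rand(128, 2)
    xb[:64, 0] = 0.0
    xb[64:, 1] = 0.0
    g0 = (xb.sum(dim=1, keepdim=True) > 0.25).float()   # jump in the boundary data
    inflow = (u(xb) - g0).pow(2).mean()
    loss = residual + inflow
    opt.zero_grad(); loss.backward(); opt.step()
```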
arXiv Detail & Related papers (2021-05-25T03:13:15Z) - Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
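A hedged, simplified sketch of the relaxation dynamics (notation, layer sizes, and step size are assumptions): the top-layer unit is clamped to the output error, each hidden layer's units relax toward sigma'(a_l) * (W_{l+1}^T x_{l+1}), and at equilibrium x_l equals backprop's delta_l, so weight updates need only local quantities.
```python
import torch

torch.manual_seed(0)
sizes = [8, 16, 16, 4]
Ws = [0.3 * torch.randn(o, i) for i, o in zip(sizes[:-1], sizes[1:])]
inp, target = torch.randn(sizes[0]), torch.randn(sizes[-1])

# Forward pass, storing pre-activations a_l (tanh hidden layers, linear output).
a, h = [], inp
for l, W in enumerate(Ws):
    a.append(W @ h)
    h = torch.tanh(a[-1]) if l < len(Ws) - 1 else a[-1]

# Relaxation phase: the top unit is clamped to the output error dL/da_L for
# the MSE loss L = 0.5 * ||h - target||^2; hidden units follow the dynamics
#   x_l <- x_l + dt * (-x_l + tanh'(a_l) * (W_{l+1}^T x_{l+1})).
x = [torch.zeros_like(ai) for ai in a]
x[-1] = h - target
for _ in range(200):
    for l in range(len(a) - 2, -1, -1):
        drive = (1 - torch.tanh(a[l]) ** 2) * (Ws[l + 1].T @ x[l + 1])
        x[l] = x[l] + 0.2 * (-x[l] + drive)

# At equilibrium x[l] equals backprop's delta_l, so the weight update
# dL/dW_l = outer(x[l], layer input) uses only locally available quantities.
```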
arXiv Detail & Related papers (2020-09-11T11:56:34Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
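A hedged sketch of the min-max machinery in a generic conditional-moment form (not the paper's exact objective or model class): player f solves the equation, player g acts as an adversarial test function, and both are trained by simultaneous gradient descent-ascent. The toy data generator is an assumption.
```python
import torch

f = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
g = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

def game_value(X, Y, Z):
    # min_f max_g E[(Y - f(X)) g(Z)] - 0.5 E[g(Z)^2]
    return ((Y - f(X)) * g(Z)).mean() - 0.5 * g(Z).pow(2).mean()

for step in range(5000):
    # Toy data: Z instruments X, and Y depends nonlinearly on X (assumption).
    Z = torch.randn(256, 1)
    X = Z + 0.5 * torch.randn(256, 1)
    Y = X ** 2 + 0.3 * torch.randn(256, 1)
    # One ascent step on the test function g, one descent step on f.
    opt_g.zero_grad(); (-game_value(X, Y, Z)).backward(); opt_g.step()
    opt_f.zero_grad(); game_value(X, Y, Z).backward(); opt_f.step()
```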
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - On Sparsity in Overparametrised Shallow ReLU Networks [42.33056643582297]
We study the ability of different regularisation strategies to capture solutions requiring only a finite number of neurons, even in the infinitely wide regime.
We establish that both schemes are minimised by functions having only a finite number of neurons, irrespective of the amount of overparametrisation.
arXiv Detail & Related papers (2020-06-18T01:35:26Z) - Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an iterative algorithm for the verification problem and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
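A hedged sketch of the partitioning idea from the last entry above (the interval bound propagation and the property checked are assumptions; the paper's solver and activation-phase preprocessing are not reproduced): recursively bisect the input box and certify each piece with sound bounds; the sub-problems are independent, hence trivially parallelizable.
```python
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(16, 2), torch.randn(16)      # a toy 2-16-1 ReLU network
W2, b2 = torch.randn(1, 16), torch.randn(1)

def ibp_bounds(lo, hi):
    # Interval bound propagation: sound output bounds over the box [lo, hi].
    c, r = (lo + hi) / 2, (hi - lo) / 2
    c1, r1 = W1 @ c + b1, W1.abs() @ r
    h_lo, h_hi = torch.relu(c1 - r1), torch.relu(c1 + r1)
    c2, r2 = (h_lo + h_hi) / 2, (h_hi - h_lo) / 2
    return W2 @ c2 + b2 - W2.abs() @ r2, W2 @ c2 + b2 + W2.abs() @ r2

def verify(lo, hi, depth=0):
    # Property: the network output stays below 10 everywhere on the box.
    out_lo, out_hi = ibp_bounds(lo, hi)
    if out_hi.item() <= 10.0:
        return True                                # certified on this sub-box
    if depth >= 12:
        return False                               # inconclusive: give up
    d = int(torch.argmax(hi - lo))                 # bisect the widest dimension
    mid = ((lo[d] + hi[d]) / 2).item()
    hi_left, lo_right = hi.clone(), lo.clone()
    hi_left[d] = mid
    lo_right[d] = mid
    # The two sub-problems are independent, hence trivially parallelizable.
    return verify(lo, hi_left, depth + 1) and verify(lo_right, hi, depth + 1)

print(verify(torch.full((2,), -1.0), torch.full((2,), 1.0)))
```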
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.