Supervised Learning and the Finite-Temperature String Method for
Computing Committor Functions and Reaction Rates
- URL: http://arxiv.org/abs/2107.13522v1
- Date: Wed, 28 Jul 2021 17:44:00 GMT
- Title: Supervised Learning and the Finite-Temperature String Method for
Computing Committor Functions and Reaction Rates
- Authors: Muhammad R. Hasyim, Clay H. Batton, Kranthi K. Mandadapu
- Abstract summary: A central object in the computational studies of rare events is the committor function.
We show that additional modifications are needed to improve the accuracy of the algorithm.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A central object in the computational studies of rare events is the committor
function. Though costly to compute, the committor function encodes complete
mechanistic information of the processes involving rare events, including
reaction rates and transition-state ensembles. Under the framework of
transition path theory (TPT), recent work [1] proposes an algorithm where a
feedback loop couples a neural network that models the committor function with
importance sampling, mainly umbrella sampling, which collects data needed for
adaptive training. In this work, we show that additional modifications are needed
to improve the accuracy of the algorithm. The first modification adds elements of
supervised learning, which allows the neural network to improve its prediction
by fitting to sample-mean estimates of committor values obtained from short
molecular dynamics trajectories. The second modification replaces the
committor-based umbrella sampling with the finite-temperature string (FTS)
method, which enables homogeneous sampling in regions where transition pathways
are located. We test our modifications on low-dimensional systems with
non-convex potential energy, where reference solutions can be found
analytically or via the finite element method, and show how combining supervised
learning and the FTS method yields accurate computation of committor functions
and reaction rates. We also provide an error analysis for algorithms that use
the FTS method, from which reaction rates can be accurately estimated during
training with a small number of samples.
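
A minimal sketch of the first modification (supervised fitting to sample-mean
committor estimates), assuming a PyTorch model and a hypothetical MD helper
reaches_B_first that is not from the paper's code; the sample-mean estimate and
the mean-squared-error fit follow the abstract's description, while the
architecture and hyperparameters are illustrative:

import torch
import torch.nn as nn

class CommittorNet(nn.Module):
    """Small feed-forward model for the committor q_theta(x) in [0, 1]."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def empirical_committor(x0, reaches_B_first, n_shots=50):
    """Sample-mean committor estimate at configuration x0: the fraction of
    short MD trajectories launched from x0 (with resampled momenta) that
    reach product basin B before reactant basin A.
    reaches_B_first(x0) -> bool is an assumed, illustrative helper."""
    hits = sum(reaches_B_first(x0) for _ in range(n_shots))
    return hits / n_shots

def supervised_step(model, optimizer, x_batch, q_hat):
    """One gradient step fitting q_theta to the empirical estimates,
    the supervised element added on top of the variational TPT loss."""
    optimizer.zero_grad()
    loss = torch.mean((model(x_batch) - q_hat) ** 2)
    loss.backward()
    optimizer.step()
    return loss.item()

Since each estimate costs n_shots short trajectories, such labels are in
practice collected only at configurations sampled near the transition pathway.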
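
The second modification replaces committor-based umbrella sampling with the FTS
method, which maintains a discretized path (a "string" of images) relaxed toward
the mean of configurations sampled in each image's Voronoi cell and then
redistributed at equal arc length. Below is a sketch of one such iteration,
following the standard FTS update rather than the paper's exact discretization;
dt and kappa are illustrative step-size and smoothing parameters:

import numpy as np

def reparametrize(string):
    """Redistribute images along the string at equal arc length via
    linear interpolation (assumes distinct consecutive images)."""
    seg = np.linalg.norm(np.diff(string, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    s /= s[-1]
    targets = np.linspace(0.0, 1.0, len(string))
    return np.stack(
        [np.interp(targets, s, string[:, k]) for k in range(string.shape[1])],
        axis=1,
    )

def fts_update(string, cell_means, dt=0.05, kappa=0.1):
    """One finite-temperature string iteration: move each image toward the
    running average of configurations sampled in its Voronoi cell, smooth
    between neighboring images, then reparametrize.
    string, cell_means: (M, d) arrays of M images in d dimensions."""
    new = string + dt * (cell_means - string)
    new[1:-1] += kappa * (new[2:] - 2.0 * new[1:-1] + new[:-2])
    return reparametrize(new)

Restraining sampling to the Voronoi cells of the string is what yields the
homogeneous coverage of the transition-pathway region referred to above.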
Related papers
- Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
We show that grokking is not specific to neural networks nor to gradient descent-based optimization.
We show that this phenomenon occurs when learning modular arithmetic with Recursive Feature Machines.
Our results demonstrate that emergence can result purely from learning task-relevant features.
arXiv Detail & Related papers (2024-07-29T17:28:58Z)
- Deep Learning Method for Computing Committor Functions with Adaptive Sampling
We propose a deep learning method with two novel adaptive sampling schemes (I and II).
In the two schemes, the data are generated actively with a modified potential where the bias potential is constructed from the learned committor function.
We theoretically demonstrate the advantages of the sampling schemes and show that the data in sampling scheme II are uniformly distributed along the transition tube.
arXiv Detail & Related papers (2024-04-09T10:53:29Z)
- Variational Sampling of Temporal Trajectories
We introduce a mechanism to learn the distribution of trajectories by parameterizing the transition function $f$ explicitly as an element in a function space.
Our framework allows efficient synthesis of novel trajectories, while also directly providing a convenient tool for inference.
arXiv Detail & Related papers (2024-03-18T02:12:12Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- Label-free timing analysis of SiPM-based modularized detectors with physics-constrained deep learning
We propose a novel method based on deep learning for timing analysis of modularized detectors.
We mathematically demonstrate the existence of the optimal function desired by the method, and give a systematic algorithm for training and calibration of the model.
arXiv Detail & Related papers (2023-04-24T09:16:31Z)
- Faster Adaptive Federated Learning
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- Convolutional generative adversarial imputation networks for spatio-temporal missing data in storm surge simulations
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Efficient training of physics-informed neural networks via importance sampling
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z)