MLIMC: Machine learning-based implicit-solvent Monte Carlo
- URL: http://arxiv.org/abs/2109.12100v1
- Date: Fri, 24 Sep 2021 17:47:07 GMT
- Title: MLIMC: Machine learning-based implicit-solvent Monte Carlo
- Authors: Jiahui Chen, Weihua Geng, Guo-Wei Wei
- Abstract summary: We develop a machine learning-based implicit-solvent Monte Carlo (MLIMC) method that combines the accuracy and efficiency advantages of both implicit-solvent models.
Specifically, the MLIMC method uses a fast and accurate PB-based machine learning scheme to compute the electrostatic solvation free energy at each step.
- Score: 1.8284033205909684
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Monte Carlo (MC) methods are important computational tools for molecular
structure optimizations and predictions. When solvent effects are explicitly
considered, MC methods become very expensive due to the large number of degrees
of freedom associated with the water molecules and mobile ions. Alternatively,
implicit-solvent MC can greatly reduce the computational cost by applying a
mean-field approximation to solvent effects while maintaining the atomic detail
of the target molecule. The two most popular implicit-solvent models are the
Poisson-Boltzmann (PB) model and the Generalized Born (GB) model; the GB model
is an approximation to the PB model but is much faster to evaluate. In this
work, we develop a machine learning-based implicit-solvent Monte Carlo (MLIMC)
method that combines the accuracy and efficiency advantages of both
implicit-solvent models. Specifically, the MLIMC
method uses a fast and accurate PB-based machine learning (PBML) scheme to
compute the electrostatic solvation free energy at each step. We validate our
MLIMC method by using a benzene-water system and a protein-water system. We
show that the proposed MLIMC method has great advantages in speed and accuracy
for molecular structure optimization and prediction.
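To make the procedure concrete, the sketch below shows a minimal Metropolis Monte Carlo loop in which the electrostatic solvation free energy at each step is supplied by a surrogate callable standing in for the PBML scheme. The function names and toy energy expressions (pbml_solvation_energy, gas_phase_energy) are hypothetical placeholders for illustration, not the authors' implementation.

```python
import numpy as np

def pbml_solvation_energy(coords):
    """Hypothetical stand-in for a PB-based machine learning (PBML) surrogate.

    A real surrogate would featurize the structure and evaluate a trained
    model; a quadratic toy expression keeps this sketch self-contained.
    """
    return 1e-3 * float(np.sum(coords ** 2))

def gas_phase_energy(coords):
    """Toy intramolecular (gas-phase) energy term."""
    return float(np.sum(np.diff(coords, axis=0) ** 2))

def total_energy(coords):
    # Implicit-solvent energy: gas-phase term plus the electrostatic
    # solvation free energy supplied by the (hypothetical) ML surrogate.
    return gas_phase_energy(coords) + pbml_solvation_energy(coords)

def metropolis_mc(coords, n_steps=1000, step_size=0.05, kT=0.593):
    """Minimal Metropolis Monte Carlo loop (kT ~ 0.593 kcal/mol at 298 K)."""
    rng = np.random.default_rng(0)
    energy = total_energy(coords)
    for _ in range(n_steps):
        trial = coords + rng.normal(scale=step_size, size=coords.shape)
        trial_energy = total_energy(trial)
        # Metropolis criterion: accept with probability min(1, exp(-dE / kT)).
        if rng.random() < np.exp(-(trial_energy - energy) / kT):
            coords, energy = trial, trial_energy
    return coords, energy

coords0 = np.random.default_rng(1).normal(size=(12, 3))  # toy 12-atom geometry
final_coords, final_energy = metropolis_mc(coords0)
print(f"final implicit-solvent energy: {final_energy:.3f}")
```

The only requirement the MC loop places on the surrogate is that it return a solvation free energy for a given structure; replacing a numerical PB solve with a fast ML evaluation at every step is where the reported speedup comes from.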
Related papers
- Predicting solvation free energies with an implicit solvent machine learning potential [0.0]
We introduce a Solvation Free Energy Path Reweighting (ReSolv) framework to parametrize an implicit solvent ML potential for small organic molecules.
With a combination of top-down (experimental hydration free energy data) and bottom-up (ab initio data of molecules in a vacuum) learning, ReSolv bypasses the need for intractable ab initio data of molecules in explicit bulk solvent.
Compared to the explicit solvent ML potential, ReSolv offers a computational speedup of four orders of magnitude and attains closer agreement with experiments.
arXiv Detail & Related papers (2024-05-31T20:28:08Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
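For context, the probabilistic representation referred to above is, in the simplest linear case, the classical Feynman-Kac formula (a textbook identity, not a detail taken from the paper): the solution of a convection-diffusion equation is an expectation over an ensemble of random particles.

```latex
% Feynman--Kac representation of a linear convection--diffusion equation
% (standard result, shown only to illustrate the particle viewpoint).
\partial_t u + \mathbf{v}\cdot\nabla u = D\,\Delta u, \qquad u(\mathbf{x},0)=u_0(\mathbf{x}),
\quad\Longrightarrow\quad
u(\mathbf{x},t) = \mathbb{E}\!\left[u_0(\mathbf{X}_t)\right],
\qquad
\mathrm{d}\mathbf{X}_s = -\mathbf{v}\,\mathrm{d}s + \sqrt{2D}\,\mathrm{d}\mathbf{W}_s,
\quad \mathbf{X}_0 = \mathbf{x}.
```

The expectation is estimated by averaging over simulated particles, which is what makes the training target Monte Carlo-based rather than supervised.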
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Gaussian process regression and conditional Karhunen-Lo\'{e}ve models
for data assimilation in inverse problems [68.8204255655161]
We present a model inversion algorithm, CKLEMAP, for data assimilation and parameter estimation in partial differential equation models.
The CKLEMAP method provides better scalability compared to the standard MAP method.
arXiv Detail & Related papers (2023-01-26T18:14:12Z) - Prediction of liquid fuel properties using machine learning models with
Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that ML models can accurately predict the fuel properties over a wide range of pressure and temperature conditions.
arXiv Detail & Related papers (2021-10-18T14:43:50Z) - Optimal control of quantum thermal machines using machine learning [0.0]
We show that differentiable programming (DP) can be employed to optimize finite-time thermodynamical processes in a quantum thermal machine.
We formulate the STA driving protocol as a constrained optimization task and apply DP to find optimal driving profiles for an appropriate figure of merit.
Our method and results demonstrate that ML is beneficial both for solving hard-constrained quantum control problems and for devising and assessing their theoretical groundwork.
arXiv Detail & Related papers (2021-08-27T18:00:49Z) - Graphical Gaussian Process Regression Model for Aqueous Solvation Free
Energy Prediction of Organic Molecules in Redox Flow Battery [2.7919873713279033]
We present a machine learning (ML) model that can learn and predict the aqueous solvation free energy of an organic molecule.
We demonstrate that our ML model can predict the solvation free energy of molecules at chemical accuracy with a mean absolute error of less than 1 kcal/mol.
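As background, the posterior mean and variance of a standard Gaussian process regressor (textbook formulas with kernel k, training inputs X, targets y, and noise variance sigma^2; not a detail specific to this paper) are:

```latex
% Standard GP posterior predictive mean and variance (textbook formulas).
\mu(\mathbf{x}_*) = \mathbf{k}(\mathbf{x}_*, X)\,[K + \sigma^2 I]^{-1}\,\mathbf{y},
\qquad
\sigma^2(\mathbf{x}_*) = k(\mathbf{x}_*, \mathbf{x}_*)
  - \mathbf{k}(\mathbf{x}_*, X)\,[K + \sigma^2 I]^{-1}\,\mathbf{k}(X, \mathbf{x}_*).
```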
arXiv Detail & Related papers (2021-06-15T13:48:26Z) - Machine Learning Implicit Solvation for Molecular Dynamics [0.0]
We introduce Bornet, a graph neural network, to model the implicit solvent potential of mean force.
The success of this novel method demonstrates the potential benefit of applying machine learning methods in accurate modeling of solvent effects.
arXiv Detail & Related papers (2021-06-14T15:21:45Z) - Covert Model Poisoning Against Federated Learning: Algorithm Design and
Optimization [76.51980153902774]
Federated learning (FL) is vulnerable to external attacks on FL models during parameter transmission.
In this paper, we propose effective model poisoning (MP) algorithms to combat state-of-the-art defensive aggregation mechanisms.
Our experimental results demonstrate that the proposed CMP algorithms are effective and substantially outperform existing attack mechanisms.
arXiv Detail & Related papers (2021-01-28T03:28:18Z) - No MCMC for me: Amortized sampling for fast and stable training of
energy-based models [62.1234885852552]
Energy-Based Models (EBMs) present a flexible and appealing way to represent uncertainty.
We present a simple method for training EBMs at scale using an entropy-regularized generator to amortize the MCMC sampling.
Next, we apply our estimator to the recently proposed Joint Energy Model (JEM), where we match the original performance with faster and more stable training.
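For reference, the maximum-likelihood gradient of an energy-based model with density proportional to exp(-E_theta(x)) is a standard identity (not taken from the paper); the second term is ordinarily estimated with MCMC samples, and amortizing it means drawing those samples from a trained generator instead.

```latex
% Maximum-likelihood gradient of an EBM (standard identity); the model
% expectation is the term usually estimated with MCMC.
\nabla_\theta \log p_\theta(x)
  = -\nabla_\theta E_\theta(x)
    + \mathbb{E}_{x' \sim p_\theta}\!\left[\nabla_\theta E_\theta(x')\right].
```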
arXiv Detail & Related papers (2020-10-08T19:17:20Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
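For context, the classical fact underlying this construction (the Jordan-Kinderlehrer-Otto result, not a derivation from the paper) is that the Fokker-Planck equation is the Wasserstein-2 gradient flow of the relative entropy:

```latex
% Fokker--Planck equation as the Wasserstein-2 gradient flow of the
% relative entropy F[rho] = KL(rho || pi), with pi proportional to e^{-V}.
\partial_t \rho = \nabla\!\cdot\!\left(\rho\,\nabla V\right) + \Delta\rho
               = \nabla\!\cdot\!\left(\rho\,\nabla \frac{\delta F}{\delta \rho}\right),
\qquad
F[\rho] = \int \rho \log\frac{\rho}{\pi}\,\mathrm{d}x,
\quad \pi \propto e^{-V}.
```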
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.