Enhanced Sampling for Efficient Learning of Coarse-Grained Machine Learning Potentials
- URL: http://arxiv.org/abs/2510.11148v1
- Date: Mon, 13 Oct 2025 08:40:13 GMT
- Title: Enhanced Sampling for Efficient Learning of Coarse-Grained Machine Learning Potentials
- Authors: Weilong Chen, Franz Görlich, Paul Fuchs, Julija Zavadlav
- Abstract summary: We introduce enhanced sampling to bias along CG degrees of freedom for data generation, and then re-compute the forces with respect to the unbiased potential. This strategy simultaneously shortens the simulation time required to produce equilibrated data and enriches sampling in transition regions, while preserving the correct PMF. Our findings support the use of enhanced sampling for force matching as a promising direction to improve the accuracy and reliability of CG MLPs.
- Score: 2.8355616606687506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Coarse-graining (CG) enables molecular dynamics (MD) simulations of larger systems and longer timescales that are otherwise infeasible with atomistic models. Machine learning potentials (MLPs), with their capacity to capture many-body interactions, can provide accurate approximations of the potential of mean force (PMF) in CG models. Current CG MLPs are typically trained in a bottom-up manner via force matching, which in practice relies on configurations sampled from the unbiased equilibrium Boltzmann distribution to ensure thermodynamic consistency. This convention poses two key limitations: first, sufficiently long atomistic trajectories are needed to reach convergence; and second, even once equilibrated, transition regions remain poorly sampled. To address these issues, we employ enhanced sampling to bias along CG degrees of freedom for data generation, and then recompute the forces with respect to the unbiased potential. This strategy simultaneously shortens the simulation time required to produce equilibrated data and enriches sampling in transition regions, while preserving the correct PMF. We demonstrate its effectiveness on the Müller-Brown potential and capped alanine, achieving notable improvements. Our findings support the use of enhanced sampling for force matching as a promising direction to improve the accuracy and reliability of CG MLPs.
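The core of the strategy described in the abstract can be illustrated with a minimal sketch on the Müller-Brown potential: run biased overdamped Langevin dynamics to generate configurations quickly, but label each stored configuration with the force of the *unbiased* potential, so the force-matching dataset remains consistent with the true PMF. The bias here is a simple static harmonic restraint on x standing in for a real enhanced-sampling method (e.g., metadynamics along a CG coordinate), and the temperature, time step, and restraint strength are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Standard Mueller-Brown potential parameters.
A  = np.array([-200.0, -100.0, -170.0, 15.0])
a  = np.array([-1.0, -1.0, -6.5, 0.7])
b  = np.array([0.0, 0.0, 11.0, 0.6])
c  = np.array([-10.0, -10.0, -6.5, 0.7])
x0 = np.array([1.0, 0.0, -0.5, -1.0])
y0 = np.array([0.0, 0.5, 1.5, 1.0])

def unbiased_force(p):
    """Analytic force -grad V of the Mueller-Brown potential at point p = (x, y)."""
    dx, dy = p[0] - x0, p[1] - y0
    e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
    return -np.array([np.sum(e * (2.0 * a * dx + b * dy)),
                      np.sum(e * (b * dx + 2.0 * c * dy))])

def bias_force(p, k=20.0, center=0.0):
    """Static harmonic restraint on x; a stand-in for a real enhanced-sampling bias."""
    return np.array([-k * (p[0] - center), 0.0])

rng = np.random.default_rng(0)
kT, dt, n_steps, stride = 15.0, 1e-5, 50_000, 250   # illustrative choices
p = np.array([-0.5, 1.5])                           # start near a minimum

configs, force_labels = [], []
for step in range(n_steps):
    # Biased overdamped Langevin (Euler-Maruyama) step: dynamics feel V + V_bias.
    f = unbiased_force(p) + bias_force(p)
    p = p + dt * f + np.sqrt(2.0 * kT * dt) * rng.standard_normal(2)
    if step % stride == 0:
        configs.append(p.copy())
        # Key step: the training label is the force of the UNBIASED potential,
        # so force matching targets the correct PMF despite the biased sampling.
        force_labels.append(unbiased_force(p))

configs = np.array(configs)
force_labels = np.array(force_labels)
```

The resulting `(configs, force_labels)` pairs would then feed a standard force-matching loss for a CG MLP; in a real application the biased atomistic simulation would be run with an enhanced-sampling engine and the unbiased forces re-evaluated with the original force field.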
Related papers
- Coarse-Grained Boltzmann Generators [2.8880597165704]
We propose a principled framework that unifies scalable reduced-order modeling with the exactness of importance sampling. CG-BGs act in a coarse-grained coordinate space, using a learned potential of mean force to reweight samples generated by a flow-based model. Our results demonstrate that CG-BGs faithfully capture complex interactions mediated by explicit solvent within highly reduced representations.
arXiv Detail & Related papers (2026-02-11T08:37:13Z) - Equivariant Evidential Deep Learning for Interatomic Potentials [55.6997213490859]
Uncertainty quantification is critical for assessing the reliability of machine learning interatomic potentials in molecular dynamics simulations. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. We propose Equivariant Evidential Deep Learning for Interatomic Potentials (e2IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly.
arXiv Detail & Related papers (2026-02-11T02:00:25Z) - FALCON: Few-step Accurate Likelihoods for Continuous Flows [78.37361800856583]
We propose Few-step Accurate Likelihoods for Continuous Flows (FALCON), which allows for few-step sampling with a likelihood accurate enough for importance sampling applications. We show FALCON outperforms state-of-the-art normalizing flow models for molecular Boltzmann sampling and is two orders of magnitude faster than the equivalently performing CNF model.
arXiv Detail & Related papers (2025-12-10T18:47:25Z) - MaP: A Unified Framework for Reliable Evaluation of Pre-training Dynamics [72.00014675808228]
Instability in the evaluation process of Large Language Models obscures true learning dynamics. We introduce MaP, a framework that integrates Merging and the Pass@k metric. Experiments show that MaP yields significantly smoother performance curves, reduces inter-run variance, and ensures more consistent rankings.
arXiv Detail & Related papers (2025-10-10T11:40:27Z) - Reframing Generative Models for Physical Systems using Stochastic Interpolants [45.16806809746592]
Generative models have emerged as powerful surrogates for physical systems, demonstrating increased accuracy, stability, and/or statistical fidelity. Most approaches rely on iteratively denoising a Gaussian, a choice that may not be the most effective for autoregressive prediction tasks in PDEs and dynamical systems such as climate. In this work, we benchmark generative models across diverse physical domains and tasks, and highlight the role of interpolants.
arXiv Detail & Related papers (2025-09-30T14:02:00Z) - Consistent Sampling and Simulation: Molecular Dynamics with Energy-Based Diffusion Models [50.77646970127369]
We propose an energy-based diffusion model with a Fokker--Planck-derived regularization term to enforce consistency. We demonstrate our approach by sampling and simulating multiple biomolecular systems, including fast-folding proteins.
arXiv Detail & Related papers (2025-06-20T16:38:29Z) - Beyond Force Metrics: Pre-Training MLFFs for Stable MD Simulations [5.913538953257869]
Machine-learning force fields (MLFFs) have emerged as a promising solution for speeding up ab initio molecular dynamics (MD) simulations. In this work, we employ GemNet-T, a graph neural network model, as an MLFF and investigate two training strategies. We find that lower force errors do not necessarily guarantee stable MD simulations.
arXiv Detail & Related papers (2025-06-17T00:58:56Z) - Energy-Based Coarse-Graining in Molecular Dynamics: A Flow-Based Framework Without Data [0.0]
We introduce a data-free generative framework for coarse-graining that directly targets the all-atom Boltzmann distribution. A potentially learnable, bijective map from the full latent space to the all-atom configuration space enables automatic and accurate reconstruction of molecular structures.
arXiv Detail & Related papers (2025-04-29T17:05:27Z) - Scalable Equilibrium Sampling with Sequential Boltzmann Generators [60.00515282300297]
We extend the Boltzmann generator framework with two key contributions. The first is a highly efficient Transformer-based normalizing flow operating directly on all-atom Cartesian coordinates. In particular, we perform inference-time scaling of flow samples using a continuous-time variant of sequential Monte Carlo.
arXiv Detail & Related papers (2025-02-25T18:59:13Z) - Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
We propose Iterated Denoising Energy Matching (iDEM).
iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains 2-5× faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z) - From Peptides to Nanostructures: A Euclidean Transformer for Fast and Stable Machine Learned Force Fields [5.013279299982324]
We propose a transformer architecture called SO3krates that combines sparse equivariant representations with a self-attention mechanism.
SO3krates achieves a unique combination of accuracy, stability, and speed that enables insightful analysis of quantum properties of matter on extended time and system size scales.
arXiv Detail & Related papers (2023-09-21T09:22:05Z) - Non-Generative Energy Based Models [3.1447898427012473]
Energy-based models (EBM) have become increasingly popular within computer vision.
We propose a non-generative training approach, Non-Generative EBM (NG-EBM)
We show that our NG-EBM training strategy retains many of the benefits of EBM in calibration, out-of-distribution detection, and adversarial resistance.
arXiv Detail & Related papers (2023-04-03T18:47:37Z) - Diffusion Probabilistic Model Made Slim [128.2227518929644]
We introduce a customized design for slim diffusion probabilistic models (DPM) for light-weight image synthesis.
We achieve 8-18x computational complexity reduction as compared to the latent diffusion models on a series of conditional and unconditional image generation tasks.
arXiv Detail & Related papers (2022-11-27T16:27:28Z) - Slow semiclassical dynamics of a two-dimensional Hubbard model in disorder-free potentials [77.34726150561087]
We show that the introduction of harmonic and spin-dependent linear potentials sufficiently extends the validity of fTWA to longer times.
In particular, we focus on a finite two-dimensional system and show that at intermediate linear potential strength, the addition of a harmonic potential and spin dependence of the tilt, results in subdiffusive dynamics.
arXiv Detail & Related papers (2022-10-03T16:51:25Z) - Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.