Learning collective variables that preserve transition rates
- URL: http://arxiv.org/abs/2506.01222v2
- Date: Thu, 05 Jun 2025 03:31:31 GMT
- Title: Learning collective variables that preserve transition rates
- Authors: Shashank Sule, Arnav Mehta, Maria K. Cameron
- Abstract summary: Collective variables (CVs) play a crucial role in capturing rare events in high-dimensional systems. We introduce a general numerical method for designing neural network-based CVs that integrates tools from manifold learning with group-invariant featurization. We provide empirical evidence challenging the necessity of uniform positive definiteness in diffusion tensors for transition rate reproduction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collective variables (CVs) play a crucial role in capturing rare events in high-dimensional systems, motivating the continual search for principled approaches to their design. In this work, we revisit the framework of quantitative coarse graining and identify the orthogonality condition from Legoll and Lelievre (2010) as a key criterion for constructing CVs that accurately preserve the statistical properties of the original process. We establish that satisfaction of the orthogonality condition enables error estimates for both relative entropy and pathwise distance to scale proportionally with the degree of scale separation. Building on this foundation, we introduce a general numerical method for designing neural network-based CVs that integrates tools from manifold learning with group-invariant featurization. To demonstrate the efficacy of our approach, we construct CVs for butane and achieve a CV that reproduces the anti-gauche transition rate with less than ten percent relative error. Additionally, we provide empirical evidence challenging the necessity of uniform positive definiteness in diffusion tensors for transition rate reproduction and highlight the critical role of light atoms in CV design for molecular dynamics.
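To ground the coarse-graining setup the abstract refers to, the following recalls the standard effective dynamics of Legoll and Lelievre (2010) for a scalar CV under overdamped Langevin dynamics; this is the classical construction on which the orthogonality condition is formulated, not a statement of the paper's precise condition.

```latex
% Overdamped Langevin dynamics and its effective dynamics under a CV
% \xi : \mathbb{R}^d \to \mathbb{R} (Legoll & Lelievre, 2010):
%   dX_t = -\nabla V(X_t)\,dt + \sqrt{2\beta^{-1}}\,dW_t .
% The coarse-grained process z_t \approx \xi(X_t) is approximated by
\begin{align}
  dz_t &= b(z_t)\,dt + \sqrt{2\beta^{-1}}\,\sigma(z_t)\,dB_t,\\
  b(z) &= \mathbb{E}_{\mu}\!\left[\, -\nabla V \cdot \nabla \xi
          + \beta^{-1}\Delta \xi \,\middle|\, \xi = z \right],\\
  \sigma^2(z) &= \mathbb{E}_{\mu}\!\left[\, |\nabla \xi|^2
          \,\middle|\, \xi = z \right],
\end{align}
% where the conditional expectations are taken with respect to the
% Gibbs measure \mu \propto e^{-\beta V} restricted to \{\xi = z\}.
```

A minimal sketch of what a neural network-based CV over group-invariant features could look like, assuming a PyTorch MLP over pairwise distances (invariant to global rotations and translations); the paper's actual architecture, manifold-learning targets, and training objective are not reproduced here, and all names below are hypothetical:

```python
import torch
import torch.nn as nn

def pairwise_distance_features(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, n_atoms, 3) Cartesian coordinates.
    # Returns the flattened upper triangle of the pairwise-distance
    # matrix, which is invariant to global rotations and translations.
    d = torch.cdist(x, x)                        # (batch, n, n)
    n = x.shape[1]
    i, j = torch.triu_indices(n, n, offset=1)
    return d[:, i, j]                            # (batch, n*(n-1)/2)

class NeuralCV(nn.Module):
    # A scalar neural-network CV built on invariant features (sketch).
    def __init__(self, n_atoms: int, hidden: int = 64):
        super().__init__()
        n_feat = n_atoms * (n_atoms - 1) // 2
        self.net = nn.Sequential(
            nn.Linear(n_feat, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(pairwise_distance_features(x))

# Butane: 4 carbons + 10 hydrogens = 14 atoms. The abstract highlights
# the critical role of light atoms, so hydrogens stay in the features.
cv = NeuralCV(n_atoms=14)
z = cv(torch.randn(8, 14, 3))                    # CV values, shape (8, 1)
```

The featurization keeps all 14 butane atoms rather than only the carbon backbone, reflecting the abstract's observation that light atoms play a critical role in CV design.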
Related papers
- Decomposing the Entropy-Performance Exchange: The Missing Keys to Unlocking Effective Reinforcement Learning [106.68304931854038]
Reinforcement learning with verifiable rewards (RLVR) has been widely used for enhancing the reasoning abilities of large language models (LLMs). We conduct a systematic empirical analysis of the entropy-performance exchange mechanism of RLVR across different levels of granularity. Our analysis reveals that, in the rising stage, entropy reduction in negative samples facilitates the learning of effective reasoning patterns. In the plateau stage, learning efficiency strongly correlates with high-entropy tokens present in low-perplexity samples and those located at the end of sequences.
arXiv Detail & Related papers (2025-08-04T10:08:10Z) - A Mixed-Order Phase Transition in Continuous-Variable Quantum Networks [1.3946421495394776]
Quantum networks (QNs) have been predominantly driven by discrete-variable (DV) architectures. We present a new form of entanglement percolation, negativity percolation theory (NegPT). We show that NegPT exhibits a mixed-order phase transition, marked simultaneously by an abrupt change in global entanglement and long-range correlations between nodes.
arXiv Detail & Related papers (2025-07-22T10:09:21Z) - Learning Collective Variables from Time-lagged Generation [11.320404950685203]
We propose TLC, a framework that learns CVs directly from the time-lagged conditioning of a generative model. We validate TLC on the alanine dipeptide system using two CV-based enhanced sampling tasks.
arXiv Detail & Related papers (2025-07-10T03:06:21Z) - Identifying Ising and percolation phase transitions based on KAN method [6.086561505970236]
This paper proposes using the Kolmogorov-Arnold Network (KAN) as a learning model that takes raw configurations as input. The results demonstrate that the KAN can indeed predict the critical points of percolation models.
arXiv Detail & Related papers (2025-03-05T13:49:22Z) - Enhancing Robustness of Vision-Language Models through Orthogonality Learning and Self-Regularization [77.62516752323207]
We introduce an orthogonal fine-tuning method for efficiently fine-tuning pretrained weights and enabling enhanced robustness and generalization.
A self-regularization strategy is further exploited to maintain stability in terms of the zero-shot generalization of VLMs; the overall method is dubbed OrthSR.
For the first time, we revisit CLIP and CoOp with our method to effectively improve the model in the few-shot image classification scenario.
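As a rough, self-contained illustration of the general orthogonal fine-tuning idea (a sketch only; OrthSR's actual parametrization and self-regularization loss are not detailed in this summary, and the names below are hypothetical), a frozen pretrained weight can be rotated by a learned orthogonal matrix obtained from a Cayley transform:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrthogonalFineTune(nn.Module):
    # Sketch: keep a pretrained linear layer frozen and learn a
    # Cayley-parametrized orthogonal matrix R that rotates its weight.
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.frozen = pretrained
        for p in self.frozen.parameters():
            p.requires_grad_(False)
        d_out = pretrained.out_features
        self.skew = nn.Parameter(torch.zeros(d_out, d_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        A = self.skew - self.skew.T           # skew-symmetric by construction
        I = torch.eye(A.shape[0], device=A.device)
        R = torch.linalg.solve(I + A, I - A)  # Cayley transform: R orthogonal
        W = R @ self.frozen.weight            # rotate the pretrained weights
        return F.linear(x, W, self.frozen.bias)

# Usage: wrap a (here randomly initialized, stand-in) pretrained
# projection and fine-tune only `skew`.
layer = OrthogonalFineTune(nn.Linear(512, 512))
y = layer(torch.randn(4, 512))
```

Because R is exactly orthogonal, the rotation preserves the spectral norm of the pretrained weight, which is one way such methods retain pretrained behavior while adapting.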
arXiv Detail & Related papers (2024-07-11T10:35:53Z) - ROTI-GCV: Generalized Cross-Validation for right-ROTationally Invariant Data [1.194799054956877]
Two key tasks in high-dimensional regularized regression are tuning the regularization strength for accurate predictions and estimating the out-of-sample risk.
We introduce a new framework, ROTI-GCV, for reliably performing cross-validation under challenging conditions.
arXiv Detail & Related papers (2024-06-17T15:50:00Z) - TransFusion: Covariate-Shift Robust Transfer Learning for High-Dimensional Regression [11.040033344386366]
We propose a two-step method with a novel fused regularizer to improve learning performance on a target task with limited samples.
A nonasymptotic bound is provided for the estimation error of the target model.
We extend the method to a distributed setting, allowing for a pretraining-finetuning strategy.
arXiv Detail & Related papers (2024-04-01T14:58:16Z) - Learning Collective Variables with Synthetic Data Augmentation through Physics-Inspired Geodesic Interpolation [1.4972659820929493]
In molecular dynamics simulations, rare events, such as protein folding, are typically studied using enhanced sampling techniques.
We propose a simulation-free data augmentation strategy using physics-inspired metrics to generate geodesics resembling protein folding transitions.
arXiv Detail & Related papers (2024-02-02T16:35:02Z) - Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both approaches: motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z) - Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z) - Learning Multiscale Consistency for Self-supervised Electron Microscopy
Instance Segmentation [48.267001230607306]
We propose a pretraining framework that enhances multiscale consistency in EM volumes.
Our approach leverages a Siamese network architecture, integrating strong and weak data augmentations.
It effectively captures voxel and feature consistency, showing promise for learning transferable representations for EM analysis.
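A minimal sketch of the strong/weak-augmentation consistency idea in PyTorch (assumed names throughout; the paper's multiscale losses and EM-specific backbone are not reproduced here):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseConsistency(nn.Module):
    # Sketch: one shared encoder processes a weakly and a strongly
    # augmented view of the same crop; a cosine loss pulls the two
    # embeddings together, with a stop-gradient on the weak branch.
    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder

    def forward(self, weak_view: torch.Tensor,
                strong_view: torch.Tensor) -> torch.Tensor:
        z_weak = self.encoder(weak_view).detach()   # target, no gradient
        z_strong = self.encoder(strong_view)
        return 1 - F.cosine_similarity(z_strong, z_weak, dim=-1).mean()

# Usage with a toy encoder over flattened 2D crops (placeholder only):
enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 128))
loss_fn = SiameseConsistency(enc)
crop = torch.randn(8, 1, 32, 32)
loss = loss_fn(weak_view=crop,
               strong_view=crop + 0.1 * torch.randn_like(crop))
```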
arXiv Detail & Related papers (2023-08-19T05:49:13Z) - Dense Unsupervised Learning for Video Segmentation [49.46930315961636]
We present a novel approach to unsupervised learning for video object segmentation (VOS).
Unlike previous work, our formulation allows learning dense feature representations directly in a fully convolutional regime.
Our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute power.
arXiv Detail & Related papers (2021-11-11T15:15:11Z) - Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)