Enhanced Sampling with Machine Learning: A Review
- URL: http://arxiv.org/abs/2306.09111v2
- Date: Fri, 16 Jun 2023 15:18:23 GMT
- Title: Enhanced Sampling with Machine Learning: A Review
- Authors: Shams Mehdi, Zachary Smith, Lukas Herron, Ziyue Zou and Pratyush
Tiwary
- Abstract summary: Molecular dynamics (MD) enables the study of physical systems with excellent spatiotemporal resolution but suffers from severe time-scale limitations.
To address this, enhanced sampling methods have been developed to improve exploration of configurational space.
In recent years, integration of machine learning (ML) techniques in different domains has shown promise.
This review explores the merging of ML and enhanced MD by presenting different shared viewpoints.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Molecular dynamics (MD) enables the study of physical systems with excellent
spatiotemporal resolution but suffers from severe time-scale limitations. To
address this, enhanced sampling methods have been developed to improve
exploration of configurational space. However, implementing these is
challenging and requires domain expertise. In recent years, integration of
machine learning (ML) techniques in different domains has shown promise,
prompting their adoption in enhanced sampling as well. Although ML is often
employed in various fields primarily due to its data-driven nature, its
integration with enhanced sampling is more natural with many common underlying
synergies. This review explores the merging of ML and enhanced MD by presenting
different shared viewpoints. It offers a comprehensive overview of this rapidly
evolving field, which can be difficult to stay updated on. We highlight
successful strategies like dimensionality reduction, reinforcement learning,
and flow-based methods. Finally, we discuss open problems at the exciting
ML-enhanced MD interface.
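One of the strategies the abstract highlights, dimensionality reduction, can be illustrated with a minimal sketch (not taken from the review): principal component analysis applied to a synthetic trajectory, standing in for the ML techniques that extract low-dimensional collective variables from high-dimensional MD coordinates. The data, dimensions, and variable names here are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "trajectory": 500 frames of a 30-dimensional coordinate vector
# whose variance is dominated by 2 slow directions plus isotropic noise.
n_frames, n_dims, n_slow = 500, 30, 2
slow_modes = rng.normal(size=(n_dims, n_slow))
slow_amplitudes = rng.normal(scale=3.0, size=(n_frames, n_slow))
trajectory = (slow_amplitudes @ slow_modes.T
              + rng.normal(scale=0.1, size=(n_frames, n_dims)))

# PCA via SVD of the mean-centered trajectory.
centered = trajectory - trajectory.mean(axis=0)
_, singular_values, components = np.linalg.svd(centered, full_matrices=False)
explained_variance = singular_values**2 / (n_frames - 1)
explained_ratio = explained_variance / explained_variance.sum()

# Project onto the two leading components: candidate collective variables
# along which a bias potential could then be deposited.
collective_variables = centered @ components[:n_slow].T
print(f"variance captured by 2 CVs: {explained_ratio[:n_slow].sum():.2f}")
```

In practice, nonlinear learned alternatives (autoencoders, time-lagged variants) replace plain PCA, but the workflow is the same: reduce the configuration space to a few coordinates, then enhance sampling along them.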
Related papers
- Survey on AI-Generated Media Detection: From Non-MLLM to MLLM [51.91311158085973]
Methods for detecting AI-generated media have evolved rapidly.
General-purpose detectors based on MLLMs integrate authenticity verification, explainability, and localization capabilities.
Ethical and security considerations have emerged as critical global concerns.
arXiv Detail & Related papers (2025-02-07T12:18:20Z) - PSMGD: Periodic Stochastic Multi-Gradient Descent for Fast Multi-Objective Optimization [17.131747385975892]
Multi-objective optimization (MOO) lies at the core of many machine learning (ML) applications.
We propose Periodic Stochastic Multi-Gradient Descent (PSMGD) to accelerate MOO.
PSMGD can provide comparable or superior performance to state-of-the-art algorithms.
arXiv Detail & Related papers (2024-12-14T20:47:36Z) - RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334]
Multimodal Large Language Models (MLLMs) have recently received substantial interest, which shows their emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
arXiv Detail & Related papers (2024-10-18T03:45:19Z) - From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z) - Multimodal Large Language Models for Bioimage Analysis [39.120941702559726]
Multimodal Large Language Models (MLLMs) exhibit strong emergent capacities, such as understanding, analyzing, reasoning, and generalization.
With these capabilities, MLLMs hold promise to extract intricate information from biological images and data obtained through various modalities.
Development of MLLMs shows increasing promise in serving as intelligent assistants or agents for augmenting human researchers in biology research.
arXiv Detail & Related papers (2024-07-29T08:21:25Z) - MMA-DFER: MultiModal Adaptation of unimodal models for Dynamic Facial Expression Recognition in-the-wild [81.32127423981426]
Multimodal emotion recognition based on audio and video data is important for real-world applications.
Recent methods have focused on exploiting advances of self-supervised learning (SSL) for pre-training of strong multimodal encoders.
We propose a different perspective on the problem and investigate the advancement of multimodal DFER performance by adapting SSL-pre-trained disjoint unimodal encoders.
arXiv Detail & Related papers (2024-04-13T13:39:26Z) - Structured Pruning of Neural Networks for Constraints Learning [5.689013857168641]
We show the effectiveness of pruning, one of these techniques, when applied to ANNs prior to their integration into MIPs.
We conduct experiments using feed-forward neural networks with multiple layers to construct adversarial examples.
Our results demonstrate that pruning offers remarkable reductions in solution times without hindering the quality of the final decision.
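As a hedged sketch of the general idea (not the paper's specific method), the following prunes whole neurons of one feed-forward layer by weight magnitude, shrinking the network before it would be encoded into a mixed-integer program (MIP). All sizes and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# A single hidden layer: 8 inputs -> 16 hidden neurons -> 4 outputs.
w1 = rng.normal(size=(16, 8))   # hidden-layer weights (one row per neuron)
w2 = rng.normal(size=(4, 16))   # output-layer weights (one column per neuron)

# Score each hidden neuron by the L1 norm of its incoming weights and keep
# only the top half. Removing rows of w1 also removes the matching columns
# of w2, so the smaller network stays structurally consistent.
scores = np.abs(w1).sum(axis=1)
keep = np.sort(np.argsort(scores)[-8:])
w1_pruned, w2_pruned = w1[keep], w2[:, keep]

# Forward pass through the pruned network.
x = rng.normal(size=8)
hidden = np.maximum(w1_pruned @ x, 0.0)   # ReLU activation
output = w2_pruned @ hidden
print(w1_pruned.shape, w2_pruned.shape)
```

Fewer neurons mean fewer binary variables and big-M constraints in the MIP encoding, which is where the reported solution-time reductions come from.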
arXiv Detail & Related papers (2023-07-14T16:36:49Z) - A Survey on Learnable Evolutionary Algorithms for Scalable
Multiobjective Optimization [0.0]
Multiobjective evolutionary algorithms (MOEAs) have been adopted to solve various multiobjective optimization problems (MOPs).
However, these progressively improved MOEAs have not necessarily been equipped with scalable and learnable problem-solving strategies.
Different scenarios require divergent thinking to design new, powerful MOEAs that solve them effectively.
Research into learnable MOEAs that arm themselves with machine learning techniques for scaling-up MOPs has received extensive attention in the field of evolutionary computation.
arXiv Detail & Related papers (2022-06-23T08:16:01Z) - Deep Learning in Multimodal Remote Sensing Data Fusion: A Comprehensive
Review [33.40031994803646]
This survey aims to present a systematic overview in DL-based multimodal RS data fusion.
Sub-fields in the multimodal RS data fusion are reviewed in terms of to-be-fused data modalities.
The remaining challenges and potential future directions are highlighted.
arXiv Detail & Related papers (2022-05-03T09:08:16Z) - MAML is a Noisy Contrastive Learner [72.04430033118426]
Model-agnostic meta-learning (MAML) is one of the most popular and widely-adopted meta-learning algorithms nowadays.
We provide a new perspective to the working mechanism of MAML and discover that: MAML is analogous to a meta-learner using a supervised contrastive objective function.
We propose a simple but effective technique, zeroing trick, to alleviate such interference.
arXiv Detail & Related papers (2021-06-29T12:52:26Z) - Integrating Expert ODEs into Neural ODEs: Pharmacology and Disease
Progression [71.7560927415706]
The latent hybridisation model (LHM) integrates a system of expert-designed ODEs with machine-learned Neural ODEs to fully describe the dynamics of the system.
We evaluate LHM on synthetic data as well as real-world intensive care data of COVID-19 patients.
arXiv Detail & Related papers (2021-06-05T11:42:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.