Molecular dynamics without molecules: searching the conformational space
of proteins with generative neural networks
- URL: http://arxiv.org/abs/2206.04683v1
- Date: Thu, 9 Jun 2022 02:06:43 GMT
- Title: Molecular dynamics without molecules: searching the conformational space
of proteins with generative neural networks
- Authors: Gregory Schwing, Luigi L. Palese, Ariel Fernández, Loren Schwiebert,
Domenico L. Gatti
- Abstract summary: All-atom and coarse-grained molecular dynamics are widely used to study the conformational states of proteins.
All-atom and coarse-grained simulation methods suffer from the fact that without access to supercomputing resources, the time and length scales at which these states become detectable are difficult to achieve.
One alternative is based on encoding the atomistic trajectory of molecular dynamics as a shorthand version devoid of physical particles, and then learning to propagate the encoded trajectory through the use of artificial intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: All-atom and coarse-grained molecular dynamics are two widely used
computational tools to study the conformational states of proteins. Yet, these
two simulation methods suffer from the fact that without access to
supercomputing resources, the time and length scales at which these states
become detectable are difficult to achieve. One alternative to such methods is
based on encoding the atomistic trajectory of molecular dynamics as a shorthand
version devoid of physical particles, and then learning to propagate the
encoded trajectory through the use of artificial intelligence. Here we show
that a simple textual representation of the frames of molecular dynamics
trajectories as vectors of Ramachandran basin classes retains most of the
structural information of the full atomistic representation of a protein in
each frame, and can be used to generate equivalent atom-less trajectories
suitable for training different types of generative neural networks. In turn,
the trained generative models can be used to extend the atom-less dynamics
indefinitely or to sample the conformational space of proteins from their
representation in the models' latent space. We intuitively define this
methodology as molecular dynamics without molecules, and show that it makes it
possible to cover physically relevant states of proteins that are difficult to
access with traditional molecular dynamics.
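As a concrete illustration of the encoding step described in the abstract, the sketch below maps the backbone (phi, psi) dihedrals of a single trajectory frame to a per-residue string of Ramachandran basin labels. The basin boundaries, label names, and helper functions are assumptions made for this sketch only; they are not the authors' exact partition or code.

```python
# Minimal sketch (not the authors' code): encode one MD frame as a string of
# per-residue Ramachandran basin labels from its backbone (phi, psi) dihedrals.
# The basin boundaries and labels below are illustrative approximations of the
# classical Ramachandran regions, not the exact partition used in the paper.
import numpy as np

def basin_class(phi: float, psi: float) -> str:
    """Assign a coarse Ramachandran basin label to a (phi, psi) pair in degrees."""
    if phi < 0 and (psi > 90 or psi < -150):
        return "B"  # beta / extended basin
    if phi < 0 and -120 < psi <= 50:
        return "A"  # right-handed alpha basin
    if phi > 0 and -50 < psi < 100:
        return "L"  # left-handed alpha basin
    return "O"      # everything else

def encode_frame(phi_psi: np.ndarray) -> str:
    """Turn an (n_residues, 2) array of dihedrals into a basin-class string."""
    return "".join(basin_class(phi, psi) for phi, psi in phi_psi)

# Toy usage: (phi, psi) angles of three residues, in degrees.
frame = np.array([[-120.0, 130.0],   # extended
                  [-60.0, -45.0],    # helical
                  [60.0, 40.0]])     # left-handed
print(encode_frame(frame))           # -> "BAL"
```

Applied frame by frame, such an encoding yields an atom-less, token-like trajectory that can be used to train sequence or generative models, which is the representation the abstract refers to as molecular dynamics without molecules.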
Related papers
- Towards Predicting Equilibrium Distributions for Molecular Systems with
Deep Learning [60.02391969049972]
We introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems.
DiG employs deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system.
arXiv Detail & Related papers (2023-06-08T17:12:08Z)
- MUDiff: Unified Diffusion for Complete Molecule Generation [104.7021929437504]
We present a new model for generating a comprehensive representation of molecules, including atom features, 2D discrete molecule structures, and 3D continuous molecule coordinates.
We propose a novel graph transformer architecture to denoise the diffusion process.
Our model is a promising approach for designing stable and diverse molecules and can be applied to a wide range of tasks in molecular modeling.
arXiv Detail & Related papers (2023-04-28T04:25:57Z)
- Modeling Molecular Structures with Intrinsic Diffusion Models [2.487445341407889]
This thesis proposes Intrinsic Diffusion Modeling.
It combines diffusion generative models with scientific knowledge about the flexibility of biological complexes.
We demonstrate the effectiveness of this approach on two fundamental tasks at the basis of computational chemistry and biology.
arXiv Detail & Related papers (2023-02-23T03:26:48Z)
- Two for One: Diffusion Models and Force Fields for Coarse-Grained Molecular Dynamics [15.660348943139711]
We leverage connections between score-based generative models, force fields and molecular dynamics to learn a CG force field without requiring any force inputs during training.
Despite a vastly simplified training setup compared to previous work, we demonstrate that our approach leads to improved performance across several small- to medium-sized protein simulations.
arXiv Detail & Related papers (2023-02-01T17:09:46Z)
- DiffBP: Generative Diffusion of 3D Molecules for Target Protein Binding [51.970607704953096]
Previous works usually generate atoms in an auto-regressive way, where element types and 3D coordinates of atoms are generated one by one.
In real-world molecular systems, the interactions among atoms in an entire molecule are global, so the energy function is pairwise coupled across all atoms.
In this work, a generative diffusion model for molecular 3D structures based on target proteins is established, at a full-atom level in a non-autoregressive way.
arXiv Detail & Related papers (2022-11-21T07:02:15Z)
- A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language [63.60376252491507]
We propose a molecular multimodal foundation model which is pretrained from molecular graphs and their semantically related textual data.
We believe that our model would have a broad impact on AI-empowered fields across disciplines such as biology, chemistry, materials, environment, and medicine.
arXiv Detail & Related papers (2022-09-12T00:56:57Z)
- Scalable Fragment-Based 3D Molecular Design with Reinforcement Learning [68.8204255655161]
We introduce a novel framework for scalable 3D design that uses a hierarchical agent to build molecules.
In a variety of experiments, we show that our agent, guided only by energy considerations, can efficiently learn to produce molecules with over 100 atoms.
arXiv Detail & Related papers (2022-02-01T18:54:24Z)
- Molecular CT: Unifying Geometry and Representation Learning for Molecules at Different Scales [3.987395340580183]
A new deep neural network architecture, Molecular Configuration Transformer (Molecular CT), is introduced for this purpose.
The computational efficiency and universality make Molecular CT versatile for a variety of molecular learning scenarios.
As examples, we show that Molecular CT enables representational learning for molecular systems at different scales, and achieves comparable or improved results on common benchmarks.
arXiv Detail & Related papers (2020-12-22T03:41:16Z)
- Learning Latent Space Energy-Based Prior Model for Molecule Generation [59.875533935578375]
We learn latent space energy-based prior model with SMILES representation for molecule modeling.
Our method is able to generate molecules with validity and uniqueness competitive with state-of-the-art models.
arXiv Detail & Related papers (2020-10-19T09:34:20Z)
- Learning a Continuous Representation of 3D Molecular Structures with Deep Generative Models [0.0]
Generative models are an entirely different approach that learn to represent and optimize molecules in a continuous latent space.
We describe deep generative models of three dimensional molecular structures using atomic density grids.
We are also able to sample diverse sets of molecules based on a given input compound to increase the probability of creating valid, drug-like molecules.
arXiv Detail & Related papers (2020-10-17T01:15:47Z)
- End-to-End Differentiable Molecular Mechanics Force Field Construction [0.5269923665485903]
We propose an alternative approach that uses graph neural networks to perceive chemical environments.
The entire process is modular and end-to-end differentiable with respect to model parameters.
We show that this approach is not only sufficient to reproduce legacy atom types, but that it can learn to accurately reproduce and extend existing molecular mechanics force fields.
arXiv Detail & Related papers (2020-10-02T20:59:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.