Gradual Optimization Learning for Conformational Energy Minimization
- URL: http://arxiv.org/abs/2311.06295v2
- Date: Tue, 12 Mar 2024 07:36:05 GMT
- Title: Gradual Optimization Learning for Conformational Energy Minimization
- Authors: Artem Tsypin, Leonid Ugadiarov, Kuzma Khrabrov, Alexander Telepov,
Egor Rumiantsev, Alexey Skrynnik, Aleksandr I. Panov, Dmitry Vetrov, Elena
Tutubalina and Artur Kadurin
- Abstract summary: Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the required additional data.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
- Score: 69.36925478047682
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Molecular conformation optimization is crucial to computer-aided drug
discovery and materials design. Traditional energy minimization techniques rely
on iterative optimization methods that use molecular forces calculated by a
physical simulator (oracle) as anti-gradients. However, this is a
computationally expensive approach that requires many interactions with a
physical simulator. One way to accelerate this procedure is to replace the
physical simulator with a neural network. Despite recent progress in neural
networks for molecular conformation energy prediction, such models are prone to
distribution shift, leading to inaccurate energy minimization. We find that the
quality of energy minimization with neural networks can be improved by
providing optimization trajectories as additional training data. Still, it
takes around $5 \times 10^5$ additional conformations to match the physical
simulator's optimization quality. In this work, we present the Gradual
Optimization Learning Framework (GOLF) for energy minimization with neural
networks that significantly reduces the required additional data. The framework
consists of an efficient data-collecting scheme and an external optimizer. The
external optimizer utilizes gradients from the energy prediction model to
generate optimization trajectories, and the data-collecting scheme selects
additional training data to be processed by the physical simulator. Our results
demonstrate that the neural network trained with GOLF performs on par with the
oracle on a benchmark of diverse drug-like molecules using $50$x less
additional data.
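The loop described in the abstract can be sketched in miniature. The code below is a hypothetical illustration, not the authors' implementation: a toy quadratic "oracle" stands in for the physical simulator, a one-parameter force model stands in for the neural network potential, the external optimizer is a plain gradient step, and the data-collecting scheme is a simple force-norm trigger. All function and parameter names (`golf_minimize`, `oracle_budget`, `tol`) are invented for this sketch.

```python
# Hypothetical GOLF-style loop (illustrative names, not the authors' API).
# Oracle: E(x) = |x|^2 with forces -grad E = -2x. Surrogate: forces scaled
# by a single learnable parameter theta, "retrained" by moment matching.
import numpy as np

def oracle_energy_and_forces(x):
    """Physical-simulator stand-in: energy |x|^2 and forces -2x."""
    return float(x @ x), -2.0 * x

def surrogate_forces(x, theta):
    """Toy learned force field: theta * (-2x); theta=1 matches the oracle."""
    return theta * (-2.0 * x)

def golf_minimize(x0, theta, oracle_budget=5, steps=50, lr=0.1, tol=0.5):
    """Follow surrogate forces; query the oracle only for conformations the
    data-collecting scheme flags (here: large predicted force norm)."""
    x = x0.copy()
    extra_data = []  # conformations sent to the oracle for labeling
    for _ in range(steps):
        f = surrogate_forces(x, theta)
        x_new = x + lr * f  # external optimizer: plain gradient step
        if oracle_budget > 0 and np.linalg.norm(f) > tol:
            e, f_true = oracle_energy_and_forces(x_new)
            extra_data.append((x_new.copy(), e, f_true))
            oracle_budget -= 1
            # "Retrain": pull theta toward the ratio of true to unit-theta
            # surrogate forces (a stand-in for a gradient update on the NN).
            denom = surrogate_forces(x_new, 1.0)
            mask = np.abs(denom) > 1e-12
            if mask.any():
                theta = 0.5 * theta + 0.5 * float(
                    np.mean(f_true[mask] / denom[mask]))
        x = x_new
    return x, theta, extra_data
```

Running `golf_minimize(np.array([3.0, -2.0]), theta=0.3)` drives the oracle energy down while consuming at most `oracle_budget` simulator calls; the real framework applies the same pattern with neural network potentials, a DFT-level oracle, and a proper optimizer in place of the gradient step.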
Related papers
- Metamizer: a versatile neural optimizer for fast and accurate physics simulations [4.717325308876749]
We introduce Metamizer, a novel neural network that iteratively solves a wide range of physical systems with high accuracy.
We demonstrate that Metamizer achieves unprecedented accuracy for deep learning based approaches.
Our results suggest that Metamizer could have a profound impact on future numerical solvers.
arXiv Detail & Related papers (2024-10-10T11:54:31Z)
- Sparks of Quantum Advantage and Rapid Retraining in Machine Learning [0.0]
In this study, we optimize a powerful neural network architecture for representing complex functions with minimal parameters.
We introduce rapid retraining capability, enabling the network to be retrained with new data without reprocessing old samples.
Our findings suggest that with further advancements in quantum hardware and algorithm optimization, quantum-optimized machine learning models could have broad applications.
arXiv Detail & Related papers (2024-07-22T19:55:44Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Physics Informed Piecewise Linear Neural Networks for Process Optimization [0.0]
The paper proposes augmenting piecewise-linear neural network models with physics-informed knowledge for optimization problems that embed neural network models.
For all cases, physics-informed trained neural network based optimal results are closer to global optimality.
arXiv Detail & Related papers (2023-02-02T10:14:54Z)
- HOAX: A Hyperparameter Optimization Algorithm Explorer for Neural Networks [0.0]
The bottleneck for trajectory-based methods to study photoinduced processes is still the huge number of electronic structure calculations.
We present an innovative solution, in which the amount of electronic structure calculations is drastically reduced, by employing machine learning algorithms and methods borrowed from the realm of artificial intelligence.
arXiv Detail & Related papers (2023-02-01T11:12:35Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- Enhanced data efficiency using deep neural networks and Gaussian processes for aerodynamic design optimization [0.0]
Adjoint-based optimization methods are attractive for aerodynamic shape design but can become prohibitively expensive when multiple optimization problems are being solved.
We propose a machine learning enabled, surrogate-based framework that replaces the expensive adjoint solver.
arXiv Detail & Related papers (2020-08-15T15:09:21Z)
- Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning [56.83172249278467]
We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces.
We train and validate our approach directly on the Intel NNP-I chip for inference.
We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads.
arXiv Detail & Related papers (2020-07-14T18:50:12Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with direct methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.