AUGUR, A flexible and efficient optimization algorithm for identification of optimal adsorption sites
- URL: http://arxiv.org/abs/2409.16204v1
- Date: Tue, 24 Sep 2024 16:03:01 GMT
- Title: AUGUR, A flexible and efficient optimization algorithm for identification of optimal adsorption sites
- Authors: Ioannis Kouroudis, Poonam, Neel Misciaci, Felix Mayr, Leon Müller, Zhaosu Gu, Alessio Gagliardi,
- Abstract summary: Our model combines graph neural networks and Gaussian processes to create a flexible, efficient, symmetry-aware, translation- and rotation-invariant predictor.
It determines the optimal position of large and complicated clusters with far fewer iterations than current state-of-the-art approaches.
It does not rely on hand-crafted features and can be seamlessly employed on any molecule without any alterations.
- Score: 0.4188114563181615
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we propose a novel flexible optimization pipeline for determining the optimal adsorption sites, named AUGUR (Aware of Uncertainty Graph Unit Regression). Our model combines graph neural networks and Gaussian processes to create a flexible, efficient, symmetry-aware, translation- and rotation-invariant predictor with inbuilt uncertainty quantification. This predictor is then used as a surrogate for a data-efficient Bayesian Optimization scheme to determine the optimal adsorption positions. This pipeline determines the optimal position of large and complicated clusters with far fewer iterations than current state-of-the-art approaches. Further, it does not rely on hand-crafted features and can be seamlessly employed on any molecule without any alterations. Additionally, the pooling properties of graphs allow for the processing of molecules of different sizes by the same model. This allows the energy prediction of computationally demanding systems by a model trained on comparatively smaller and less expensive ones.
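As a rough illustration of the pipeline described above, the sketch below runs a Bayesian optimization loop in which a Gaussian process with uncertainty estimates serves as the surrogate for an expensive energy evaluation. The `embed` and `adsorption_energy` functions and the candidate grid are hypothetical placeholders (in AUGUR the embedding comes from a symmetry-aware graph neural network and the energies from ab initio calculations); this is a minimal sketch under those assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, length=1.0, var=1.0):
    # squared-exponential kernel between two sets of embeddings
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    # standard GP regression: predictive mean and standard deviation
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_train, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = rbf_kernel(X_test, X_test).diagonal() - (v ** 2).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # improvement over the current lowest energy (minimization)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def embed(position):
    # placeholder for a symmetry-aware GNN embedding of cluster + surface
    return np.asarray(position, dtype=float)

def adsorption_energy(position):
    # placeholder for the expensive ab initio energy evaluation
    return np.sin(position[0]) + 0.5 * np.cos(position[1])

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 5.0, size=(200, 2))    # candidate adsorption positions
X, y = [embed(candidates[0])], [adsorption_energy(candidates[0])]
for _ in range(15):                                   # data-efficient BO loop
    mu, sigma = gp_posterior(np.array(X), np.array(y),
                             np.array([embed(c) for c in candidates]))
    nxt = candidates[np.argmax(expected_improvement(mu, sigma, min(y)))]
    X.append(embed(nxt)); y.append(adsorption_energy(nxt))
best = int(np.argmin(y))
print("best sampled position:", X[best], "energy:", y[best])
```

The expensive evaluator is queried only where expected improvement is highest, which is what makes this kind of surrogate-driven scheme data-efficient.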
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
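As a generic illustration of the invariance idea summarized above (not this paper's construction), averaging a base kernel over a known finite symmetry group yields a Gaussian-process surrogate whose predictions, and hence the BO acquisition, are invariant to that group; the cyclic-shift group used here is an arbitrary example choice.

```python
import numpy as np

def rbf(x, y, length=1.0):
    return np.exp(-0.5 * np.sum((x - y) ** 2) / length**2)

def cyclic_shifts(x):
    # the assumed symmetry group G: all cyclic shifts of the coordinates
    return [np.roll(x, s) for s in range(len(x))]

def invariant_kernel(x, y):
    # k_G(x, y) = (1/|G|) * sum_{g in G} k(x, g.y): invariant in both arguments
    shifts = cyclic_shifts(y)
    return sum(rbf(x, gy) for gy in shifts) / len(shifts)

x = np.array([1.0, 2.0, 3.0])
y = np.roll(x, 1)                    # a symmetry-equivalent point
print(invariant_kernel(x, x), invariant_kernel(x, y))   # identical values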
arXiv Detail & Related papers (2024-10-22T12:51:46Z) - Using Deep Learning to Design High Aspect Ratio Fusion Devices [0.0]
We train a machine learning model to construct configurations with favorable confinement properties.
It is shown that optimized configurations can be generated reliably using this method.
arXiv Detail & Related papers (2024-08-31T23:28:10Z) - Fine-Tuning Adaptive Stochastic Optimizers: Determining the Optimal Hyperparameter $ε$ via Gradient Magnitude Histogram Analysis [0.7366405857677226]
We introduce a new framework based on the empirical probability density function of the loss gradient's magnitude, termed the "gradient magnitude histogram".
We propose a novel algorithm using gradient magnitude histograms to automatically estimate a refined and accurate search space for the optimal safeguard.
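A hedged sketch of the diagnostic summarized above, under the assumption that the histogram is built from per-parameter gradient magnitudes collected over a few training steps; the toy linear model, the log-spaced binning, and the percentile read-out are illustrative choices, not the paper's setup.

```python
import numpy as np
import torch

# toy regression model and optimizer whose epsilon we want to set sensibly
model = torch.nn.Linear(20, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

mags = []                                   # per-parameter |gradient| samples
for _ in range(100):
    x = torch.randn(32, 20)
    y = x.sum(dim=1, keepdim=True)
    loss = torch.nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    for p in model.parameters():
        mags.append(p.grad.abs().flatten().numpy())
    opt.step()

mags = np.concatenate(mags)
hist, edges = np.histogram(mags[mags > 0], bins=np.logspace(-10, 1, 45))
# the lower tail of the distribution indicates the scale below which the
# epsilon safeguard starts to dominate typical gradient magnitudes
print("1st-percentile gradient magnitude:", np.quantile(mags[mags > 0], 0.01))
```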
arXiv Detail & Related papers (2023-11-20T04:34:19Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for the distributed federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD)
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
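A minimal, generic sketch of zeroth-order sign-based gradient descent as summarized above: the gradient is estimated from function queries alone via random-direction finite differences, and only the sign of the estimate sets the update direction. The quadratic objective stands in for a black-box molecular score and is not from the paper.

```python
import numpy as np

def zo_sign_gd(f, x0, lr=0.05, mu=1e-3, n_dirs=20, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        grad_est = np.zeros_like(x)
        for _ in range(n_dirs):
            # random-direction finite-difference gradient estimate
            u = rng.standard_normal(x.shape)
            grad_est += (f(x + mu * u) - f(x)) / mu * u
        # only the sign of the averaged estimate sets the step direction
        x -= lr * np.sign(grad_est / n_dirs)
    return x

f = lambda x: np.sum((x - 3.0) ** 2)      # black-box objective (queries only)
x_opt = zo_sign_gd(f, x0=np.zeros(5))
print("solution:", x_opt, "objective:", f(x_opt))
```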
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Optimization-Induced Graph Implicit Nonlinear Diffusion [64.39772634635273]
We propose a new kind of graph convolution variant, called Graph Implicit Nonlinear Diffusion (GIND)
GIND implicitly has access to infinite hops of neighbors while adaptively aggregating features with nonlinear diffusion to prevent over-smoothing.
We show that the learned representation can be formalized as the minimizer of an explicit convex optimization objective.
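A generic sketch of an implicit (fixed-point) graph diffusion layer in the spirit of the summary above: node representations are defined as the equilibrium of a nonlinear diffusion map rather than by stacking a fixed number of propagation hops. The specific update rule and the contraction scaling are illustrative assumptions, not GIND's exact formulation.

```python
import numpy as np

def normalized_adjacency(A):
    A = A + np.eye(len(A))                   # add self-loops
    d = A.sum(1)
    return A / np.sqrt(np.outer(d, d))       # D^{-1/2} (A + I) D^{-1/2}

def implicit_diffusion(X, A_hat, W, iters=200, tol=1e-6):
    # node features are the fixed point of Z = tanh(A_hat @ Z @ W) + X,
    # i.e. the result of (implicitly) infinitely many diffusion hops
    Z = X.copy()
    for _ in range(iters):
        Z_new = np.tanh(A_hat @ Z @ W) + X
        if np.max(np.abs(Z_new - Z)) < tol:
            break
        Z = Z_new
    return Z_new

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.maximum(A, A.T)                        # symmetric toy adjacency
X = rng.standard_normal((6, 4))               # input node features
W = 0.1 * rng.standard_normal((4, 4))         # small scale keeps the map contractive
Z = implicit_diffusion(X, normalized_adjacency(A), W)
print("equilibrium node representations:", Z.shape)
```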
arXiv Detail & Related papers (2022-06-29T06:26:42Z) - Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
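A hedged sketch of the alternating-update structure described above: one optimizer step improves the variational approximation against a cheap neural surrogate, and the surrogate is periodically refit on fresh evaluations of the expensive model. For brevity a diagonal-Gaussian variational family stands in for the normalizing flow, and `expensive_model` is a placeholder simulator; neither is from the paper.

```python
import torch
import torch.nn as nn

def expensive_model(z):                        # stand-in for a costly simulator
    return torch.sin(z).sum(dim=1, keepdim=True)

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
mu = torch.zeros(2, requires_grad=True)        # variational parameters
log_sigma = torch.zeros(2, requires_grad=True)

opt_q = torch.optim.Adam([mu, log_sigma], lr=1e-2)
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
memory_z, memory_y = [], []
y_obs = torch.tensor([[0.8]])                  # "observed" data to fit

for step in range(500):
    # (a) update the variational approximation against the cheap surrogate
    eps = torch.randn(64, 2)
    z = mu + log_sigma.exp() * eps             # reparameterized samples
    log_lik = -((surrogate(z) - y_obs) ** 2).mean()
    entropy = log_sigma.sum()                  # Gaussian entropy up to a constant
    loss_q = -(log_lik + entropy)              # negative ELBO-style objective
    opt_q.zero_grad(); loss_q.backward(); opt_q.step()

    # (b) every 25 steps, query the expensive model and refit the surrogate
    if step % 25 == 0:
        z_new = z.detach()[:8]
        memory_z.append(z_new); memory_y.append(expensive_model(z_new))
        Z = torch.cat(memory_z); Y = torch.cat(memory_y)
        for _ in range(50):
            loss_s = ((surrogate(Z) - Y) ** 2).mean()
            opt_s.zero_grad(); loss_s.backward(); opt_s.step()

print("approximate posterior mean:", mu.detach(), "std:", log_sigma.exp().detach())
```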
arXiv Detail & Related papers (2021-08-28T14:31:45Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms, highlighting the importance of model choice.
Addressing analytical tractability and computational feasibility in a principled fashion is necessary to ensure efficiency and applicability.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.