Bayesian optimization of atomic structures with prior probabilities from universal interatomic potentials
- URL: http://arxiv.org/abs/2408.15590v2
- Date: Mon, 18 Nov 2024 10:17:25 GMT
- Title: Bayesian optimization of atomic structures with prior probabilities from universal interatomic potentials
- Authors: Peder Lyngby, Casper Larsen, Karsten Wedel Jacobsen
- Abstract summary: The optimization of atomic structures plays a pivotal role in understanding and designing materials with desired properties.
Recent advancements in machine learning-driven surrogate models offer a promising avenue for alleviating this computational burden.
We propose a novel approach that combines the strengths of universal machine learning potentials with a Bayesian approach.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The optimization of atomic structures plays a pivotal role in understanding and designing materials with desired properties. However, conventional computational methods often struggle with the formidable task of navigating the vast potential energy surface, especially in high-dimensional spaces with numerous local minima. Recent advancements in machine learning-driven surrogate models offer a promising avenue for alleviating this computational burden. In this study, we propose a novel approach that combines the strengths of universal machine learning potentials with a Bayesian approach using Gaussian processes. By using the machine learning potentials as priors for the Gaussian process, the Gaussian process has to learn only the difference between the machine learning potential and the target energy surface calculated, for example, by density functional theory. This turns out to improve the speed with which the global optimal structure is identified across diverse systems for a well-behaved machine learning potential. The approach is tested on periodic bulk materials, surface structures, and a cluster.
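The abstract's central idea can be illustrated in one dimension: put the machine learning potential in as the Gaussian process prior mean, so the GP only has to regress the (typically smooth and small) residual between that prior and the expensive target energy. The sketch below is a minimal toy, not the paper's implementation; `dft_energy` and `mlp_energy` are illustrative stand-ins for a DFT calculation and a universal ML potential.

```python
# Toy 1D sketch: GP regression with an ML potential as the prior mean,
# so the GP only learns the residual to the target (DFT-like) energy.
# dft_energy and mlp_energy are illustrative stand-ins, not real models.
import numpy as np

def dft_energy(x):
    """Stand-in for an expensive target calculation (e.g. DFT)."""
    return np.sin(3 * x) + 0.1 * x**2

def mlp_energy(x):
    """Stand-in universal ML potential: close, but misses the 0.1*x^2 term."""
    return np.sin(3 * x)

def rbf(a, b, length=0.5):
    """Squared-exponential covariance matrix between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

x_train = np.array([-1.5, -0.5, 0.4, 1.2])  # a few expensive evaluations
x_test = np.linspace(-1.5, 1.2, 200)

K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))  # jitter for stability
k_star = rbf(x_test, x_train)

# GP with the ML potential as prior mean: regress only the residual.
residual = dft_energy(x_train) - mlp_energy(x_train)
pred = mlp_energy(x_test) + k_star @ np.linalg.solve(K, residual)
err = np.max(np.abs(pred - dft_energy(x_test)))

# Baseline: a zero-mean GP must learn the full surface from the same data.
pred0 = k_star @ np.linalg.solve(K, dft_energy(x_train))
err0 = np.max(np.abs(pred0 - dft_energy(x_test)))

print(f"max error with MLP prior: {err:.3f}, with zero prior: {err0:.3f}")
```

Because the residual here is a smooth quadratic while the full surface oscillates, four samples suffice for the prior-mean GP but not for the zero-mean baseline, mirroring the speedup the abstract reports for well-behaved potentials.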
Related papers
- Leveraging Discrete Function Decomposability for Scientific Design [48.365465744654365]
In the era of AI-driven science and engineering, we often want to design objects in silico according to user-specified properties.
For example, we may wish to design a protein to bind its target, arrange components within a circuit to minimize latency, or find materials with certain properties.
We propose and demonstrate the use of a new distributional optimization algorithm, De-Aware Distributional Optimization (DADO), that can leverage any decomposability defined by a junction tree on the design variables.
arXiv Detail & Related papers (2025-11-04T21:57:51Z)
- Self-Optimizing Machine Learning Potential Assisted Automated Workflow for Highly Efficient Complex Systems Material Design [12.596168538414512]
We propose an automated crystal structure prediction framework built upon the attention-coupled neural networks potential.
The generalizability of the potential is achieved by sampling regions across the local minima of the potential energy surface.
The workflow is validated on Mg-Ca-H ternary and Be-P-N-O quaternary systems by exploring nearly 10 million configurations.
arXiv Detail & Related papers (2025-05-13T01:34:34Z) - Erwin: A Tree-based Hierarchical Transformer for Large-scale Physical Systems [48.984420422430404]
We present Erwin, a hierarchical transformer inspired by methods from computational many-body physics.
We demonstrate Erwin's effectiveness across multiple domains, including cosmology, molecular dynamics, PDE solving, and particle fluid dynamics.
arXiv Detail & Related papers (2025-02-24T10:16:55Z)
- Deep Generalized Schrödinger Bridges: From Image Generation to Solving Mean-Field Games [29.570545100557215]
Generalized Schrödinger Bridges (GSBs) are a mathematical framework used to analyze the most likely particle evolution.
This paper focuses on an algorithmic perspective, aiming to enhance practical usage.
arXiv Detail & Related papers (2024-12-28T21:31:53Z)
- A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation.
However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency.
This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z)
- Neutron-nucleus dynamics simulations for quantum computers [49.369935809497214]
We develop a novel quantum algorithm for neutron-nucleus simulations with general potentials.
It provides acceptable bound-state energies even in the presence of noise, through the noise-resilient training method.
We introduce a new commutativity scheme called distance-grouped commutativity (DGC) and compare its performance with the well-known qubit-commutativity scheme.
arXiv Detail & Related papers (2024-02-22T16:33:48Z)
- Symmetry-invariant quantum machine learning force fields [0.0]
We design quantum neural networks that explicitly incorporate, as a data-inspired prior, an extensive set of physically relevant symmetries.
Our results suggest that molecular force fields generation can significantly profit from leveraging the framework of geometric quantum machine learning.
arXiv Detail & Related papers (2023-11-19T16:15:53Z)
- Advances in machine-learning-based sampling motivated by lattice quantum chromodynamics [4.539861642583362]
This Perspective outlines the advances in ML-based sampling motivated by lattice quantum field theory.
The design of ML algorithms for this application faces profound challenges, including the necessity of scaling custom ML architectures to the largest supercomputers.
If this approach can realize its early promise it will be a transformative step towards first-principles physics calculations in particle, nuclear and condensed matter physics.
arXiv Detail & Related papers (2023-09-03T12:25:59Z)
- Higher-order topological kernels via quantum computation [68.8204255655161]
Topological data analysis (TDA) has emerged as a powerful tool for extracting meaningful insights from complex data.
We propose a quantum approach to defining Betti kernels, which is based on constructing Betti curves with increasing order.
arXiv Detail & Related papers (2023-07-14T14:48:52Z)
- Deep Kernel Methods Learn Better: From Cards to Process Optimization [0.7587345054583298]
We show that DKL with active learning can produce a more compact and smooth latent space.
We demonstrate this behavior using a simple cards data set and extend it to the optimization of domain-generated trajectories in physical systems.
arXiv Detail & Related papers (2023-03-25T20:21:29Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- GENEOnet: A new machine learning paradigm based on Group Equivariant Non-Expansive Operators. An application to protein pocket detection [97.5153823429076]
We introduce a new computational paradigm based on Group Equivariant Non-Expansive Operators.
We test our method, called GENEOnet, on a key problem in drug design: detecting pockets on the surface of proteins that can host ligands.
arXiv Detail & Related papers (2022-01-31T11:14:51Z)
- Gaussian Moments as Physically Inspired Molecular Descriptors for Accurate and Scalable Machine Learning Potentials [0.0]
We propose a machine learning method for constructing high-dimensional potential energy surfaces based on feed-forward neural networks.
The accuracy of the developed approach in representing both chemical and configurational spaces is comparable to that of several established machine learning models.
arXiv Detail & Related papers (2021-09-15T16:46:46Z)
- Efficient construction of tensor-network representations of many-body Gaussian states [59.94347858883343]
We present a procedure to construct tensor-network representations of many-body Gaussian states efficiently and with a controllable error.
These states include the ground and thermal states of bosonic and fermionic quadratic Hamiltonians, which are essential in the study of quantum many-body systems.
arXiv Detail & Related papers (2020-08-12T11:30:23Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
- Matérn Gaussian processes on Riemannian manifolds [81.15349473870816]
We show how to generalize the widely-used Matérn class of Gaussian processes.
We also extend the generalization from the Matérn to the widely-used squared exponential process.
arXiv Detail & Related papers (2020-06-17T21:05:42Z)
- Machine Learning Enabled Discovery of Application Dependent Design Principles for Two-dimensional Materials [1.1470070927586016]
We train an ensemble of models to predict thermodynamic, mechanical, and electronic properties.
We carry out a screening of nearly 45,000 structures for two largely disjoint applications.
We find that hybrid organic-inorganic perovskites with lead and tin tend to be good candidates for solar cell applications.
arXiv Detail & Related papers (2020-03-19T23:13:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including this list) and is not responsible for any consequences of its use.