Solving deep-learning density functional theory via variational autoencoders
- URL: http://arxiv.org/abs/2403.09788v1
- Date: Thu, 14 Mar 2024 18:11:50 GMT
- Title: Solving deep-learning density functional theory via variational autoencoders
- Authors: Emanuele Costa, Giuseppe Scriva, Sebastiano Pilati
- Abstract summary: In recent years, machine learning models have proven well suited to learning accurate energy-density functionals from data.
In this article, we employ variational autoencoders to build a compressed, flexible, and regular representation of the ground-state density profiles of various quantum models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, machine learning models, chiefly deep neural networks, have proven well suited to learning accurate energy-density functionals from data. However, problematic instabilities have been shown to occur in the search for ground-state density profiles via energy minimization. Indeed, even small noise can drive the search away from realistic profiles, causing the failure of the learned functional and, hence, strong violations of the variational property. In this article, we employ variational autoencoders to build a compressed, flexible, and regular representation of the ground-state density profiles of various quantum models. Performing energy minimization in this compressed space allows us to avoid both numerical instabilities and variational biases due to excessive constraints. Our tests are performed on one-dimensional single-particle models from the literature in the field and, notably, on a three-dimensional disordered potential. In all cases, the ground-state energies are estimated with errors below the chemical accuracy, and the density profiles are accurately reproduced without numerical artifacts.
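The core idea of the abstract, minimizing a learned energy functional over a low-dimensional latent representation of the density rather than over the raw profile, can be illustrated with a toy sketch. This is not the authors' code: the "decoder" is a frozen random linear map standing in for a trained VAE decoder, and the "energy functional" is a simple squared distance to a target profile standing in for a learned neural functional.

```python
import numpy as np

# Toy sketch (not the authors' code): energy minimization in a VAE-like
# latent space. A hypothetical frozen decoder maps a low-dimensional latent
# vector z to a density profile; gradient descent is performed on z rather
# than on the high-dimensional profile, which regularizes the search.

rng = np.random.default_rng(0)
latent_dim, grid_points = 4, 64

# Hypothetical decoder weights (stand-in for a trained VAE decoder).
W = rng.normal(scale=0.5, size=(grid_points, latent_dim))

def decode(z):
    """Map a latent vector to a positive, normalized density profile."""
    rho = np.exp(W @ z)          # exponential keeps the density positive
    return rho / rho.sum()       # normalize to unit particle number

# Hypothetical target profile; the squared distance to it is a surrogate
# for the learned energy functional (a neural network in the paper).
rho_target = decode(rng.normal(size=latent_dim))

def energy(z):
    return np.sum((decode(z) - rho_target) ** 2)

def grad_energy(z, eps=1e-6):
    """Finite-difference gradient in the *latent* space (only 4 dims)."""
    g = np.zeros_like(z)
    for i in range(len(z)):
        dz = np.zeros_like(z)
        dz[i] = eps
        g[i] = (energy(z + dz) - energy(z - dz)) / (2 * eps)
    return g

z = np.zeros(latent_dim)
for _ in range(500):
    z -= 5.0 * grad_energy(z)    # plain gradient descent on z

print(f"final surrogate energy: {energy(z):.3e}")
```

Because every decoded profile is positive and normalized by construction, the minimization cannot wander into the unphysical density profiles that destabilize direct minimization over grid values.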
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [93.32594873253534]
Trustworthy machine learning requires meticulous regulation of model reliance on non-robust features.
We propose a framework to delineate and regulate such features by attributing model predictions to the input.
arXiv Detail & Related papers (2024-07-05T09:16:56Z) - Physics-Enhanced Multi-fidelity Learning for Optical Surface Imprint [1.0878040851638]
We propose a novel method to use multi-fidelity neural networks (MFNN) to solve this inverse problem.
We build up the NN model via pure simulation data, and then bridge the sim-to-real gap via transfer learning.
Considering the difficulty of collecting real experimental data, we use NN to dig out the unknown physics and also implant the known physics into the transfer learning framework.
arXiv Detail & Related papers (2023-11-17T01:55:15Z) - Accurate machine learning force fields via experimental and simulation data fusion [0.0]
Machine Learning (ML)-based force fields are attracting ever-increasing interest due to their capacity to span scales of classical interatomic potentials at quantum-level accuracy.
Here we leverage both Density Functional Theory (DFT) calculations and experimentally measured mechanical properties and lattice parameters to train an ML potential of titanium.
We demonstrate that the fused data learning strategy can concurrently satisfy all target objectives, thus resulting in a molecular model of higher accuracy compared to the models trained with a single source data.
arXiv Detail & Related papers (2023-08-17T18:22:19Z) - Investigation of the Robustness of Neural Density Fields [7.67602635520562]
This work investigates neural density fields and their relative errors in the context of robustness to external factors like noise or constraints during training.
It is found that both models trained on a polyhedral and mascon ground truth perform similarly, indicating that the ground truth is not the accuracy bottleneck.
arXiv Detail & Related papers (2023-05-31T09:43:49Z) - End-To-End Latent Variational Diffusion Models for Inverse Problems in High Energy Physics [61.44793171735013]
We introduce a novel unified architecture, termed latent variational diffusion models, which combines the latent learning of cutting-edge generative art approaches with an end-to-end variational framework.
Our unified approach achieves a distribution-free distance to the truth over 20 times smaller than that of the non-latent state-of-the-art baseline.
arXiv Detail & Related papers (2023-05-17T17:43:10Z) - KineticNet: Deep learning a transferable kinetic energy functional for orbital-free density functional theory [13.437597619451568]
KineticNet is an equivariant deep neural network architecture based on point convolutions adapted to the prediction of quantities on molecular quadrature grids.
For the first time, chemical accuracy of the learned functionals is achieved across input densities and geometries of tiny molecules.
arXiv Detail & Related papers (2023-05-08T17:43:31Z) - Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing pressure by controlling injection/extraction is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
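The control loop described in this snippet, tuning extraction rates by differentiating a physics model with respect to them, can be sketched with a deliberately tiny example. All quantities here are made up: a one-cell linear "reservoir model" stands in for the full-physics simulator, and its gradient is written by hand in closed form, where autodiff would supply it for a real simulator.

```python
# Toy sketch (all numbers hypothetical): gradient-based tuning of a fluid
# extraction rate so reservoir pressure stays below a safety cap.

P0, K = 10.0, 2.0          # initial pressure, injectivity constant (made up)
Q_IN, P_CAP = 3.0, 12.0    # fixed injection rate, pressure safety cap

def pressure(q_out):
    """Steady-state pressure of the one-cell toy reservoir model."""
    return P0 + K * (Q_IN - q_out)

def loss(q_out):
    """Penalize over-pressurization plus a small cost for extraction."""
    overshoot = max(pressure(q_out) - P_CAP, 0.0)
    return overshoot ** 2 + 0.01 * q_out ** 2

def grad(q_out):
    """Hand-derived gradient; autodiff would provide this for a real model."""
    overshoot = max(pressure(q_out) - P_CAP, 0.0)
    return 2 * overshoot * (-K) + 0.02 * q_out

q = 0.0                    # start with no extraction -> pressure above cap
for _ in range(200):
    q -= 0.05 * grad(q)    # gradient descent on the extraction rate

print(f"extraction rate {q:.2f}, pressure {pressure(q):.2f}")
```

Starting from no extraction, the pressure (16.0) exceeds the cap, and the loop raises the extraction rate until the pressure settles just at the cap, the same trade-off the paper solves with a differentiable full-physics model.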
arXiv Detail & Related papers (2022-06-21T20:38:13Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - A multiconfigurational study of the negatively charged nitrogen-vacancy center in diamond [55.58269472099399]
Deep defects in wide band gap semiconductors have emerged as leading qubit candidates for realizing quantum sensing and information applications.
Here we show that unlike single-particle treatments, the multiconfigurational quantum chemistry methods, traditionally reserved for atoms/molecules, accurately describe the many-body characteristics of the electronic states of these defect centers.
arXiv Detail & Related papers (2020-08-24T01:49:54Z) - A Probability Density Theory for Spin-Glass Systems [0.0]
We develop a continuous probability density theory for spin-glass systems with arbitrary dimensions, interactions, and local fields.
We show how our formulation geometrically encodes key physical and computational properties of the spin-glass model.
We apply our formalism to a number of spin-glass models, including the Sherrington-Kirkpatrick (SK) model, spins on random Erdős-Rényi graphs, and restricted Boltzmann machines.
arXiv Detail & Related papers (2020-01-03T18:43:35Z)
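For concreteness, the Sherrington-Kirkpatrick model mentioned in the snippet above is N Ising spins s_i = ±1 with all-to-all Gaussian couplings J_ij and energy E(s) = -Σ_{i<j} J_ij s_i s_j. The following sketch (my own illustration, not the paper's continuous density formulation) builds one SK instance and finds a low-energy state by greedy single-spin flips.

```python
import numpy as np

# Toy Sherrington-Kirkpatrick (SK) instance: N Ising spins with Gaussian
# couplings J_ij ~ N(0, 1/N) and energy E(s) = -sum_{i<j} J_ij s_i s_j.
# Greedy single-spin-flip descent finds a local energy minimum; the
# paper's continuous formulation is a far more sophisticated treatment.

rng = np.random.default_rng(1)
N = 50
J = np.triu(rng.normal(scale=1 / np.sqrt(N), size=(N, N)), 1)  # keep i < j

def energy(s):
    return -s @ J @ s

s = rng.choice([-1, 1], size=N)   # random initial spin configuration
e = energy(s)
improved = True
while improved:                   # flip spins while any flip lowers E
    improved = False
    for i in range(N):
        s[i] = -s[i]
        e_new = energy(s)
        if e_new < e:
            e = e_new
            improved = True
        else:
            s[i] = -s[i]          # flip back if it did not help

print(f"local-minimum energy per spin: {e / N:.3f}")
```

The loop is guaranteed to terminate because the energy strictly decreases at each accepted flip and the state space is finite; the resulting energy per spin sits near, but above, the SK ground-state value of about -0.76.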