Machine-learning Kondo physics using variational autoencoders
- URL: http://arxiv.org/abs/2107.08013v1
- Date: Fri, 16 Jul 2021 17:03:58 GMT
- Title: Machine-learning Kondo physics using variational autoencoders
- Authors: Cole Miles, Matthew R. Carbone, Erica J. Sturm, Deyu Lu, Andreas
Weichselbaum, Kipton Barros, and Robert M. Konik
- Abstract summary: We employ variational autoencoders to extract insight from a dataset of one-particle Anderson impurity model spectral functions.
We find that the learned latent space components strongly correlate with well known, but nontrivial, parameters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We employ variational autoencoders to extract physical insight from a dataset
of one-particle Anderson impurity model spectral functions. Autoencoders are
trained to find a low-dimensional, latent space representation that faithfully
characterizes each element of the training set, as measured by a reconstruction
error. Variational autoencoders, a probabilistic generalization of standard
autoencoders, further condition the learned latent space to promote highly
interpretable features. In our study, we find that the learned latent space
components strongly correlate with well known, but nontrivial, parameters that
characterize emergent behaviors in the Anderson impurity model. In particular,
one latent space component correlates with particle-hole asymmetry, while
another is in near one-to-one correspondence with the Kondo temperature, a
dynamically generated low-energy scale in the impurity model. With symbolic
regression, we model this component as a function of bare physical input
parameters and "rediscover" the non-perturbative formula for the Kondo
temperature. The machine learning pipeline we develop opens opportunities to
discover new domain knowledge in other physical systems.
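As a rough illustration of the kind of pipeline the abstract describes (not the authors' actual implementation), the sketch below shows a minimal variational autoencoder that compresses discretized spectral functions A(omega), sampled on a fixed frequency grid, into a low-dimensional latent space. The grid size, hidden widths, latent dimension, and beta weight on the KL term are all assumptions made for the example.

```python
# Minimal VAE sketch for spectral functions A(omega) on a fixed frequency grid.
# Architecture sizes, latent_dim, and beta are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpectralVAE(nn.Module):
    def __init__(self, n_omega=256, latent_dim=2, hidden=128):
        super().__init__()
        # Encoder maps a discretized spectral function to the parameters
        # (mean, log-variance) of a Gaussian over the latent space.
        self.encoder = nn.Sequential(
            nn.Linear(n_omega, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        # Decoder reconstructs the spectral function from a latent sample.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_omega), nn.Softplus(),  # enforce A(omega) >= 0
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    # Reconstruction error plus the KL term that conditions the latent space.
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Example training step on a batch of spectral functions (shape [B, n_omega]).
model = SpectralVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 256)  # placeholder batch; real data would be AIM spectra
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar)
opt.zero_grad()
loss.backward()
opt.step()
```

After training, the learned latent coordinates of each spectrum can be correlated against known physical quantities and, as in the abstract, fed to symbolic regression against the bare model parameters. For context on the "rediscovery" claim, a standard non-perturbative (Haldane-type) estimate of the Kondo temperature of the single-impurity Anderson model, in terms of the bare parameters (on-site repulsion U, impurity level energy epsilon_d, hybridization Gamma), has the form below; prefactor conventions vary, and the precise expression targeted in the paper may differ.

```latex
% Haldane-type estimate of the Kondo temperature for the single-impurity
% Anderson model (prefactor conventions vary across the literature).
T_K \sim \sqrt{\frac{U\Gamma}{2}}\,
      \exp\!\left[\frac{\pi\,\varepsilon_d\,(\varepsilon_d + U)}{2\,U\,\Gamma}\right],
\qquad -U < \varepsilon_d < 0 .
```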
Related papers
- Adversarial Disentanglement by Backpropagation with Physics-Informed Variational Autoencoder [0.0]
Inference and prediction under partial knowledge of a physical system is challenging.
We propose a physics-informed variational autoencoder architecture that combines the interpretability of physics-based models with the flexibility of data-driven models.
arXiv Detail & Related papers (2025-06-16T16:18:25Z)
- Heterogeneous quantization regularizes spiking neural network activity [0.0]
We present a data-blind neuromorphic signal conditioning strategy whereby analog data are normalized and quantized into spike phase representations.
We extend this mechanism by adding a data-aware calibration step whereby the range and density of the quantization weights adapt to accumulated input statistics.
arXiv Detail & Related papers (2024-09-27T02:25:44Z)
- DiffuseBot: Breeding Soft Robots With Physics-Augmented Generative Diffusion Models [102.13968267347553]
We present DiffuseBot, a physics-augmented diffusion model that generates soft robot morphologies capable of excelling in a wide spectrum of tasks.
We showcase a range of simulated and fabricated robots along with their capabilities.
arXiv Detail & Related papers (2023-11-28T18:58:48Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Variational Autoencoding Neural Operators [17.812064311297117]
Unsupervised learning with functional data is an emerging paradigm of machine learning research with applications to computer vision, climate modeling and physical systems.
We present Variational Autoencoding Neural Operators (VANO), a general strategy for making a large class of operator learning architectures act as variational autoencoders.
arXiv Detail & Related papers (2023-02-20T22:34:43Z)
- Physics-informed Variational Autoencoders for Improved Robustness to Environmental Factors of Variation [0.6384650391969042]
p$^3$VAE is a variational autoencoder that integrates prior physical knowledge about the latent factors of variation related to the data acquisition conditions.
We introduce a semi-supervised learning algorithm that strikes a balance between the machine learning part and the physics part.
arXiv Detail & Related papers (2022-10-19T09:32:15Z)
- Physics-informed machine learning with differentiable programming for heterogeneous underground reservoir pressure management [64.17887333976593]
Avoiding over-pressurization in subsurface reservoirs is critical for applications like CO2 sequestration and wastewater injection.
Managing the pressure by controlling injection/extraction is challenging because of complex heterogeneity in the subsurface.
We use differentiable programming with a full-physics model and machine learning to determine the fluid extraction rates that prevent over-pressurization.
arXiv Detail & Related papers (2022-06-21T20:38:13Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Analytical Modelling of Exoplanet Transit Spectroscopy with Dimensional Analysis and Symbolic Regression [68.8204255655161]
The deep learning revolution has opened the door for deriving such analytical results directly with a computer algorithm fitting to the data.
We successfully demonstrate the use of symbolic regression on synthetic data for the transit radii of generic hot Jupiter exoplanets.
As a preprocessing step, we use dimensional analysis to identify the relevant dimensionless combinations of variables.
arXiv Detail & Related papers (2021-12-22T00:52:56Z)
- Model discovery in the sparse sampling regime [0.0]
We show how deep learning can improve model discovery of partial differential equations.
As a result, deep learning-based model discovery allows recovery of the underlying equations.
We illustrate our claims on both synthetic and experimental data sets.
arXiv Detail & Related papers (2021-05-02T06:27:05Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- Identification of state functions by physically-guided neural networks with physically-meaningful internal layers [0.0]
We use the concept of physically-constrained neural networks (PCNN) to predict the input-output relation in a physical system.
We show that this approach, besides yielding physically-based predictions, accelerates the training process.
arXiv Detail & Related papers (2020-11-17T11:26:37Z)
- Characterizing the Latent Space of Molecular Deep Generative Models with Persistent Homology Metrics [21.95240820041655]
Variational Autoencoders (VAEs) are generative models in which encoder-decoder network pairs are trained to reconstruct training data distributions.
We propose a method for measuring how well the latent space of deep generative models is able to encode structural and chemical features.
arXiv Detail & Related papers (2020-10-18T13:33:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.