Machine learning identification of symmetrized base states of Rydberg atoms
- URL: http://arxiv.org/abs/2107.13745v1
- Date: Thu, 29 Jul 2021 04:45:13 GMT
- Title: Machine learning identification of symmetrized base states of Rydberg atoms
- Authors: Daryl Ryan Chong, Minhyuk Kim, Jaewook Ahn, Heejeong Jeong
- Abstract summary: We use machine learning (ML) models to identify the base states of interacting Rydberg atoms of various atom numbers.
We achieve high accuracy of up to 100% for data sets containing only a few hundred samples.
- Score: 0.8258451067861933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studying the complex quantum dynamics of interacting many-body systems is one
of the most challenging areas in modern physics. Here, we use machine learning
(ML) models to identify the symmetrized base states of interacting Rydberg
atoms of various atom numbers (up to six) and geometric configurations. To
obtain the data set for training the ML classifiers, we generate Rydberg
excitation probability profiles that simulate experimental data by utilizing
Lindblad equations that incorporate laser intensities and phase noise. Then, we
classify the data sets using support vector machines (SVMs) and random forest
classifiers (RFCs). With these ML models, we achieve high accuracy of up to
100% for data sets containing only a few hundred samples, especially for the
closed atom configurations such as the pentagonal (five atoms) and hexagonal
(six atoms) systems. The results demonstrate that computationally
cost-effective ML models can be used in the identification of Rydberg atom
configurations.
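As a rough illustration of the classification step described above, the sketch below trains the two classifier types named in the abstract, an SVM and a random forest, on synthetic excitation-probability profiles. The profiles here are random stand-ins, not the Lindblad-simulated data used by the authors; in the paper each profile would come from a master equation of the general Lindblad (GKSL) form $\dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_k ( L_k \rho L_k^\dagger - \frac{1}{2}\{L_k^\dagger L_k, \rho\} )$ with noise terms modelling laser-intensity and phase fluctuations. The number of base states, profile length, sample count, and noise level are illustrative assumptions, and scikit-learn is assumed to be available.

    # Sketch only: SVM and random-forest classification of simulated
    # excitation-probability profiles (random stand-ins, not Lindblad data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    n_samples, n_bins, n_states = 300, 64, 4  # hypothetical sizes

    # One mean profile per (hypothetical) symmetrized base state; Gaussian
    # noise loosely stands in for intensity/phase fluctuations.
    templates = rng.random((n_states, n_bins))
    labels = rng.integers(0, n_states, n_samples)
    profiles = templates[labels] + 0.05 * rng.standard_normal((n_samples, n_bins))

    X_train, X_test, y_train, y_test = train_test_split(
        profiles, labels, test_size=0.2, stratify=labels, random_state=0)

    for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                      ("RFC", RandomForestClassifier(n_estimators=200, random_state=0))]:
        clf.fit(X_train, y_train)
        print(name, "test accuracy:", clf.score(X_test, y_test))

On cleanly separated synthetic data like this, both classifiers typically score near 1.0 on the held-out split, which is the few-hundred-sample regime the abstract reports for the closed (ring-like) configurations.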
Related papers
- Quantum network tomography of Rydberg arrays by machine learning [0.0]
Rydberg atoms are a versatile platform for quantum computation and quantum simulation, also allowing controllable decoherence.
We demonstrate theoretically that they also enable proof-of-principle demonstrations of a technique for building models of open quantum dynamics by machine learning with artificial neural networks.
arXiv Detail & Related papers (2024-12-07T20:41:34Z) - Particle identification with machine learning from incomplete data in the ALICE experiment [3.046689922445082]
ALICE provides PID information via several detectors for particles with momentum from about 100 MeV/c up to 20 GeV/c.
Much better performance can be achieved with machine learning (ML) methods.
We present the integration of the ML project with the ALICE analysis software, and we discuss domain adaptation.
arXiv Detail & Related papers (2024-03-26T07:05:06Z) - On the Sample Complexity of Quantum Boltzmann Machine Learning [0.0]
We give an operational definition of QBM learning in terms of the difference in expectation values between the model and target.
We prove that a solution can be obtained with gradient descent using at most a polynomial number of Gibbs states.
In particular, we give pre-training strategies based on mean-field, Gaussian Fermionic, and geometrically local Hamiltonians.
arXiv Detail & Related papers (2023-06-26T18:00:50Z) - QH9: A Quantum Hamiltonian Prediction Benchmark for QM9 Molecules [69.25826391912368]
We generate a new Quantum Hamiltonian dataset, named QH9, to provide precise Hamiltonian matrices for 999 or 2998 molecular dynamics trajectories.
We show that current machine learning models have the capacity to predict Hamiltonian matrices for arbitrary molecules.
arXiv Detail & Related papers (2023-06-15T23:39:07Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Synthetic data enable experiments in atomistic machine learning [0.0]
We demonstrate the use of a large dataset labelled with per-atom energies from an existing ML potential model.
The cheapness of this process, compared to the quantum-mechanical ground truth, allows us to generate millions of datapoints.
We show that learning synthetic data labels can be a useful pre-training task for subsequent fine-tuning on small datasets.
arXiv Detail & Related papers (2022-11-29T18:17:24Z) - Disentangling multiple scattering with deep learning: application to
strain mapping from electron diffraction patterns [48.53244254413104]
We implement a deep neural network called FCU-Net to invert highly nonlinear electron diffraction patterns into quantitative structure factor images.
We trained the FCU-Net using over 200,000 unique dynamical diffraction patterns which include many different combinations of crystal structures.
Our simulated diffraction pattern library, implementation of FCU-Net, and trained model weights are freely available in open source repositories.
arXiv Detail & Related papers (2022-02-01T03:53:39Z) - Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e. the empirical model error is the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z) - Prediction of liquid fuel properties using machine learning models with
Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that ML models can accurately predict fuel properties over a wide range of pressure and temperature conditions.
arXiv Detail & Related papers (2021-10-18T14:43:50Z) - BIGDML: Towards Exact Machine Learning Force Fields for Materials [55.944221055171276]
Machine-learning force fields (MLFF) should be accurate, computationally and data efficient, and applicable to molecules, materials, and interfaces thereof.
Here, we introduce the Bravais-Inspired Gradient-Domain Machine Learning approach and demonstrate its ability to construct reliable force fields using a training set with just 10-200 atoms.
arXiv Detail & Related papers (2021-06-08T10:14:57Z) - Learning with Density Matrices and Random Features [44.98964870180375]
A density matrix describes the statistical state of a quantum system.
It is a powerful formalism to represent both the quantum and classical uncertainty of quantum systems.
This paper explores how density matrices can be used as a building block for machine learning models.
arXiv Detail & Related papers (2021-02-08T17:54:59Z) - Assembled arrays of Rydberg-interacting atoms [0.0]
We demonstrate the first realization of Rydberg excitations and controlled interactions in microlens-generated multisite trap arrays of reconfigurable geometry.
We characterize the simultaneous coherent excitation of non-interacting atom clusters for the state $\mathrm{57D_{5/2}}$ and analyze the experimental parameters and limitations.
arXiv Detail & Related papers (2020-08-11T17:18:42Z) - Graph Neural Network for Hamiltonian-Based Material Property Prediction [56.94118357003096]
We present and compare several different graph convolution networks that are able to predict the band gap for inorganic materials.
The models are developed to incorporate two different features: the information of each orbital itself and the interaction between orbitals.
The results show that our model achieves promising prediction accuracy under cross-validation.
arXiv Detail & Related papers (2020-05-27T13:32:10Z) - Automated discovery of a robust interatomic potential for aluminum [4.6028828826414925]
Machine learning (ML) based potentials aim for faithful emulation of quantum mechanics (QM) calculations at drastically reduced computational cost.
We present a highly automated approach to dataset construction using the principles of active learning (AL).
We demonstrate this approach by building an ML potential for aluminum (ANI-Al).
To demonstrate transferability, we perform a 1.3M atom shock simulation, and show that ANI-Al predictions agree very well with DFT calculations on local atomic environments sampled from the nonequilibrium dynamics.
arXiv Detail & Related papers (2020-03-10T19:06:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.