Dynamic compensation of stray electric fields in an ion trap using
machine learning and adaptive algorithm
- URL: http://arxiv.org/abs/2102.05830v1
- Date: Thu, 11 Feb 2021 03:27:31 GMT
- Title: Dynamic compensation of stray electric fields in an ion trap using
machine learning and adaptive algorithm
- Authors: Moji Ghadimi, Alexander Zappacosta, Jordan Scarabel, Kenji Shimizu,
Erik W Streed and Mirko Lobino
- Abstract summary: Surface ion traps are among the most promising technologies for scaling up quantum computing machines.
Here we demonstrate the compensation of stray electric fields using a gradient descent algorithm and a machine learning technique.
- Score: 55.41644538483948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surface ion traps are among the most promising technologies for scaling up
quantum computing machines, but their complicated multi-electrode geometry can
make some tasks, including compensation for stray electric fields, challenging
both at the level of modeling and of practical implementation. Here we
demonstrate the compensation of stray electric fields using a gradient descent
algorithm and a machine learning technique, which trained a deep learning
network. We show automated dynamical compensation tested against induced
electric charging from UV laser light hitting the chip trap surface. The
results show improvement in compensation using gradient descent and the machine
learner over manual compensation. This improvement is inferred from increases in the
fluorescence rate of 78% and 96%, respectively, for a trapped $^{171}$Yb$^+$ ion
driven by a laser detuned -7.8 MHz from the $^2$S$_{1/2}\leftrightarrow{}^2$P$_{1/2}$
Doppler cooling transition at 369.5 nm.
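The gradient-descent half of the method lends itself to a compact illustration: treat the ion's fluorescence count rate as a black-box function of the compensation electrode voltages and climb it by finite-difference gradient ascent. Below is a minimal sketch under that framing; the `fluorescence` model, the two-electrode setup, and all numbers are invented stand-ins, and the paper's deep-learning compensation is not reproduced.

```python
import numpy as np

def fluorescence(v, optimum=np.array([0.82, -1.35])):
    """Toy stand-in for the measured photon count rate (kcounts/s):
    a Lorentzian peak where the compensation voltages cancel the
    stray field. The optimum and width are invented numbers."""
    width = 0.5  # V, illustrative sensitivity
    r2 = np.sum(((v - optimum) / width) ** 2)
    return 30.0 / (1.0 + r2)

def compensate(v0, steps=200, lr=5e-3, dv=0.02):
    """Finite-difference gradient ascent on the fluorescence signal."""
    v = np.asarray(v0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(v)
        for i in range(v.size):
            e = np.zeros_like(v)
            e[i] = dv
            # central-difference estimate of d(rate)/d(V_i)
            grad[i] = (fluorescence(v + e) - fluorescence(v - e)) / (2 * dv)
        v += lr * grad  # ascend: maximize the count rate
    return v

v_comp = compensate([0.5, -1.0])
```

In the experiment each evaluation of the objective is a photon-count measurement rather than a model call, so the probe step and learning rate must also contend with shot noise; the paper's machine-learning variant is a separately trained model and is not sketched here.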
Related papers
- High-performance in-vacuum optical system for quantum optics experiments in a Penning-trap [0.0]
We present a new in-vacuum optical system designed to detect 397-nm fluorescence photons from individual calcium ions and Coulomb crystals.
The system has been characterized using a single laser-cooled ion as a point-like source, reaching a final resolution of 3.69(3) $\mu$m.
arXiv Detail & Related papers (2024-06-11T10:57:27Z) - Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the required additional data.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z) - Physics-aware Roughness Optimization for Diffractive Optical Neural
Networks [15.397285424104469]
Diffractive optical neural networks (DONNs) have shown promising advantages over conventional deep neural networks.
We propose a physics-aware diffractive optical neural network training framework to reduce the performance difference between numerical modeling and practical deployment.
arXiv Detail & Related papers (2023-04-04T03:19:36Z) - Ultra-low Precision Multiplication-free Training for Deep Neural
Networks [20.647925576138807]
In training, the linear layers consume the most energy because of the intense use of energy-consuming full-precision multiplication.
We propose an Adaptive Layer-wise Scaling PoT Quantization (ALS-POTQ) method and a Multiplication-Free MAC (MF-MAC) to replace all of the FP32 multiplications.
In our training scheme, all of the above methods do not introduce extra multiplications, so we reduce up to 95.8% of the energy consumption in linear layers during training.
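The core power-of-two (PoT) idea is easy to illustrate: if every weight is rounded to a signed power of two, multiplying by a weight reduces to a bit shift plus a sign flip. The sketch below shows generic PoT quantization only; the paper's adaptive layer-wise scaling and MF-MAC datapath are not reproduced, and the exponent range is an arbitrary choice.

```python
import numpy as np

def pot_quantize(w, exp_min=-6, exp_max=0):
    """Round each weight to the nearest signed power of two (nearest in
    log-space), clipping the exponent range. A generic PoT scheme, not
    the paper's ALS-POTQ."""
    mag = np.abs(w)
    safe = np.where(mag > 0, mag, 2.0 ** exp_min)  # avoid log2(0)
    exp = np.clip(np.round(np.log2(safe)), exp_min, exp_max)
    q = np.sign(w) * 2.0 ** exp
    return np.where(mag > 0, q, 0.0)

w  = np.array([0.30, -0.70, 0.09, 0.0])
wq = pot_quantize(w)            # [0.25, -0.5, 0.125, 0.0]

# A PoT weight applied to an integer activation is just a shift:
x = 352
assert int(x * wq[2]) == x >> 3  # multiply by 2^-3 == right-shift by 3
```

Because every quantized weight is $\pm 2^e$, a MAC unit needs only shifters, sign logic, and an accumulator, which is the sense in which the training becomes multiplication-free.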
arXiv Detail & Related papers (2023-02-28T10:05:45Z) - Optical Transformers [5.494796517705931]
Large Transformer models could be a good target for optical computing.
Optical computers could have a $>8,000\times$ energy-efficiency advantage over state-of-the-art digital-electronic processors that achieve 300 fJ/MAC.
arXiv Detail & Related papers (2023-02-20T23:30:23Z) - Trap-Integrated Superconducting Nanowire Single-Photon Detectors with
Improved RF Tolerance for Trapped-Ion Qubit State Readout [0.0]
State readout of trapped-ion qubits with trap-integrated detectors can address important challenges for scalable quantum computing.
We report on NbTiN superconducting nanowire single-photon detectors (SNSPDs) employing grounded aluminum mirrors as electrical shielding.
This performance should be sufficient to enable parallel high-fidelity state readout of a wide range of trapped ion species in typical cryogenic apparatus.
arXiv Detail & Related papers (2023-02-02T23:22:39Z) - NeRF in the Palm of Your Hand: Corrective Augmentation for Robotics via
Novel-View Synthesis [50.93065653283523]
SPARTN (Synthetic Perturbations for Augmenting Robot Trajectories via NeRF) is a fully-offline data augmentation scheme for improving robot policies.
Our approach leverages neural radiance fields (NeRFs) to synthetically inject corrective noise into visual demonstrations.
In a simulated 6-DoF visual grasping benchmark, SPARTN improves success rates by 2.8$\times$ over imitation learning without the corrective augmentations.
arXiv Detail & Related papers (2023-01-18T23:25:27Z) - Enhancing the Coherence of Superconducting Quantum Bits with Electric
Fields [62.997667081978825]
We show that qubit coherence can be improved by tuning defects away from the qubit resonance using an applied DC-electric field.
We also discuss how local gate electrodes can be implemented in superconducting quantum processors to enable simultaneous in-situ coherence optimization of individual qubits.
arXiv Detail & Related papers (2022-08-02T16:18:30Z) - Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z)
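Equilibrium propagation is concrete enough to sketch on a toy energy. The snippet below uses a quadratic (linear-network) energy, relaxes the state to a free equilibrium and then to a weakly nudged one, and updates the weights from the contrast between the two phases; for this quadratic energy the update reduces to the delta rule, which is why the weights recover the target map. The paper's nonlinear resistive networks are not modeled, and every number here is an arbitrary choice.

```python
import numpy as np

def settle(W, x, beta=0.0, y=None, steps=200, eta=0.1):
    """Relax state s to an equilibrium of E(s) = 0.5*||s||^2 - s.(W@x)
    (free phase) or of E + beta*C with C = 0.5*||s - y||^2 (nudged
    phase), by gradient descent on s."""
    s = np.zeros(W.shape[0])
    for _ in range(steps):
        grad = s - W @ x              # dE/ds
        if beta:
            grad += beta * (s - y)    # d(beta*C)/ds
        s -= eta * grad
    return s

def eqprop_step(W, x, y, beta=0.5, lr=0.1):
    """One equilibrium-propagation update. Since dE/dW = -outer(s, x),
    the two-phase contrast divided by beta estimates the cost gradient."""
    s_free  = settle(W, x)
    s_nudge = settle(W, x, beta=beta, y=y)
    dW = (np.outer(s_nudge, x) - np.outer(s_free, x)) / beta
    return W + lr * dW

rng = np.random.default_rng(0)
W_true = rng.normal(size=(2, 3))      # target linear map to learn
W = np.zeros((2, 3))
for _ in range(300):
    x = rng.normal(size=3)
    W = eqprop_step(W, x, W_true @ x)
```

The appeal for analog hardware is that both phases are physical relaxations of the circuit itself, and the weight update needs only locally measurable quantities at the two equilibria.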
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.