Toward Robust Autotuning of Noisy Quantum Dot Devices
- URL: http://arxiv.org/abs/2108.00043v1
- Date: Fri, 30 Jul 2021 19:02:32 GMT
- Title: Toward Robust Autotuning of Noisy Quantum Dot Devices
- Authors: Joshua Ziegler, Thomas McJunkin, E. S. Joseph, Sandesh S. Kalantre,
Benjamin Harpt, D. E. Savage, M. G. Lagally, M. A. Eriksson, Jacob M. Taylor,
Justyna P. Zwolak
- Abstract summary: Current autotuning approaches for quantum dot (QD) devices lack an assessment of data reliability.
This leads to unexpected failures when noisy data is processed by an autonomous system.
We propose a framework for robust autotuning of QD devices that combines a machine learning (ML) state classifier with a data quality control module.
- Score: 0.10889448277664004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current autotuning approaches for quantum dot (QD) devices, while showing
some success, lack an assessment of data reliability. This leads to unexpected
failures when noisy data is processed by an autonomous system. In this work, we
propose a framework for robust autotuning of QD devices that combines a machine
learning (ML) state classifier with a data quality control module. The data
quality control module acts as a "gatekeeper" system, ensuring that only
reliable data is processed by the state classifier. Lower data quality results
in either device recalibration or termination. To train both ML systems, we
enhance the QD simulation by incorporating synthetic noise typical of QD
experiments. We confirm that the inclusion of synthetic noise in the training
of the state classifier significantly improves the performance, resulting in an
accuracy of 95.1(7) % when tested on experimental data. We then validate the
functionality of the data quality control module by showing the state
classifier performance deteriorates with decreasing data quality, as expected.
Our results establish a robust and flexible ML framework for autonomous tuning
of noisy QD devices.
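The gatekeeper architecture described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the noise model (white noise plus random-telegraph jumps), and the quality threshold are all assumptions chosen for clarity.

```python
import numpy as np

def add_synthetic_noise(scan, rng, sigma=0.05, telegraph_prob=0.01):
    """Augment a simulated charge-stability scan with noise typical of QD
    experiments: Gaussian white noise plus sparse random-telegraph jumps.
    All parameter values are illustrative, not taken from the paper."""
    noisy = scan + rng.normal(0.0, sigma, scan.shape)
    jumps = rng.random(scan.shape) < telegraph_prob
    noisy[jumps] += rng.choice([-1.0, 1.0], jumps.sum()) * 3 * sigma
    return noisy

def autotune_step(scan, quality_model, state_model, q_threshold=0.8):
    """Gatekeeper logic: only pass the scan to the state classifier if the
    data-quality score clears the threshold; otherwise request device
    recalibration instead of acting on unreliable data."""
    quality = quality_model(scan)  # score in [0, 1], higher = cleaner data
    if quality < q_threshold:
        return {"action": "recalibrate", "quality": quality}
    return {"action": "tune", "state": state_model(scan), "quality": quality}
```

In this sketch, `quality_model` and `state_model` stand in for the two trained ML systems; the key design point from the paper is that the quality check runs first, so the classifier never sees data it was not trained to handle.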
Related papers
- Human-in-the-loop Reinforcement Learning for Data Quality Monitoring in Particle Physics Experiments [0.0]
We propose a proof-of-concept for applying human-in-the-loop Reinforcement Learning to automate the Data Quality Monitoring process.
We show that random, unbiased noise in human classification can be reduced, leading to an improved accuracy over the baseline.
arXiv Detail & Related papers (2024-05-24T12:52:46Z)
- QDA$^2$: A principled approach to automatically annotating charge stability diagrams [1.2437226707039448]
Gate-defined semiconductor quantum dot (QD) arrays are a promising platform for quantum computing.
Large configuration spaces and inherent noise make tuning of QD devices a nontrivial task.
QD auto-annotator is a classical algorithm for automatic interpretation and labeling of experimentally acquired data.
arXiv Detail & Related papers (2023-12-18T13:52:18Z)
- Machine Learning Data Suitability and Performance Testing Using Fault Injection Testing Framework [0.0]
This paper presents the Fault Injection for Undesirable Learning in input Data (FIUL-Data) testing framework.
Data mutators explore vulnerabilities of ML systems against the effects of different fault injections.
This paper evaluates the framework using data from analytical chemistry, comprising retention time measurements of anti-sense oligonucleotides.
arXiv Detail & Related papers (2023-09-20T12:58:35Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF)
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- Tuning arrays with rays: Physics-informed tuning of quantum dot charge states [0.0]
Quantum computers based on gate-defined quantum dots (QDs) are expected to scale.
As the number of qubits increases, the burden of manually calibrating these systems becomes unreasonable.
Here, we demonstrate an intuitive, reliable, and data-efficient set of tools for automated global state and charge tuning.
arXiv Detail & Related papers (2022-09-08T14:17:49Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- ClusterQ: Semantic Feature Distribution Alignment for Data-Free Quantization [111.12063632743013]
We propose a new and effective data-free quantization method termed ClusterQ.
To obtain high inter-class separability of semantic features, we cluster and align the feature distribution statistics.
We also incorporate the intra-class variance to solve class-wise mode collapse.
arXiv Detail & Related papers (2022-04-30T06:58:56Z)
- Robust Face Anti-Spoofing with Dual Probabilistic Modeling [49.14353429234298]
We propose a unified framework called Dual Probabilistic Modeling (DPM), with two dedicated modules, DPM-LQ (Label Quality aware learning) and DPM-DQ (Data Quality aware learning)
DPM-LQ is able to produce robust feature representations without overfitting to the distribution of noisy semantic labels.
DPM-DQ can eliminate data noise from 'False Reject' and 'False Accept' during inference by correcting the prediction confidence of noisy data based on its quality distribution.
arXiv Detail & Related papers (2022-04-27T03:44:18Z)
- A Modulation Layer to Increase Neural Network Robustness Against Data Quality Issues [22.62510395932645]
Data missingness and quality are common problems in machine learning, especially for high-stakes applications such as healthcare.
We propose a novel neural network modification to mitigate the impacts of low quality and missing data.
Our results suggest that explicitly accounting for reduced information quality with a modulating fully connected layer can enable the deployment of artificial intelligence systems in real-time applications.
arXiv Detail & Related papers (2021-07-19T01:29:16Z)
- Bridging the Gap Between Clean Data Training and Real-World Inference for Spoken Language Understanding [76.89426311082927]
Existing models are trained on clean data, which causes a gap between clean data training and real-world inference.
We propose a method from the perspective of domain adaptation, by which both high- and low-quality samples are embedded into a similar vector space.
Experiments on the widely used Snips dataset and a large-scale in-house dataset (10 million training examples) demonstrate that this method not only outperforms baseline models on a real-world (noisy) corpus but also enhances robustness, producing high-quality results in a noisy environment.
arXiv Detail & Related papers (2021-04-13T17:54:33Z)
- How Training Data Impacts Performance in Learning-based Control [67.7875109298865]
This paper derives an analytical relationship between the density of the training data and the control performance.
We formulate a quality measure for the data set, which we refer to as the $\rho$-gap.
We show how the $\rho$-gap can be applied to a feedback linearizing control law.
arXiv Detail & Related papers (2020-05-25T12:13:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.