Quantum Kernel Learning for Small Dataset Modeling in Semiconductor Fabrication: Application to Ohmic Contact
- URL: http://arxiv.org/abs/2409.10803v2
- Date: Mon, 07 Apr 2025 02:57:39 GMT
- Title: Quantum Kernel Learning for Small Dataset Modeling in Semiconductor Fabrication: Application to Ohmic Contact
- Authors: Zeheng Wang, Fangzhou Wang, Liang Li, Zirui Wang, Timothy van der Laan, Ross C. C. Leon, Jing-Kai Huang, Muhammad Usman
- Abstract summary: We develop the first application of quantum machine learning (QML) to model a semiconductor fabrication process. We report a quantum kernel-based regressor (SQKR) with a static 2-level ZZ feature map. SQKR consistently outperformed six mainstream CML models across all evaluation metrics.
- Score: 18.42230728589117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex semiconductor fabrication processes, such as Ohmic contact formation in unconventional semiconductor devices, pose significant modeling challenges due to a large number of operational variables and the difficulty of collecting large, high-quality datasets. Classical machine learning (CML) models often struggle in such scenarios, where the data is both high-dimensional and limited in quantity, leading to overfitting and reduced predictive accuracy. To address this challenge, we develop the first application of quantum machine learning (QML) to model this semiconductor process, leveraging quantum systems' capacity to efficiently capture complex correlations in high-dimensional spaces and generalize well with small datasets. Using only 159 experimental samples augmented via a variational autoencoder, we report a quantum kernel-based regressor (SQKR) with a static 2-level ZZ feature map. The SQKR consistently outperformed six mainstream CML models across all evaluation metrics, achieving the lowest mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE), with repeated experiments confirming its robustness. Notably, SQKR achieved an MAE of 0.314 Ohm-mm with data from experimental verification, demonstrating its ability to effectively model semiconductor fabrication processes despite limited data availability. These results highlight QML's unique capability to handle small yet high-dimensional datasets in the semiconductor industry, making it a promising alternative to classical approaches for semiconductor process modeling.
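The paper does not include code, but the core idea described in the abstract (a fidelity quantum kernel built from a static 2-level ZZ feature map, fed into a kernel regressor) can be sketched classically. The following NumPy simulation is illustrative only: the feature-map convention follows the Havlicek-style ZZ map popularized by Qiskit's `ZZFeatureMap`, and the tiny dataset, function names, and ridge regularization are assumptions, not the authors' actual SQKR implementation.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def rz(theta):
    """Single-qubit Z-rotation gate."""
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def _apply_1q(state, gate, q, n):
    """Apply a 2x2 gate to qubit q of an n-qubit statevector."""
    psi = np.moveaxis(state.reshape([2] * n), q, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))
    return np.moveaxis(psi, 0, q).reshape(-1)

def _apply_cx(state, ctrl, targ, n):
    """Apply a CNOT: flip the target qubit on the control=1 slice."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[ctrl] = 1
    ax = targ if targ < ctrl else targ - 1  # axis index shifts after slicing ctrl
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=ax)
    return psi.reshape(-1)

def zz_feature_state(x, reps=2):
    """Statevector of a `reps`-level ZZ feature map applied to data vector x."""
    n = len(x)
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for _ in range(reps):
        for q in range(n):                       # single-qubit encoding layer
            state = _apply_1q(state, H, q, n)
            state = _apply_1q(state, rz(2 * x[q]), q, n)
        for i in range(n):                       # pairwise ZZ entangling layer
            for j in range(i + 1, n):
                phi = 2 * (np.pi - x[i]) * (np.pi - x[j])
                state = _apply_cx(state, i, j, n)
                state = _apply_1q(state, rz(phi), j, n)
                state = _apply_cx(state, i, j, n)
    return state

def quantum_kernel(X1, X2, reps=2):
    """Fidelity kernel: K[a, b] = |<phi(x_a)|phi(x_b)>|^2."""
    S1 = np.array([zz_feature_state(x, reps) for x in X1])
    S2 = np.array([zz_feature_state(x, reps) for x in X2])
    return np.abs(S1.conj() @ S2.T) ** 2

def fit_kernel_ridge(K, y, lam=1e-6):
    """Solve (K + lam*I) alpha = y for the dual regression coefficients."""
    return np.linalg.solve(K + lam * np.eye(len(K)), y)

# Illustrative 2-feature toy data (NOT the paper's 159-sample dataset).
X_train = np.array([[0.1, 0.2], [0.5, 1.0], [1.2, 0.3], [2.0, 1.5]])
y_train = np.array([0.9, 0.4, 0.6, 0.2])

K_train = quantum_kernel(X_train, X_train)
alpha = fit_kernel_ridge(K_train, y_train)
# Predict for a new process setting via the cross-kernel against training data.
y_new = quantum_kernel(np.array([[0.8, 0.6]]), X_train) @ alpha
```

Since the feature map is unitary and acts on a normalized initial state, each kernel diagonal entry is exactly 1 and off-diagonal entries lie in [0, 1]; the kernel matrix is the Gram matrix of the encoded density matrices, so standard kernel-ridge machinery applies unchanged.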
Related papers
- Learning Density Functionals from Noisy Quantum Data [0.0]
Noisy intermediate-scale quantum (NISQ) devices are used to generate training data for machine learning (ML) models.
We show that a neural-network ML model can successfully generalize from small datasets subject to noise typical of NISQ algorithms.
Our findings suggest a promising pathway for leveraging NISQ devices in practical quantum simulations.
arXiv Detail & Related papers (2024-09-04T17:59:55Z) - Discrete Randomized Smoothing Meets Quantum Computing [40.54768963869454]
We show how to encode all the perturbations of the input binary data in superposition and use Quantum Amplitude Estimation (QAE) to obtain a quadratic reduction in the number of calls to the model.
In addition, we propose a new binary threat model to allow for an extensive evaluation of our approach on images, graphs, and text.
arXiv Detail & Related papers (2024-08-01T20:21:52Z) - Provably Trainable Rotationally Equivariant Quantum Machine Learning [0.6435156676256051]
We introduce a family of rotationally equivariant QML models built upon the quantum Fourier transform.
We numerically test our models on a dataset of simulated scanning tunnelling microscope images of phosphorus impurities in silicon.
arXiv Detail & Related papers (2023-11-10T05:10:06Z) - Machine Learning for Practical Quantum Error Mitigation [0.0]
We show that machine learning for quantum error mitigation drastically reduces the cost of mitigation.
We propose a path toward scalable mitigation by using ML-QEM to mimic traditional mitigation methods with superior runtime efficiency.
arXiv Detail & Related papers (2023-09-29T16:17:12Z) - QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z) - Pre-training Tensor-Train Networks Facilitates Machine Learning with Variational Quantum Circuits [70.97518416003358]
Variational quantum circuits (VQCs) hold promise for quantum machine learning on noisy intermediate-scale quantum (NISQ) devices.
While tensor-train networks (TTNs) can enhance VQC representation and generalization, the resulting hybrid model, TTN-VQC, faces optimization challenges due to the Polyak-Lojasiewicz (PL) condition.
To mitigate this challenge, we introduce Pre+TTN-VQC, a pre-trained TTN model combined with a VQC.
arXiv Detail & Related papers (2023-05-18T03:08:18Z) - Do Quantum Circuit Born Machines Generalize? [58.720142291102135]
We present the first work in the literature to use the QCBM's generalization performance as an integral evaluation metric for quantum generative models.
We show that the QCBM is able to effectively learn the reweighted dataset and generate unseen samples with higher quality than those in the training set.
arXiv Detail & Related papers (2022-07-27T17:06:34Z) - DeePKS+ABACUS as a Bridge between Expensive Quantum Mechanical Models and Machine Learning Potentials [9.982820888454958]
Deep Kohn-Sham (DeePKS) is a machine learning (ML) potential based on density functional theory (DFT).
DeePKS offers closely matched energies and forces compared with high-level quantum mechanical (QM) methods.
One can generate a decent amount of high-accuracy QM data to train a DeePKS model, and then use the DeePKS model to label a much larger amount of configurations to train a ML potential.
arXiv Detail & Related papers (2022-06-21T03:24:18Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - Direct parameter estimations from machine-learning enhanced quantum state tomography [3.459382629188014]
Machine-learning enhanced quantum state tomography (QST) has demonstrated its advantages in extracting complete information about the quantum states.
We develop a high-performance, lightweight, and easy-to-install supervised characteristic model by generating the target parameters directly.
Such a characteristic model-based ML-QST can avoid the problem of dealing with a large Hilbert space while keeping high-precision feature extraction.
arXiv Detail & Related papers (2022-03-30T15:16:02Z) - Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Quantum Approximate Optimization Algorithm Based Maximum Likelihood Detection [80.28858481461418]
Recent advances in quantum technologies pave the way for noisy intermediate-scale quantum (NISQ) devices.
arXiv Detail & Related papers (2021-07-11T10:56:24Z) - Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Improving Semiconductor Device Modeling for Electronic Design Automation by Machine Learning Techniques [6.170514965470266]
We propose a self-augmentation strategy for improving ML-based device modeling using variational autoencoder-based techniques.
To demonstrate the effectiveness of our approach, we apply it to a deep neural network-based prediction task for the Ohmic resistance value in Gallium Nitride devices.
arXiv Detail & Related papers (2021-05-25T00:52:44Z) - Quaternion Factorization Machines: A Lightweight Solution to Intricate Feature Interaction Modelling [76.89779231460193]
The factorization machine (FM) is capable of automatically learning high-order interactions among features to make predictions without the need for manual feature engineering.
We propose the quaternion factorization machine (QFM) and quaternion neural factorization machine (QNFM) for sparse predictive analytics.
arXiv Detail & Related papers (2021-04-05T00:02:36Z) - Predicting toxicity by quantum machine learning [11.696069523681178]
We develop QML models for predicting the toxicity of 221 phenols on the basis of quantitative structure activity relationship.
Results suggest that our data encoding enhanced by quantum entanglement provided more expressive power than the previous ones.
arXiv Detail & Related papers (2020-08-18T02:59:40Z) - Automated discovery of a robust interatomic potential for aluminum [4.6028828826414925]
Machine learning (ML) based potentials aim for faithful emulation of quantum mechanics (QM) calculations at drastically reduced computational cost.
We present a highly automated approach to dataset construction using the principles of active learning (AL).
We demonstrate this approach by building an ML potential for aluminum (ANI-Al).
To demonstrate transferability, we perform a 1.3M atom shock simulation, and show that ANI-Al predictions agree very well with DFT calculations on local atomic environments sampled from the nonequilibrium dynamics.
arXiv Detail & Related papers (2020-03-10T19:06:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.