Predicting Machining Stability with a Quantum Regression Model
- URL: http://arxiv.org/abs/2412.04048v1
- Date: Thu, 05 Dec 2024 10:38:57 GMT
- Title: Predicting Machining Stability with a Quantum Regression Model
- Authors: Sascha Mücke, Felix Finkeldey, Nico Piatkowski, Tobias Siebrecht, Petra Wiederkehr
- Abstract summary: We propose a novel quantum regression model by extending the Real-Part Quantum SVM.
We show that the resulting model predicts the stability limits observed in our physical setup accurately.
- Score: 1.5833466939207899
- License:
- Abstract: In this article, we propose a novel quantum regression model by extending the Real-Part Quantum SVM. We apply our model to the problem of stability limit prediction in milling processes, a key component in high-precision manufacturing. To train our model, we use a custom data set acquired by an extensive series of milling experiments using different spindle speeds, enhanced with a custom feature map. We show that the resulting model predicts the stability limits observed in our physical setup accurately, demonstrating that quantum computing is capable of deploying ML models for real-world applications.
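The paper's exact Real-Part Quantum SVM extension and custom feature map are not given in this summary, but the overall pipeline — mapping spindle speeds through a feature map and fitting a kernel-based regressor to measured stability limits — can be sketched classically. Everything below (the toy feature map, the synthetic spindle-speed data, the regularization constant) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def feature_map(x):
    """Toy nonlinear feature map (a stand-in for the paper's custom map)."""
    return np.column_stack([x, np.sin(x), np.cos(x), x ** 2])

def fit_kernel_ridge(x, y, lam=1e-8):
    """Kernel ridge regression with the linear kernel in feature space."""
    phi = feature_map(x)
    K = phi @ phi.T                      # Gram matrix of mapped inputs
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    return phi, alpha

def predict(phi_train, alpha, x_new):
    """Predict via inner products between new and training features."""
    return feature_map(x_new) @ phi_train.T @ alpha

# Synthetic stand-in for the milling data set: normalized spindle speeds
# and noisy stability limits (the real data came from milling experiments).
rng = np.random.default_rng(0)
speed = np.linspace(0.0, 1.0, 40)
limit = np.sin(3.0 * speed) + 0.05 * rng.standard_normal(40)

phi, alpha = fit_kernel_ridge(speed, limit)
rmse = np.sqrt(np.mean((predict(phi, alpha, speed) - limit) ** 2))
print(rmse < 0.15)  # the fit tracks the noisy stability curve closely
```

In a quantum version, the Gram matrix entries would instead be estimated from state overlaps on quantum hardware; the surrounding regression machinery stays the same.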
Related papers
- Universal replication of chaotic characteristics by classical and quantum machine learning [0.0]
We show that a variational quantum circuit can reproduce the long-term characteristics with higher accuracy than a long short-term memory network.
Our results suggest that the quantum circuit model exhibits potential advantages in mitigating over-fitting and achieving higher accuracy and stability.
arXiv Detail & Related papers (2024-05-14T10:12:47Z)
- Data-free Weight Compress and Denoise for Large Language Models [101.53420111286952]
We propose a novel approach termed Data-free Joint Rank-k Approximation for compressing the parameter matrices.
We achieve a model pruning of 80% parameters while retaining 93.43% of the original performance without any calibration data.
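The core rank-k idea can be sketched with a truncated SVD, which gives the best rank-k approximation of a matrix in the Frobenius norm. This is a generic illustration with an assumed synthetic weight matrix, not the paper's joint approximation method:

```python
import numpy as np

def rank_k_approx(W, k):
    """Best rank-k approximation of W (Eckart-Young theorem)."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
# Synthetic "weight matrix" with low-rank structure plus small noise.
W = (rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))
     + 0.01 * rng.standard_normal((64, 64)))

W8 = rank_k_approx(W, 8)
rel_err = np.linalg.norm(W - W8) / np.linalg.norm(W)
print(rel_err < 0.05)  # k=8 captures almost all of the structure
```

Storing the factors costs k(m + n) numbers instead of mn, which is where the parameter reduction comes from when k is small relative to the matrix dimensions.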
arXiv Detail & Related papers (2024-02-26T05:51:47Z)
- RQP-SGD: Differential Private Machine Learning through Noisy SGD and Randomized Quantization [8.04975023021212]
We present RQP-SGD, a new approach for privacy-preserving quantization to train machine learning models.
This approach combines differentially private gradient descent with randomized quantization, providing a measurable privacy guarantee.
arXiv Detail & Related papers (2024-02-09T18:34:08Z)
- First-Order Phase Transition of the Schwinger Model with a Quantum Computer [0.0]
We explore the first-order phase transition in the lattice Schwinger model in the presence of a topological $\theta$-term.
We show that the electric field density and particle number, observables which reveal the phase structure of the model, can be reliably obtained from the quantum hardware.
arXiv Detail & Related papers (2023-12-20T08:27:49Z)
- PreQuant: A Task-agnostic Quantization Approach for Pre-trained Language Models [52.09865918265002]
We propose a novel "quantize before fine-tuning" framework, PreQuant.
PreQuant is compatible with various quantization strategies, with outlier-aware fine-tuning incorporated to correct the induced quantization error.
We demonstrate the effectiveness of PreQuant on the GLUE benchmark using BERT, RoBERTa, and T5.
arXiv Detail & Related papers (2023-05-30T08:41:33Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- Distributional Learning of Variational AutoEncoder: Application to Synthetic Data Generation [0.7614628596146602]
We propose a new approach that expands the model capacity without sacrificing the computational advantages of the VAE framework.
Our VAE model's decoder is composed of an infinite mixture of asymmetric Laplace distributions.
We apply the proposed model to synthetic data generation, and particularly, our model demonstrates superiority in easily adjusting the level of data privacy.
arXiv Detail & Related papers (2023-02-22T11:26:50Z)
- Fermionic approach to variational quantum simulation of Kitaev spin models [50.92854230325576]
Kitaev spin models are well known for being exactly solvable in a certain parameter regime via a mapping to free fermions.
We use classical simulations to explore a novel variational ansatz that takes advantage of this fermionic representation.
We also comment on the implications of our results for simulating non-Abelian anyons on quantum computers.
arXiv Detail & Related papers (2022-04-11T18:00:01Z)
- Direct parameter estimations from machine-learning enhanced quantum state tomography [3.459382629188014]
Machine-learning enhanced quantum state tomography (QST) has demonstrated its advantages in extracting complete information about the quantum states.
We develop a high-performance, lightweight, and easy-to-install supervised characteristic model by generating the target parameters directly.
Such a characteristic-model-based ML-QST avoids dealing with a large Hilbert space while keeping feature extraction at high precision.
arXiv Detail & Related papers (2022-03-30T15:16:02Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.