Optimizing Hyperparameters for Quantum Data Re-Uploaders in Calorimetric Particle Identification
- URL: http://arxiv.org/abs/2412.12397v1
- Date: Mon, 16 Dec 2024 23:10:00 GMT
- Title: Optimizing Hyperparameters for Quantum Data Re-Uploaders in Calorimetric Particle Identification
- Authors: Léa Cassé, Bernhard Pfahringer, Albert Bifet, Frédéric Magniette
- Abstract summary: We present an application of a single-qubit Data Re-Uploading (QRU) quantum model for particle classification in calorimetric experiments.
This model requires minimal qubits while delivering strong classification performance.
- Score: 11.099632666738177
- Abstract: We present an application of a single-qubit Data Re-Uploading (QRU) quantum model for particle classification in calorimetric experiments. Optimized for Noisy Intermediate-Scale Quantum (NISQ) devices, this model requires minimal qubits while delivering strong classification performance. Evaluated on a novel simulated dataset specific to particle physics, the QRU model achieves high accuracy in classifying particle types. Through a systematic exploration of model hyperparameters -- such as circuit depth, rotation gates, input normalization and the number of trainable parameters per input -- and training parameters like batch size, optimizer, loss function and learning rate, we assess their individual impacts on model accuracy and efficiency. Additionally, we apply global optimization methods, uncovering hyperparameter correlations that further enhance performance. Our results indicate that the QRU model attains significant accuracy with efficient computational costs, underscoring its potential for practical quantum machine learning applications.
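The snippet below is a minimal, illustrative sketch of the single-qubit data re-uploading (QRU) idea described in the abstract, written with PennyLane. The choice of RY/RZ rotation gates, the weight shapes (i.e. the number of trainable parameters per input), the linear angle encoding, and the [-pi, pi] input normalization are assumptions made for illustration, not the exact circuit configuration studied in the paper.

```python
# Minimal sketch of a single-qubit data re-uploading (QRU) classifier circuit.
# Layer layout, gate choice and parameter shapes are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

n_layers = 5       # "circuit depth" hyperparameter
n_features = 4     # number of calorimeter-derived input features (assumed)

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def qru_circuit(weights, x):
    # Each layer re-uploads the normalized input, scaled and shifted by
    # trainable parameters, as single-qubit rotation angles.
    for layer in range(n_layers):
        for i in range(n_features):
            qml.RY(weights[layer, i, 0] * x[i] + weights[layer, i, 1], wires=0)
        qml.RZ(weights[layer, 0, 2], wires=0)  # extra trainable rotation per layer
    # Expectation value in [-1, 1], thresholded or compared per class for classification.
    return qml.expval(qml.PauliZ(0))

weights = 0.1 * np.random.randn(n_layers, n_features, 3, requires_grad=True)
x = np.random.uniform(-np.pi, np.pi, n_features)  # inputs normalized to [-pi, pi]
print(qru_circuit(weights, x))
```

The hyperparameter study described in the abstract would then amount to sweeping quantities such as n_layers, the rotation-gate set, the input normalization range, batch size, optimizer, loss function and learning rate around a circuit of this kind; the sketch above fixes only one arbitrary configuration.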
Related papers
- Efficient Hyperparameter Importance Assessment for CNNs [1.7778609937758323]
This paper aims to quantify the importance weights of some hyperparameters in Convolutional Neural Networks (CNNs) with an algorithm called N-RReliefF.
We conduct an extensive study by training over ten thousand CNN models across ten popular image classification datasets.
arXiv Detail & Related papers (2024-10-11T15:47:46Z) - Parameter Estimation in Quantum Metrology Technique for Time Series Prediction [0.0]
The paper investigates the techniques of quantum computation in metrological predictions.
It focuses on enhancing prediction potential through variational parameter estimation.
The impacts of various parameter distributions and learning rates on predictive accuracy are investigated.
arXiv Detail & Related papers (2024-06-12T05:55:45Z) - PikeLPN: Mitigating Overlooked Inefficiencies of Low-Precision Neural Networks [4.827161693957252]
Non-quantized elementwise operations dominate the inference cost of low-precision models.
The PikeLPN model addresses this by applying quantization to both elementwise operations and multiply-accumulate operations.
arXiv Detail & Related papers (2024-03-29T18:23:34Z) - On Optimizing Hyperparameters for Quantum Neural Networks [0.5999777817331317]
Current state-of-the-art Machine Learning models require weeks for training, which is associated with an enormous $CO_2$ footprint.
Quantum Computing, and specifically Quantum Machine Learning (QML), can offer significant theoretical speed-ups and enhanced power.
arXiv Detail & Related papers (2024-03-27T13:59:09Z) - On-Chip Hardware-Aware Quantization for Mixed Precision Neural Networks [52.97107229149988]
We propose an On-Chip Hardware-Aware Quantization framework, performing hardware-aware mixed-precision quantization on deployed edge devices.
For efficiency metrics, we built an On-Chip Quantization Aware pipeline, which allows the quantization process to perceive the actual hardware efficiency of the quantization operator.
For accuracy metrics, we propose Mask-Guided Quantization Estimation technology to effectively estimate the accuracy impact of operators in the on-chip scenario.
arXiv Detail & Related papers (2023-09-05T04:39:34Z) - Direct parameter estimations from machine-learning enhanced quantum state tomography [3.459382629188014]
Machine-learning enhanced quantum state tomography (QST) has demonstrated its advantages in extracting complete information about the quantum states.
We develop a high-performance, lightweight, and easy-to-install supervised characteristic model by generating the target parameters directly.
Such a characteristic model-based ML-QST can avoid the problem of dealing with a large Hilbert space while keeping high-precision feature extraction.
arXiv Detail & Related papers (2022-03-30T15:16:02Z) - MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models can achieve superior performance on most NLP tasks due to large parameter capacity, but also lead to huge computation cost.
We explore accelerating large-model inference by conditional computation based on the sparse activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication.
arXiv Detail & Related papers (2021-10-05T02:14:38Z) - Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Physics-informed CoKriging model of a redox flow battery [68.8204255655161]
Redox flow batteries (RFBs) offer the capability to store large amounts of energy cheaply and efficiently.
There is a need for fast and accurate models of the charge-discharge curve of an RFB to potentially improve the battery capacity and performance.
We develop a multifidelity model for predicting the charge-discharge curve of an RFB.
arXiv Detail & Related papers (2021-06-17T00:49:55Z) - Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z) - Highly Efficient Salient Object Detection with 100K Parameters [137.74898755102387]
We propose a flexible convolutional module, namely generalized OctConv (gOctConv), to efficiently utilize both in-stage and cross-stage multi-scale features.
We build an extremely lightweight model, namely CSNet, which achieves comparable performance with only about 0.2% of the parameters (100K) of large models on popular salient object detection benchmarks.
arXiv Detail & Related papers (2020-03-12T07:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.