Q-SCALE: Quantum computing-based Sensor Calibration for Advanced Learning and Efficiency
- URL: http://arxiv.org/abs/2410.02998v1
- Date: Thu, 3 Oct 2024 21:15:05 GMT
- Title: Q-SCALE: Quantum computing-based Sensor Calibration for Advanced Learning and Efficiency
- Authors: Lorenzo Bergadano, Andrea Ceschini, Pietro Chiavassa, Edoardo Giusto, Bartolomeo Montrucchio, Massimo Panella, Antonello Rosato
- Abstract summary: This article investigates the process of calibrating inexpensive optical fine-dust sensors through advanced methodologies such as Deep Learning (DL) and Quantum Machine Learning (QML).
The objective of the project is to compare four sophisticated algorithms from both the classical and quantum realms to discern their disparities and explore possible alternative approaches to improve the precision and dependability of particulate matter measurements in urban air quality surveillance.
- Score: 1.2564343689544841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a world burdened by air pollution, the integration of state-of-the-art sensor calibration techniques utilizing Quantum Computing (QC) and Machine Learning (ML) holds promise for enhancing the accuracy and efficiency of air quality monitoring systems in smart cities. This article investigates the process of calibrating inexpensive optical fine-dust sensors through advanced methodologies such as Deep Learning (DL) and Quantum Machine Learning (QML). The objective of the project is to compare four sophisticated algorithms from both the classical and quantum realms to discern their disparities and explore possible alternative approaches to improve the precision and dependability of particulate matter measurements in urban air quality surveillance. Classical Feed-Forward Neural Networks (FFNN) and Long Short-Term Memory (LSTM) models are evaluated against their quantum counterparts: Variational Quantum Regressors (VQR) and Quantum LSTM (QLSTM) circuits. Through meticulous testing, including hyperparameter optimization and cross-validation, the study assesses the potential of quantum models to refine calibration performance. Our analysis shows that: the FFNN model achieved superior calibration accuracy on the test set compared to the VQR model in terms of lower L1 loss function (2.92 vs 4.81); the QLSTM slightly outperformed the LSTM model (loss on the test set: 2.70 vs 2.77), despite using fewer trainable weights (66 vs 482).
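The listing above does not include code. As a rough, hypothetical sketch of what a Variational Quantum Regressor (VQR) for sensor calibration might look like, the PennyLane example below angle-encodes sensor features, applies a trainable entangling ansatz, and fits a rescaled Pauli-Z expectation value with an L1 loss. The qubit count, feature encoding, ansatz, rescaling, and optimizer settings are illustrative assumptions, not the architecture evaluated in Q-SCALE.

```python
# Hypothetical sketch: a minimal variational quantum regressor (VQR) trained
# with an L1 loss on synthetic data. All architectural choices below are
# illustrative assumptions, not the circuit evaluated in the Q-SCALE paper.
import numpy as onp                    # plain NumPy for the synthetic data
import pennylane as qml
from pennylane import numpy as pnp     # autograd-aware NumPy for trainables

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqr_circuit(features, weights):
    # Angle-encode the (pre-scaled) low-cost sensor features onto the qubits
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Trainable entangling layers act as the variational ansatz
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # A single Pauli-Z expectation value is the raw (bounded) regression output
    return qml.expval(qml.PauliZ(0))

def predict(features, weights, scale, bias):
    # Map the expectation value from [-1, 1] onto the reference PM scale
    return scale * vqr_circuit(features, weights) + bias

def l1_loss(weights, scale, bias, X=None, y=None):
    # Mean absolute error, matching the L1 criterion reported in the abstract
    total = 0.0
    for xi, yi in zip(X, y):
        total = total + pnp.abs(predict(xi, weights, scale, bias) - yi)
    return total / len(X)

# Synthetic stand-ins for sensor features and reference PM2.5 values
X = onp.random.uniform(0.0, onp.pi, size=(16, n_qubits))
y = onp.random.uniform(5.0, 50.0, size=16)

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = pnp.array(onp.random.uniform(0, 2 * onp.pi, size=shape), requires_grad=True)
scale = pnp.array(20.0, requires_grad=True)
bias = pnp.array(25.0, requires_grad=True)

opt = qml.GradientDescentOptimizer(stepsize=0.05)
for step in range(20):
    (weights, scale, bias), loss = opt.step_and_cost(
        l1_loss, weights, scale, bias, X=X, y=y
    )
    if step % 5 == 0:
        print(f"step {step:2d}  mean L1 loss {float(loss):.2f}")
```

A quantum LSTM follows a related idea one level up: the linear transformations inside each LSTM gate are replaced by small variational circuits, which is consistent with the abstract's observation that the QLSTM matches the classical LSTM while using far fewer trainable weights (66 vs 482).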
Related papers
- Quantum and Hybrid Machine-Learning Models for Materials-Science Tasks [0.0]
We design and estimate quantum machine learning and hybrid quantum-classical models.
We predict stacking fault energies and solutes that can ductilize magnesium.
arXiv Detail & Related papers (2025-07-10T20:29:16Z) - Toward Practical Quantum Machine Learning: A Novel Hybrid Quantum LSTM for Fraud Detection [0.1398098625978622]
We present a novel hybrid quantum-classical neural network architecture for fraud detection.
By leveraging quantum phenomena such as superposition and entanglement, our model enhances the feature representation of sequential transaction data.
Results demonstrate competitive improvements in accuracy, precision, recall, and F1 score relative to a conventional LSTM baseline.
arXiv Detail & Related papers (2025-04-30T19:09:12Z) - Evaluating Effects of Augmented SELFIES for Molecular Understanding Using QK-LSTM [2.348041867134616]
Identifying molecular properties, including side effects, is a critical yet time-consuming step in drug development.
Recent advancements have been made in the classical domain using augmented variations of the Simplified Molecular Input Line Entry System (SMILES).
This study presents the first analysis of these approaches, providing novel insights into their potential for enhancing molecular property prediction and side effect identification.
arXiv Detail & Related papers (2025-04-29T14:03:31Z) - Learning to Measure Quantum Neural Networks [10.617463958884528]
We introduce a novel approach that makes the observable of the quantum system-specifically, the Hermitian matrix-learnable.
Our method features an end-to-end differentiable learning framework, where the parameterized observable is trained alongside the ordinary quantum circuit parameters.
Using numerical simulations, we show that the proposed method can identify observables for variational quantum circuits that lead to improved outcomes.
arXiv Detail & Related papers (2025-01-10T02:28:19Z) - Leveraging Pre-Trained Neural Networks to Enhance Machine Learning with Variational Quantum Circuits [48.33631905972908]
We introduce an innovative approach that utilizes pre-trained neural networks to enhance Variational Quantum Circuits (VQC).
This technique effectively separates approximation error from qubit count and removes the need for restrictive conditions.
Our results extend to applications such as human genome analysis, demonstrating the broad applicability of our approach.
arXiv Detail & Related papers (2024-11-13T12:03:39Z) - A Quantum Circuit-Based Compression Perspective for Parameter-Efficient Learning [19.178352290785153]
We introduce Quantum Parameter Adaptation (QPA) in the framework of quantum parameter generation.
QPA integrates QNNs with a classical multi-layer perceptron mapping model to generate parameters for fine-tuning methods.
Using Gemma-2 and GPT-2 as case studies, QPA demonstrates significant parameter reduction for parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-10-13T14:09:29Z) - Quantum Long Short-Term Memory for Drug Discovery [15.186004892998382]
We present Quantum Long Short-Term Memory (QLSTM), a QML architecture, and demonstrate its effectiveness in drug discovery.
We observe consistent performance gains over classical LSTM, with ROC-AUC improvements ranging from 3% to over 6%.
QLSTM exhibits improved predictive accuracy as the number of qubits increases, and faster convergence than classical LSTM.
arXiv Detail & Related papers (2024-07-29T10:10:03Z) - EfficientQAT: Efficient Quantization-Aware Training for Large Language Models [50.525259103219256]
Quantization-aware training (QAT) offers a solution by reducing memory consumption through low-bit representations with minimal accuracy loss.
We propose Efficient Quantization-Aware Training (EfficientQAT), a more feasible QAT algorithm.
EfficientQAT involves two consecutive phases: block-wise training of all parameters (Block-AP) and end-to-end training of quantization parameters (E2E-QP).
arXiv Detail & Related papers (2024-07-10T17:53:30Z) - A Quantum Neural Network-Based Approach to Power Quality Disturbances Detection and Recognition [15.789631792979366]
Power quality disturbances (PQDs) significantly impact the stability and reliability of power systems.
This paper proposes an improved quantum neural network (QNN) model for PQD detection and recognition.
The model achieves accuracies of 99.75%, 97.85%, and 95.5% in experiments involving the detection of disturbances, recognition of seven single disturbances, and recognition of ten mixed disturbances.
arXiv Detail & Related papers (2024-06-05T09:10:11Z) - SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models [67.67135738642547]
Post-training quantization (PTQ) is a powerful compression technique investigated in large language models (LLMs).
Existing PTQ methods are not ideal in terms of accuracy and efficiency, especially at bit-widths below 4.
This paper presents a Salience-Driven Mixed-Precision Quantization scheme for LLMs, namely SliM-LLM.
arXiv Detail & Related papers (2024-05-23T16:21:48Z) - Optical Quantum Sensing for Agnostic Environments via Deep Learning [59.088205627308]
We introduce an innovative Deep Learning-based Quantum Sensing scheme.
It enables optical quantum sensors to attain the Heisenberg limit (HL) in agnostic environments.
Our findings offer a new lens through which to accelerate optical quantum sensing tasks.
arXiv Detail & Related papers (2023-11-13T09:46:05Z) - QKSAN: A Quantum Kernel Self-Attention Network [53.96779043113156]
A Quantum Kernel Self-Attention Mechanism (QKSAM) is introduced to combine the data representation merit of Quantum Kernel Methods (QKM) with the efficient information extraction capability of SAM.
A Quantum Kernel Self-Attention Network (QKSAN) framework is proposed based on QKSAM, which ingeniously incorporates the Deferred Measurement Principle (DMP) and conditional measurement techniques.
Four QKSAN sub-models are deployed on PennyLane and IBM Qiskit platforms to perform binary classification on MNIST and Fashion MNIST.
arXiv Detail & Related papers (2023-08-25T15:08:19Z) - A Novel Spatial-Temporal Variational Quantum Circuit to Enable Deep Learning on NISQ Devices [12.873184000122542]
This paper proposes a novel spatial-temporal design, namely ST-VQC, to integrate non-linearity in quantum learning.
ST-VQC can achieve over 30% accuracy improvement compared with existing VQCs on actual quantum computers.
arXiv Detail & Related papers (2023-07-19T06:17:16Z) - Weight Re-Mapping for Variational Quantum Algorithms [54.854986762287126]
We introduce the concept of weight re-mapping for variational quantum circuits (VQCs).
We employ seven distinct weight re-mapping functions to assess their impact on eight classification datasets.
Our results indicate that weight re-mapping can enhance the convergence speed of the VQC (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-06-09T09:42:21Z) - Quantum machine learning for image classification [39.58317527488534]
This research introduces two quantum machine learning models that leverage the principles of quantum mechanics for effective computations.
Our first model, a hybrid quantum neural network with parallel quantum circuits, enables the execution of computations even in the noisy intermediate-scale quantum era.
A second model introduces a hybrid quantum neural network with a Quanvolutional layer, reducing image resolution via a convolution process.
arXiv Detail & Related papers (2023-04-18T18:23:20Z) - Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z) - Learnable Companding Quantization for Accurate Low-bit Neural Networks [3.655021726150368]
Quantizing deep neural networks is an effective method for reducing memory consumption and improving inference speed.
It is still hard for extremely low-bit models to achieve accuracy comparable with that of full-precision models.
We propose learnable companding quantization (LCQ) as a novel non-uniform quantization method for 2-, 3-, and 4-bit models.
arXiv Detail & Related papers (2021-03-12T09:06:52Z)
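The "Weight Re-Mapping for Variational Quantum Algorithms" entry above describes passing unbounded VQC weights through bounded functions before they enter the circuit. The library-free toy example below is a rough illustration of that idea, not the paper's implementation: a weight is squashed by a tanh scaled to [-π, π] and evaluated through the analytic single-qubit identity ⟨Z⟩ = cos(θ) after an RY(θ) rotation. The choice of tanh and the single-qubit setting are assumptions made purely for illustration.

```python
# Toy illustration of weight re-mapping for a variational quantum circuit.
# The tanh-based mapping and single-qubit example are assumptions for
# illustration, not the exact re-mapping functions studied in the paper.
import numpy as np

def remap_tanh(w):
    # Squash an unbounded trainable weight into the bounded interval [-pi, pi]
    # before it is used as a rotation angle inside the circuit.
    return np.pi * np.tanh(w)

def ry_z_expectation(theta):
    # For a qubit prepared in |0> and rotated by RY(theta), <Z> = cos(theta),
    # so this single-qubit case needs no simulator.
    return np.cos(theta)

# Large raw weights saturate smoothly under the re-mapping, keeping the
# effective rotation angle bounded instead of wrapping around many periods.
for w in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(f"w={w:+6.1f}  <Z> raw={ry_z_expectation(w):+.3f}  "
          f"<Z> re-mapped={ry_z_expectation(remap_tanh(w)):+.3f}")
```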
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.