Detection of Physiological Data Tampering Attacks with Quantum Machine Learning
- URL: http://arxiv.org/abs/2502.05966v1
- Date: Sun, 09 Feb 2025 17:26:41 GMT
- Title: Detection of Physiological Data Tampering Attacks with Quantum Machine Learning
- Authors: Md. Saif Hassan Onim, Himanshu Thapliyal
- Abstract summary: This study compares the effectiveness of Quantum Machine Learning (QML) for detecting physiological data tampering.
QML models are better at identifying label-flipping attacks, achieving accuracy rates of 75%-95% depending on the data and attack severity.
However, both QML and classical models struggle to detect more sophisticated adversarial perturbation attacks.
- Abstract: The widespread use of cloud-based medical devices and wearable sensors has made physiological data susceptible to tampering. Such attacks can compromise the reliability of healthcare systems, with potentially life-threatening consequences, so detecting data tampering is an immediate need. Machine learning has been used to detect anomalies in datasets, but the performance of Quantum Machine Learning (QML) has not yet been evaluated on physiological sensor data. Our study therefore compares the effectiveness of QML for detecting physiological data tampering, focusing on two types of white-box attacks: data poisoning and adversarial perturbation. The results show that QML models are better at identifying label-flipping attacks, achieving accuracy rates of 75%-95% depending on the data and attack severity. This superior performance is attributed to the ability of quantum algorithms to handle complex, high-dimensional data. However, both QML and classical models struggle to detect more sophisticated adversarial perturbation attacks, which subtly alter data without changing its statistical properties. Although QML performed poorly against this attack, with around 45%-65% accuracy, it still outperformed classical algorithms in some cases.
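The label-flipping (data-poisoning) attack described in the abstract, together with a classical detection baseline, can be sketched in a few lines. This is a minimal illustration on synthetic Gaussian data, not the paper's pipeline: the abstract does not specify the physiological datasets, the QML models, or the detector, so the two-class Gaussian features and the k-nearest-neighbour disagreement detector below are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for physiological sensor windows: two well-separated
# Gaussian classes (the real datasets and features are not given in the abstract).
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)),
               rng.normal(2.0, 1.0, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Label-flipping attack: invert a random 30% of the labels.
flip_idx = rng.choice(len(y), size=int(0.3 * len(y)), replace=False)
y_poisoned = y.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

# Classical baseline detector: flag a sample whose (possibly poisoned) label
# disagrees with the majority label of its k nearest neighbours.
def knn_disagreement(X, labels, k=15):
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each point from its own vote
    nn = np.argsort(d, axis=1)[:, :k]      # indices of the k nearest neighbours
    majority = labels[nn].mean(axis=1) > 0.5
    return labels != majority.astype(int)

flagged = knn_disagreement(X, y_poisoned)
truly_flipped = np.zeros(len(y), dtype=bool)
truly_flipped[flip_idx] = True
detection_accuracy = (flagged == truly_flipped).mean()
print(f"tamper-detection accuracy: {detection_accuracy:.2f}")
```

On well-separated synthetic classes a simple neighbourhood-vote detector already does well; the paper's point is that on real sensor data QML models reach 75%-95% where such classical baselines fall short.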
Related papers
- Adversarial Poisoning Attack on Quantum Machine Learning Models [2.348041867134616]
We introduce a quantum indiscriminate data poisoning attack, QUID.
QUID achieves up to 92% accuracy degradation in model performance compared to baseline models.
We also tested QUID against state-of-the-art classical defenses, with accuracy degradation still exceeding 50%.
arXiv Detail & Related papers (2024-11-21T18:46:45Z)
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks are proven to be vulnerable to data poisoning attacks.
Detecting poisoned samples in a mixed dataset is both beneficial and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- Machine Learning for ALSFRS-R Score Prediction: Making Sense of the Sensor Data [44.99833362998488]
Amyotrophic Lateral Sclerosis (ALS) is a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options.
The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app.
arXiv Detail & Related papers (2024-07-10T19:17:23Z)
- Can We Enhance the Quality of Mobile Crowdsensing Data Without Ground Truth? [45.875832406278214]
This article proposes a prediction- and reputation-based truth discovery framework.
It can separate low-quality data from high-quality data in sensing tasks.
It outperforms existing methods in terms of identification accuracy and data quality enhancement.
arXiv Detail & Related papers (2024-05-29T03:16:12Z)
- A Review and Implementation of Object Detection Models and Optimizations for Real-time Medical Mask Detection during the COVID-19 Pandemic [0.0]
This work assesses the most fundamental object detection models on the Common Objects in Context (COCO) dataset.
We select a highly efficient model called YOLOv5 to train on the topical and unexplored dataset of human faces with medical masks.
We propose an optimized model based on YOLOv5 using transfer learning for the detection of correctly and incorrectly worn medical masks.
arXiv Detail & Related papers (2024-05-28T17:27:24Z)
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z)
- Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z)
- Exploring the Vulnerabilities of Machine Learning and Quantum Machine Learning to Adversarial Attacks using a Malware Dataset: A Comparative Analysis [0.0]
Machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems.
Their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications.
We present a comparative analysis of the vulnerability of ML and QNN models to adversarial attacks using a malware dataset.
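The adversarial perturbation threat examined here (and in the parent paper above) can be illustrated with an FGSM-style attack on a linear model. This is a hedged sketch, not the paper's method: the attack algorithm, the models, and the malware features are not specified in the summary, so the fixed linear classifier, the toy Gaussian features, and the step size `eps` below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the (unspecified) malware features: two Gaussian classes.
X = np.vstack([rng.normal(-1.0, 1.0, (300, 8)),
               rng.normal(1.0, 1.0, (300, 8))])
y = np.array([-1] * 300 + [1] * 300)           # labels in {-1, +1}

# A fixed linear model standing in for a trained classifier
# (w points from class -1 towards class +1).
w = np.ones(8) / np.sqrt(8)

def accuracy(X, y, w):
    return (np.sign(X @ w) == y).mean()

# FGSM-style perturbation: step each input in the direction that increases
# the logistic loss. For a linear model, sign(grad_x loss) = -y * sign(w),
# so x_adv = x - eps * y * sign(w).
eps = 0.8
X_adv = X - eps * y[:, None] * np.sign(w)[None, :]

print(f"clean: {accuracy(X, y, w):.2f}  adversarial: {accuracy(X_adv, y, w):.2f}")
```

Even this tiny perturbation budget noticeably degrades a linear model, which is the vulnerability the comparative analysis probes for both ML and QNN models.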
arXiv Detail & Related papers (2023-05-31T06:31:42Z)
- Effect of Balancing Data Using Synthetic Data on the Performance of Machine Learning Classifiers for Intrusion Detection in Computer Networks [3.233545237942899]
Researchers in academia and industry used machine learning (ML) techniques to design and implement Intrusion Detection Systems (IDSes) for computer networks.
In many of the datasets used in such systems, data are imbalanced (i.e., not all classes have an equal number of samples).
We show that training ML models on datasets balanced with synthetic samples generated by CTGAN increased prediction accuracy by up to 8%.
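The balancing pipeline can be sketched as follows. The paper uses CTGAN to generate the synthetic minority-class samples; as a deliberately simple stand-in, the `synthesize` helper below fits a single multivariate Gaussian to the minority class. This only illustrates the oversampling step and the resulting class balance, not CTGAN itself, and the "intrusion" features are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)

# Imbalanced toy "intrusion" dataset: 500 benign vs 50 attack samples
# (feature semantics are illustrative, not from the paper).
X_benign = rng.normal(0.0, 1.0, (500, 5))
X_attack = rng.normal(1.5, 0.5, (50, 5))

def synthesize(X_minority, n_new, rng):
    """Stand-in for CTGAN: sample from a Gaussian fitted to the minority class.
    The paper's generator is CTGAN; this cheap model only shows the pipeline."""
    mu = X_minority.mean(axis=0)
    cov = np.cov(X_minority, rowvar=False)
    return rng.multivariate_normal(mu, cov, size=n_new)

# Generate enough synthetic attack samples to match the benign class.
n_new = len(X_benign) - len(X_attack)
X_synth = synthesize(X_attack, n_new, rng)

X_balanced = np.vstack([X_benign, X_attack, X_synth])
y_balanced = np.array([0] * len(X_benign) + [1] * (len(X_attack) + n_new))
print(f"class counts after balancing: {np.bincount(y_balanced)}")
```

An IDS classifier would then be trained on `(X_balanced, y_balanced)` instead of the raw imbalanced data.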
arXiv Detail & Related papers (2022-04-01T00:25:11Z)
- SOUL: An Energy-Efficient Unsupervised Online Learning Seizure Detection Classifier [68.8204255655161]
Implantable devices that record neural activity and detect seizures have been adopted to issue warnings or trigger neurostimulation to suppress seizures.
For an implantable seizure detection system, a low power, at-the-edge, online learning algorithm can be employed to dynamically adapt to neural signal drifts.
SOUL was fabricated in TSMC's 28 nm process, occupies 0.1 mm^2, and achieves 1.5 nJ/classification energy efficiency, at least 24x more efficient than the state of the art.
arXiv Detail & Related papers (2021-10-01T23:01:20Z)
- Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning [0.0]
We propose a model-agnostic, explainability-based method for the accurate detection of adversarial samples on two datasets.
On the MIMIC-III and Henan-Renmin EHR datasets, we report a detection accuracy of 77% against the Longitudinal Adversarial Attack.
On the MIMIC-CXR dataset, we achieve an accuracy of 88%, significantly improving on the state of the art in adversarial detection on both datasets by over 10% in all settings.
arXiv Detail & Related papers (2021-05-05T10:01:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.