Quantum-Classical Hybrid Framework for Zero-Day Time-Push GNSS Spoofing Detection
- URL: http://arxiv.org/abs/2508.18085v1
- Date: Mon, 25 Aug 2025 14:46:22 GMT
- Title: Quantum-Classical Hybrid Framework for Zero-Day Time-Push GNSS Spoofing Detection
- Authors: Abyad Enan, Mashrur Chowdhury, Sagar Dasgupta, Mizanur Rahman,
- Abstract summary: We develop a zero-day spoofing detection method using a Hybrid Quantum-Classical Autoencoder (HQC-AE). We focus on spoofing detection in static receivers, where attackers manipulate timing information to induce incorrect time computations at the receiver. Our analysis demonstrates that the HQC-AE consistently outperforms its classical counterpart, traditional supervised learning-based models, and existing unsupervised learning-based methods.
- Score: 8.560939383123657
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Global Navigation Satellite Systems (GNSS) are critical for Positioning, Navigation, and Timing (PNT) applications. However, GNSS are highly vulnerable to spoofing attacks, where adversaries transmit counterfeit signals to mislead receivers. Such attacks can lead to severe consequences, including misdirected navigation, compromised data integrity, and operational disruptions. Most existing spoofing detection methods depend on supervised learning techniques and struggle to detect novel, evolved, and unseen attacks. To overcome this limitation, we develop a zero-day spoofing detection method using a Hybrid Quantum-Classical Autoencoder (HQC-AE), trained solely on authentic GNSS signals without exposure to spoofed data. By leveraging features extracted during the tracking stage, our method enables proactive detection before PNT solutions are computed. We focus on spoofing detection in static GNSS receivers, which are particularly susceptible to time-push spoofing attacks, where attackers manipulate timing information to induce incorrect time computations at the receiver. We evaluate our model against different unseen time-push spoofing attack scenarios: simplistic, intermediate, and sophisticated. Our analysis demonstrates that the HQC-AE consistently outperforms its classical counterpart, traditional supervised learning-based models, and existing unsupervised learning-based methods in detecting zero-day, unseen GNSS time-push spoofing attacks, achieving an average detection accuracy of 97.71% with an average false negative rate of 0.62% (when an attack occurs but is not detected). For sophisticated spoofing attacks, the HQC-AE attains an accuracy of 98.23% with a false negative rate of 1.85%. These findings highlight the effectiveness of our method in proactively detecting zero-day GNSS time-push spoofing attacks across various stationary GNSS receiver platforms.
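The detection principle described in the abstract — an autoencoder trained only on authentic signals, flagging inputs whose reconstruction error exceeds a threshold calibrated on clean data — can be sketched classically. The minimal NumPy illustration below uses synthetic stand-in features and a plain linear autoencoder (truncated SVD); the paper's hybrid quantum-classical architecture, feature set, and threshold calibration are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "authentic" tracking-stage features (synthetic; the paper's
# actual feature set and data are not reproduced here).
train = rng.normal(0.0, 1.0, size=(500, 8))

# Linear autoencoder via truncated SVD: encode to 3 dims, decode back.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
basis = vt[:3]                          # tied encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ basis.T            # encode
    x_hat = z @ basis + mean            # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Threshold calibrated on authentic data only, e.g. the 99th percentile.
threshold = np.quantile(reconstruction_error(train), 0.99)

def is_spoofed(x):
    return reconstruction_error(x) > threshold

anomaly = np.full(8, 6.0)               # far from the authentic distribution
print(bool(is_spoofed(anomaly)))        # True: flagged as a zero-day anomaly
```

Because the detector never sees spoofed samples during training, any attack that pushes the tracking features away from the authentic manifold raises the reconstruction error, which is what makes the approach zero-day capable.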
Related papers
- GPS Spoofing Attack Detection in Autonomous Vehicles Using Adaptive DBSCAN [1.932372263677091]
This study presents an adaptive detection approach utilizing a dynamically tuned Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The modified algorithm effectively identifies turn-by-turn, stop, overshoot, and multiple small-biased spoofing attacks.
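As a hedged illustration of the density-based idea — positions that fall outside any dense cluster of the vehicle's recent track get flagged as potential spoofing — here is a minimal from-scratch DBSCAN sketch. The paper's adaptive parameter tuning is not reproduced; `eps` and `min_pts` below are arbitrary illustrative values.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns a cluster id per point; -1 marks noise/outliers."""
    n = len(points)
    labels = np.full(n, -1)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue                      # not an unvisited core point
        stack = [i]
        visited[i] = True
        while stack:                      # grow the cluster from core point i
            j = stack.pop()
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:
                for q in neighbors[j]:
                    if not visited[q]:
                        visited[q] = True
                        stack.append(q)
        cluster += 1
    return labels

# A plausible vehicle track plus one abrupt spoofed position.
rng = np.random.default_rng(1)
track = rng.normal(0.0, 0.5, size=(30, 2))
spoofed = np.array([[10.0, 10.0]])
labels = dbscan(np.vstack([track, spoofed]), eps=2.0, min_pts=4)
print(labels[-1])  # -1: the spoofed jump falls outside every dense cluster
```

Dynamically tuning `eps` to the vehicle's current speed and sampling rate (the adaptive part described above) is what lets the same mechanism catch both abrupt jumps and slow biases.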
arXiv Detail & Related papers (2025-10-12T19:06:44Z) - Real-Time Bayesian Detection of Drift-Evasive GNSS Spoofing in Reinforcement Learning Based UAV Deconfliction [6.956559003734227]
Autonomous unmanned aerial vehicles (UAVs) rely on global navigation satellite system (GNSS) pseudorange measurements for accurate real-time localization and navigation. This dependence exposes them to sophisticated spoofing threats, where adversaries manipulate pseudoranges to deceive UAV receivers. Traditional distributional shift detection techniques often require accumulating a threshold number of samples, causing delays that impede rapid detection and timely response. This study explores a Bayesian online change point detection (BOCPD) approach that monitors temporal shifts in value estimates from a reinforcement learning (RL) critic network to detect subtle behavioural deviations in UAV navigation.
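Full BOCPD maintains a posterior over run lengths; as a much simpler classical stand-in for the same monitoring idea — flagging a sustained shift in a stream of critic value estimates without waiting for a large batch — a two-sided CUSUM detector can be sketched. All parameters here are illustrative assumptions, not values from the paper.

```python
def cusum_detect(stream, target=0.0, drift=0.1, threshold=2.0):
    """Two-sided CUSUM: return the index of the first detected shift, else None."""
    g_pos = g_neg = 0.0
    for t, x in enumerate(stream):
        g_pos = max(0.0, g_pos + (x - target) - drift)  # upward shifts
        g_neg = max(0.0, g_neg - (x - target) - drift)  # downward shifts
        if g_pos > threshold or g_neg > threshold:
            return t
    return None

# Stable critic values, then a slow drift (the evasive pattern of interest).
stream = [0.0] * 50 + [0.05 * k for k in range(1, 41)]
print(cusum_detect(stream))  # fires shortly after the drift begins
```

Like BOCPD, CUSUM updates online per sample, so a drift-evasive attack that stays under any single-sample threshold is still caught once its cumulative effect builds up.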
arXiv Detail & Related papers (2025-07-15T10:27:27Z) - DataSentinel: A Game-Theoretic Detection of Prompt Injection Attacks [101.52204404377039]
LLM-integrated applications and agents are vulnerable to prompt injection attacks. A detection method aims to determine whether a given input is contaminated by an injected prompt. We propose DataSentinel, a game-theoretic method to detect prompt injection attacks.
arXiv Detail & Related papers (2025-04-15T16:26:21Z) - Time-based GNSS attack detection [0.0]
Cross-checking the receiver-provided time against alternative trusted time sources enables detection of attacks that aim to control the receiver time. We implement adversaries spanning from simplistic spoofers to advanced ones synchronized with the constellation. The method is largely agnostic to the satellite constellation and the attacker type, making time-based data validation compatible with existing receivers and readily deployable.
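The cross-checking idea can be sketched as a simple majority test against trusted references (e.g., NTP servers or a disciplined local oscillator). The offset bound and voting rule below are illustrative assumptions, not the paper's method.

```python
def time_attack_suspected(gnss_time_s, reference_times_s, max_offset_s=0.5):
    """Flag when GNSS-derived time disagrees with a majority of trusted sources."""
    disagreements = sum(
        abs(gnss_time_s - ref) > max_offset_s for ref in reference_times_s
    )
    return disagreements > len(reference_times_s) // 2

# Receiver pushed 3 s into the future; two NTP-style references agree on truth.
print(time_attack_suspected(1_700_000_003.0, [1_700_000_000.1, 1_699_999_999.9]))
```

Because the check depends only on the computed time and external references, it is agnostic to which constellation produced the fix and to how the spoofer crafted its signals.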
arXiv Detail & Related papers (2025-02-06T08:28:41Z) - Experimental Validation of Sensor Fusion-based GNSS Spoofing Attack Detection Framework for Autonomous Vehicles [5.624009710240032]
We present a sensor fusion-based spoofing attack detection framework for Autonomous Vehicles.
Experiments are conducted in Tuscaloosa, AL, mimicking urban road structures.
Results demonstrate the framework's ability to detect various sophisticated spoofing attacks, even including slow drifting attacks.
arXiv Detail & Related papers (2024-01-02T17:30:46Z) - The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
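The side channel itself is just the wall-clock time of the post-processing step. A minimal measurement harness is sketched below, with a toy stand-in whose cost grows with the number of candidate boxes; the real attack measures an actual detector's NMS, which is not reproduced here.

```python
import time

def toy_nms(boxes):
    """Stand-in post-processing whose cost grows with the number of inputs."""
    kept = []
    for b in boxes:
        if all(abs(b - k) > 1.0 for k in kept):  # crude pairwise suppression
            kept.append(b)
    return kept

def median_runtime(fn, arg, repeats=200):
    """Median wall-clock time of fn(arg): the adversary's observable."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(arg)
        samples.append(time.perf_counter() - t0)
    samples.sort()
    return samples[len(samples) // 2]

few = [float(i) for i in range(10)]
many = [i * 0.1 for i in range(2000)]
# More candidates -> more suppression work -> measurably longer runtime.
print(median_runtime(toy_nms, few) < median_runtime(toy_nms, many))
```

Since NMS runtime scales with the number of candidate detections, the measured time leaks how many boxes the model produced, which the attack then exploits.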
arXiv Detail & Related papers (2023-09-05T11:53:17Z) - A Reinforcement Learning Approach for GNSS Spoofing Attack Detection of Autonomous Vehicles [3.918774449495583]
This paper develops a deep reinforcement learning (RL)-based turn-by-turn spoofing attack detection using low-cost in-vehicle sensor data.
We find that the accuracy of the RL model ranges from 99.99% to 100%, and the recall value is 100%.
Overall, the analyses reveal that the RL model is effective in turn-by-turn spoofing attack detection.
arXiv Detail & Related papers (2021-08-19T11:48:27Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
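The smoothing construction can be illustrated with a minimal randomized-smoothing sketch: sample Gaussian noise around the observation and take the majority action. The certificate computation from the paper is not reproduced, and the toy policy below is an assumption for illustration only.

```python
import numpy as np

def smoothed_action(policy, obs, sigma=0.1, n_samples=200, seed=0):
    """Majority-vote action over Gaussian-perturbed copies of the observation."""
    rng = np.random.default_rng(seed)
    noisy = obs + rng.normal(0.0, sigma, size=(n_samples,) + obs.shape)
    actions = np.array([policy(o) for o in noisy])
    values, counts = np.unique(actions, return_counts=True)
    return values[counts.argmax()]

# Toy deterministic policy: action 1 if the first feature is positive.
policy = lambda o: int(o[0] > 0.0)
obs = np.array([1.0, -0.5])
print(smoothed_action(policy, obs))  # far from the decision boundary -> 1
```

The intuition behind the certificates is that when the vote margin is large, no small (norm-bounded) shift of `obs` can flip the majority action, so the smoothed policy's behavior (and hence its reward) is provably stable.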
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of CNNs and learns better features for the face presentation attack detection (fPAD) task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z) - Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.