Spoofing Attack Detection in the Physical Layer with Robustness to User Movement
- URL: http://arxiv.org/abs/2310.11043v1
- Date: Tue, 17 Oct 2023 07:18:03 GMT
- Title: Spoofing Attack Detection in the Physical Layer with Robustness to User Movement
- Authors: Daniel Romero, Tien Ngoc Ha, and Peter Gerstoft
- Abstract summary: In a spoofing attack, an attacker impersonates a legitimate user to access or modify data belonging to the latter.
This paper proposes a scheme that combines the decisions of a position-change detector based on a deep neural network to distinguish spoofing from movement.
- Score: 20.705184880085557
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a spoofing attack, an attacker impersonates a legitimate user to access or
modify data belonging to the latter. Typical approaches for spoofing detection
in the physical layer declare an attack when a change is observed in certain
channel features, such as the received signal strength (RSS) measured by
spatially distributed receivers. However, since channels change over time, for
example due to user movement, such approaches are impractical. To sidestep this
limitation, this paper proposes a scheme that combines the decisions of a
position-change detector based on a deep neural network to distinguish spoofing
from movement. Building upon community detection on graphs, the sequence of
received frames is partitioned into subsequences to detect concurrent
transmissions from distinct locations. The scheme can be easily deployed in
practice since it just involves collecting a small dataset of measurements at a
few tens of locations that need not even be computed or recorded. The scheme is
evaluated on real data collected for this purpose.
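As a rough illustration of the partitioning idea in the abstract, the sketch below groups received frames by RSS-vector similarity, using connected components of a thresholded similarity graph as a simpler stand-in for the paper's community detection, and flags spoofing when two groups interleave in time rather than one simply succeeding the other (pure movement). The threshold and clustering rule are illustrative assumptions, not the authors' algorithm.

```python
# Frames whose RSS vectors are close are grouped together; interleaving
# between groups suggests two transmitters (user and spoofer) are active
# concurrently. Union-find implements connected components.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def partition_frames(rss_vectors, threshold):
    """Group frames via connected components of a similarity graph."""
    n = len(rss_vectors)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if euclidean(rss_vectors[i], rss_vectors[j]) < threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

def concurrent_transmissions(labels):
    """Flag spoofing when a group re-appears after another group (A B A),
    rather than one group simply following another (A A B B)."""
    seen_done = set()
    last = None
    for lab in labels:
        if lab != last:
            if lab in seen_done:
                return True
            if last is not None:
                seen_done.add(last)
            last = lab
    return False
```

For example, a frame sequence alternating between two RSS clusters triggers the flag, while a clean transition between clusters (consistent with movement) does not.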
Related papers
- Real-Time Bayesian Detection of Drift-Evasive GNSS Spoofing in Reinforcement Learning Based UAV Deconfliction [6.956559003734227]
Autonomous unmanned aerial vehicles (UAVs) rely on global navigation satellite system (GNSS) pseudorange measurements for accurate real-time localization and navigation.
This dependence exposes them to sophisticated spoofing threats, where adversaries manipulate pseudoranges to deceive UAV receivers.
Traditional distributional shift detection techniques often require accumulating a threshold number of samples, causing delays that impede rapid detection and timely response.
This study explores a Bayesian online change point detection (BOCPD) approach that monitors temporal shifts in value estimates from a reinforcement learning (RL) critic network to detect subtle behavioural deviations in UAV navigation.
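The cited work uses Bayesian online change point detection (BOCPD), which maintains a run-length posterior; as a lighter illustration of the same idea, the sketch below runs a two-sided CUSUM detector over a stream of critic value estimates. The drift and threshold parameters are made up for the demo.

```python
# Two-sided CUSUM on a stream of value estimates: declares a change point
# when the cumulative deviation from the nominal mean exceeds a threshold.

def cusum_change_point(values, mean0, drift=0.5, threshold=4.0):
    """Return the index at which a shift away from mean0 is declared,
    or None if the stream stays consistent with mean0."""
    g_pos = g_neg = 0.0
    for t, v in enumerate(values):
        g_pos = max(0.0, g_pos + (v - mean0) - drift)
        g_neg = max(0.0, g_neg - (v - mean0) - drift)
        if g_pos > threshold or g_neg > threshold:
            return t
    return None
```

Unlike window-based distributional tests, this fires after a handful of post-shift samples, which is the low-latency property the abstract emphasizes.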
arXiv Detail & Related papers (2025-07-15T10:27:27Z)
- Detecting Backdoor Attacks via Similarity in Semantic Communication Systems [3.565151496245487]
This work proposes a defense mechanism that leverages semantic similarity to detect backdoor attacks.
By analyzing deviations in semantic feature space and establishing a threshold-based detection framework, the proposed approach effectively identifies poisoned samples.
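A minimal sketch of threshold-based detection in a feature space, in the spirit of the entry above: samples whose cosine similarity to the centroid of known-clean features falls below a threshold are flagged as poisoned. The feature representation and threshold value are placeholder assumptions, not the paper's actual model.

```python
# Flag samples whose semantic features deviate from the clean centroid.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def centroid(features):
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

def flag_poisoned(clean_features, candidates, threshold=0.9):
    """Return one boolean per candidate: True means flagged as poisoned."""
    c = centroid(clean_features)
    return [cosine(f, c) < threshold for f in candidates]
```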
arXiv Detail & Related papers (2025-02-06T02:22:36Z)
- Experimental Validation of Sensor Fusion-based GNSS Spoofing Attack Detection Framework for Autonomous Vehicles [5.624009710240032]
We present a sensor fusion-based spoofing attack detection framework for Autonomous Vehicles.
Experiments are conducted in Tuscaloosa, AL, mimicking urban road structures.
Results demonstrate the framework's ability to detect various sophisticated spoofing attacks, even including slow drifting attacks.
arXiv Detail & Related papers (2024-01-02T17:30:46Z)
- Histogram Layer Time Delay Neural Networks for Passive Sonar Classification [58.720142291102135]
A novel method combines a time delay neural network and histogram layer to incorporate statistical contexts for improved feature learning and underwater acoustic target classification.
The proposed method outperforms the baseline model, demonstrating the utility in incorporating statistical contexts for passive sonar target recognition.
arXiv Detail & Related papers (2023-07-25T19:47:26Z)
- Spoofing Attack Detection in the Physical Layer with Commutative Neural Networks [21.6399273864521]
In a spoofing attack, an attacker impersonates a legitimate user to access or tamper with data intended for or produced by the legitimate user.
Existing schemes rely on long-term estimates, which makes it difficult to distinguish spoofing from movement of a legitimate user.
This limitation is here addressed by means of a deep neural network that implicitly learns the distribution of pairs of short-term RSS vector estimates.
arXiv Detail & Related papers (2022-11-08T14:20:58Z)
- Context-Preserving Instance-Level Augmentation and Deformable Convolution Networks for SAR Ship Detection [50.53262868498824]
Shape deformation of targets in SAR image due to random orientation and partial information loss is an essential challenge in SAR ship detection.
We propose a data augmentation method to train a deep network that is robust to partial information loss within the targets.
arXiv Detail & Related papers (2022-02-14T07:01:01Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Task-Sensitive Concept Drift Detector with Metric Learning [7.706795195017394]
We propose a novel task-sensitive drift detection framework, which is able to detect drifts without access to true labels during inference.
It is able to detect real drift, where the drift affects the classification performance, while it properly ignores virtual drift.
We evaluate the performance of the proposed framework with a novel metric, which accumulates the standard metrics of detection accuracy, false positive rate and detection delay into one value.
arXiv Detail & Related papers (2021-08-16T09:10:52Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source for future work to compare against.
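The decision rule described above can be sketched as follows: score the original audio and its vocoder re-synthesis with the ASV system, and flag the input as adversarial when the score gap is large. Here `asv_score` and `revocode` are stand-in callables for a real ASV model and neural vocoder, and the threshold is an assumption.

```python
# Score-gap decision rule: genuine audio survives re-synthesis with a
# similar ASV score, while adversarial perturbations are disrupted by it.

def is_adversarial(audio, asv_score, revocode, tau=0.3):
    """Flag audio as adversarial when re-synthesis shifts the ASV score
    by more than tau."""
    gap = abs(asv_score(audio) - asv_score(revocode(audio)))
    return gap > tau
```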
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
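DAAIN itself fits a normalizing flow to the activation distribution; as a rough stand-in that only illustrates the monitoring idea, the sketch below fits an independent Gaussian per activation dimension on in-distribution activations and flags inputs whose log-likelihood falls below a threshold.

```python
import math

# Diagonal-Gaussian density estimator over network activations:
# low log-likelihood under the fitted density marks an anomalous input.

def fit_gaussian(acts):
    """Fit per-dimension mean and variance from in-distribution activations."""
    n = len(acts)
    d = len(acts[0])
    means = [sum(a[i] for a in acts) / n for i in range(d)]
    vars_ = [max(1e-6, sum((a[i] - means[i]) ** 2 for a in acts) / n)
             for i in range(d)]
    return means, vars_

def log_likelihood(x, means, vars_):
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, means, vars_))

def is_anomalous(x, means, vars_, threshold):
    return log_likelihood(x, means, vars_) < threshold
```

A flow-based estimator replaces the diagonal Gaussian in the actual method, but the flag-on-low-density logic is the same.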
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- DNS Covert Channel Detection via Behavioral Analysis: a Machine Learning Approach [0.09176056742068815]
We propose an effective covert channel detection method based on the analysis of DNS network data passively extracted from a network monitoring system.
The proposed solution has been evaluated over a 15-day-long experimental session with the injection of traffic that covers the most relevant exfiltration and tunneling attacks.
arXiv Detail & Related papers (2020-10-04T13:28:28Z)
- Unsupervised Domain Adaptation for Acoustic Scene Classification Using Band-Wise Statistics Matching [69.24460241328521]
Machine learning algorithms can be negatively affected by mismatches between training (source) and test (target) data distributions.
We propose an unsupervised domain adaptation method that consists of aligning the first- and second-order sample statistics of each frequency band of target-domain acoustic scenes to the ones of the source-domain training dataset.
We show that the proposed method outperforms the state-of-the-art unsupervised methods found in the literature in terms of both source- and target-domain classification accuracy.
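A minimal sketch of the band-wise statistics matching described above: each frequency band of the target-domain features is shifted and scaled so that its mean and standard deviation match the source-domain training statistics. The band layout and data are illustrative.

```python
# Align first- and second-order statistics of each frequency band of the
# target domain to those of the source domain.

def band_stats(frames, band):
    vals = [f[band] for f in frames]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, max(var, 1e-12) ** 0.5

def match_statistics(source_frames, target_frames):
    """Return target frames standardized per band, then rescaled to the
    source-domain mean and standard deviation of that band."""
    n_bands = len(source_frames[0])
    adapted = [list(f) for f in target_frames]
    for b in range(n_bands):
        mu_s, sd_s = band_stats(source_frames, b)
        mu_t, sd_t = band_stats(target_frames, b)
        for f in adapted:
            f[b] = (f[b] - mu_t) / sd_t * sd_s + mu_s
    return adapted
```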
arXiv Detail & Related papers (2020-04-30T23:56:05Z)
- Non-Intrusive Detection of Adversarial Deep Learning Attacks via Observer Networks [5.4572790062292125]
Recent studies have shown that deep learning models are vulnerable to crafted adversarial inputs.
We propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors.
We achieve a 99.5% detection accuracy on the MNIST dataset and 97.5% on the CIFAR-10 dataset.
arXiv Detail & Related papers (2020-02-22T21:13:00Z)
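The observer idea in the entry above can be sketched as a vote among binary detectors, each watching one intermediate representation of the main classifier. The detectors here are stand-in scores, not the trained networks from the paper, and the voting rule is an assumption.

```python
# Aggregate the outputs of several binary observer detectors: declare the
# input adversarial when enough observers fire.

def flag_adversarial(detector_scores, threshold=0.5, min_votes=2):
    """detector_scores: one score in [0, 1] per observer network."""
    votes = sum(1 for s in detector_scores if s > threshold)
    return votes >= min_votes
```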
This list is automatically generated from the titles and abstracts of the papers on this site.