Seeing is Believing: A Federated Learning Based Prototype to Detect Wireless Injection Attacks
- URL: http://arxiv.org/abs/2311.06564v1
- Date: Sat, 11 Nov 2023 13:21:24 GMT
- Title: Seeing is Believing: A Federated Learning Based Prototype to Detect Wireless Injection Attacks
- Authors: Aadil Hussain, Nitheesh Gundapu, Sarang Drugkar, Suraj Kiran, J. Harshan, Ranjitha Prasad
- Abstract summary: Reactive injection attacks are a class of security threats in wireless networks.
We implement secret-key based physical-layer signalling methods at the clients.
We show that robust ML models can be designed at the base-stations.
- Score: 1.8142288667655782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reactive injection attacks are a class of security threats in wireless networks wherein adversaries opportunistically inject spoofing packets in the frequency band of a client, thereby forcing the base-station to deploy impersonation-detection methods. Towards circumventing such threats, we implement secret-key based physical-layer signalling methods at the clients which allow the base-stations to deploy machine learning (ML) models on their in-phase and quadrature samples at the baseband for attack detection. Using ADALM-PLUTO based software defined radios to implement the secret-key based signalling methods, we show that robust ML models can be designed at the base-stations. However, we also point out that, in practice, insufficient availability of training datasets at the base-stations can make these methods ineffective. Thus, we use a federated learning framework in the backhaul network, wherein a group of base-stations that need to protect their clients against reactive injection threats collaborate to refine their ML models while ensuring privacy on their datasets. Using a network of XBee devices to implement the backhaul network, experimental results on our federated learning setup show significant enhancements in detection accuracy, thus presenting wireless security as an excellent use-case for federated learning in 6G networks and beyond.
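The backhaul collaboration described in the abstract is, at its core, federated averaging of locally trained detector weights. A minimal sketch of that aggregation step, assuming each base-station shares only its model parameters and local sample count (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Size-weighted average of model parameters from several base-stations.

    client_weights: list (one entry per client) of lists of np.ndarray layers
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for weights, n in zip(client_weights, client_sizes):
            acc += weights[layer] * (n / total)  # weight by local data size
        averaged.append(acc)
    return averaged

# Two base-stations with a single-layer toy model
w_a = [np.array([1.0, 3.0])]
w_b = [np.array([3.0, 1.0])]
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
```

Only the weights and sample counts cross the backhaul; the raw in-phase/quadrature training samples stay at each base-station.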
Related papers
- False Data Injection Attack Detection in Edge-based Smart Metering Networks with Federated Learning [1.2026018242953707]
This paper proposes a new privacy-preserving false data injection (FDI) attack detection scheme built on an efficient federated learning framework.
Distributed edge servers located at the network edge run an ML-based FDI attack detection model and share the trained model with the grid operator.
arXiv Detail & Related papers (2024-11-02T17:23:08Z) - Federated Learning for Zero-Day Attack Detection in 5G and Beyond V2X Networks [9.86830550255822]
The reliance of Connected and Automated Vehicles (CAVs) on 5G and Beyond networks (5GB) makes them vulnerable to a growing range of security and privacy attacks.
We propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern.
Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic, while preserving the CAVs' privacy and minimizing the communication overhead.
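The detection principle here is thresholding the reconstruction error of a model trained only on benign traffic. A minimal sketch of the same idea, with a linear PCA reconstruction standing in for the paper's deep auto-encoder (all names and data are illustrative):

```python
import numpy as np

def fit_benign_model(benign, n_components=1):
    """Fit a linear 'auto-encoder' (PCA basis) on benign traffic features."""
    mean = benign.mean(axis=0)
    _, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
    basis = vt[:n_components]          # principal directions = encoder/decoder
    return mean, basis

def reconstruction_error(x, mean, basis):
    z = (x - mean) @ basis.T           # encode
    x_hat = z @ basis + mean           # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Benign samples lie near a 1-D subspace of the 2-D feature space
rng = np.random.default_rng(0)
benign = rng.normal(size=(200, 2)) @ np.array([[1.0, 0.9], [0.0, 0.1]])
mean, basis = fit_benign_model(benign)

# Threshold chosen from benign data only; anything above it is flagged
threshold = np.percentile(reconstruction_error(benign, mean, basis), 99)
attack = np.array([[5.0, -5.0]])       # far off the benign manifold
is_attack = reconstruction_error(attack, mean, basis) > threshold
```

The point of the zero-day framing is the same as in this toy: no attack samples are needed at training time, only a model of what benign traffic looks like.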
arXiv Detail & Related papers (2024-07-03T12:42:31Z) - Physical-Layer Semantic-Aware Network for Zero-Shot Wireless Sensing [74.12670841657038]
Device-free wireless sensing has recently attracted significant interest due to its potential to support a wide range of immersive human-machine interactive applications.
Data heterogeneity in wireless signals and data privacy regulation of distributed sensing have been considered as the major challenges that hinder the wide applications of wireless sensing in large area networking systems.
We propose a novel zero-shot wireless sensing solution that allows models constructed in one or a limited number of locations to be directly transferred to other locations without any labeled data.
arXiv Detail & Related papers (2023-12-08T13:50:30Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
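FreqFed's core idea is to compare model updates by their low-frequency spectral content rather than their raw values. A much-simplified sketch of that filtering step, using a DFT and a median-distance outlier rule in place of the paper's DCT-plus-clustering pipeline (names and thresholds are illustrative):

```python
import numpy as np

def low_freq_signature(update, k=4):
    """Magnitudes of the first k DFT coefficients of a flattened update."""
    spectrum = np.fft.rfft(np.asarray(update, dtype=float))
    return np.abs(spectrum[:k])

def filter_updates(updates, k=4):
    """Keep updates whose low-frequency signature sits near the median one."""
    sigs = np.array([low_freq_signature(u, k) for u in updates])
    median = np.median(sigs, axis=0)
    dists = np.linalg.norm(sigs - median, axis=1)
    cutoff = np.median(dists) + 2 * np.std(dists)  # crude outlier rule
    return [u for u, d in zip(updates, dists) if d <= cutoff]

# Five honest updates plus one scaled-up poisoned update
rng = np.random.default_rng(0)
honest = [np.full(8, 0.1) + rng.normal(scale=0.01, size=8) for _ in range(5)]
kept = filter_updates(honest + [np.full(8, 10.0)])  # poisoned update dropped
```

The intuition the paper exploits is that poisoned updates distort the spectrum of the parameter vector in ways that survive even when attackers try to blend in in parameter space.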
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose a new defense mechanism that focuses on the client-side, called FedDefender, to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - Federated Learning Based Distributed Localization of False Data Injection Attacks on Smart Grids [5.705281336771011]
False data injection attack (FDIA) is a class of attacks that targets smart measurement devices by injecting malicious data.
We propose a federated learning-based scheme combined with a hybrid deep neural network architecture.
We validate the proposed architecture by extensive simulations on the IEEE 57, 118, and 300 bus systems and real electricity load data.
arXiv Detail & Related papers (2023-06-17T20:29:55Z) - An anomaly detection approach for backdoored neural networks: face recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
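The secure aggregation mentioned here typically relies on pairwise additive masks that cancel in the server's sum, so the server learns the aggregate without seeing any individual update. A toy sketch of that cancellation property (the shared-seed function is a hypothetical stand-in for a real pairwise key agreement):

```python
import numpy as np

def mask_update(update, my_id, peer_ids, seed_with):
    """Add pairwise random masks that cancel when all updates are summed.

    seed_with(i, j) returns a seed shared by clients i and j; in a real
    system it would come from a key agreement, here it is a toy stand-in.
    """
    masked = np.asarray(update, dtype=float).copy()
    for peer in peer_ids:
        mask = np.random.default_rng(seed_with(my_id, peer)).normal(size=masked.shape)
        # The lower-id client adds the mask, the higher-id one subtracts it
        masked += mask if my_id < peer else -mask
    return masked

seed_with = lambda i, j: 1000 + min(i, j) * 10 + max(i, j)  # toy shared seed
updates = {1: np.array([1.0, 2.0]), 2: np.array([3.0, 4.0]), 3: np.array([5.0, 6.0])}
masked = [mask_update(u, i, [p for p in updates if p != i], seed_with)
          for i, u in updates.items()]
total = sum(masked)   # pairwise masks cancel: equals the sum of raw updates
```

RoFL's contribution is precisely that this confidentiality makes robustness harder: the server must bound what malicious clients can inject without ever inspecting their individual masked updates.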
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - Safe RAN control: A Symbolic Reinforcement Learning Approach [62.997667081978825]
We present a Symbolic Reinforcement Learning (SRL) based architecture for safety control of Radio Access Network (RAN) applications.
We provide a purely automated procedure in which a user can specify high-level logical safety specifications for a given cellular network topology.
We introduce a user interface (UI) developed to help a user set intent specifications for the system and inspect the difference in agent-proposed actions.
arXiv Detail & Related papers (2021-06-03T16:45:40Z) - Active Fuzzing for Testing and Securing Cyber-Physical Systems [8.228859318969082]
We propose active fuzzing, an automatic approach for finding test suites of packet-level CPS network attacks.
Key to our solution is the use of online active learning, which iteratively updates the models by sampling payloads.
We evaluate the efficacy of active fuzzing by implementing it for a water purification plant testbed, finding it can automatically discover a test suite of flow, pressure, and over/underflow attacks.
arXiv Detail & Related papers (2020-05-28T16:19:50Z) - Multi-stage Jamming Attacks Detection using Deep Learning Combined with Kernelized Support Vector Machine in 5G Cloud Radio Access Networks [17.2528983535773]
This research focuses on deploying a multi-stage machine learning-based intrusion detection (ML-IDS) in 5G C-RAN.
It can detect and classify four types of jamming attacks: constant jamming, random jamming, deceptive jamming, and reactive jamming.
The final classification accuracy of attacks is 94.51% with a 7.84% false negative rate.
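As a reminder of how the quoted figures relate to a confusion matrix, the false-negative rate is FN / (FN + TP), i.e. the fraction of true attacks the detector misses. A minimal illustration (the data is invented for the example):

```python
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Fraction of actual attacks (label 1) that the detector misses."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed attacks
    tp = np.sum((y_true == 1) & (y_pred == 1))  # caught attacks
    return fn / (fn + tp)

# 8 attacks, detector misses 2 of them; 2 benign samples correctly passed
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
rate = false_negative_rate(y_true, y_pred)
```

For intrusion detection the false-negative rate is often the more important figure than overall accuracy, since a missed jamming attack is costlier than a false alarm.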
arXiv Detail & Related papers (2020-04-13T17:21:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.