FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based
IIoT Applications
- URL: http://arxiv.org/abs/2006.15632v1
- Date: Sun, 28 Jun 2020 15:17:15 GMT
- Title: FDA3: Federated Defense Against Adversarial Attacks for Cloud-Based
IIoT Applications
- Authors: Yunfei Song, Tian Liu, Tongquan Wei, Xiangfeng Wang, Zhe Tao, Mingsong
Chen
- Abstract summary: Adversarial attacks are increasingly emerging to fool Deep Neural Networks (DNNs) used by Industrial IoT (IIoT) applications.
We present an effective federated defense approach named FDA3 that can aggregate defense knowledge against adversarial examples from different sources.
Our proposed cloud-based architecture enables the sharing of defense capabilities against different attacks among IIoT devices.
- Score: 11.178342219720298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Along with the proliferation of Artificial Intelligence (AI) and Internet of
Things (IoT) techniques, various kinds of adversarial attacks are increasingly
emerging to fool Deep Neural Networks (DNNs) used by Industrial IoT (IIoT)
applications. Due to biased training data or vulnerable underlying models,
imperceptible modifications to inputs made by adversarial attacks may result in
devastating consequences. Although existing methods are promising in defending
against such malicious attacks, most of them can only handle a limited set of
existing attack types, which makes the large-scale deployment of IIoT devices a
great challenge. To address this problem, we present an effective federated
defense approach named FDA3 that can aggregate defense knowledge against
adversarial examples from different sources. Inspired by federated learning,
our proposed cloud-based architecture enables the sharing of defense
capabilities against different attacks among IIoT devices. Comprehensive
experimental results show that the DNNs generated by our approach can not only
resist more malicious attacks than existing attack-specific adversarial
training methods but can also protect IIoT applications against new attacks.
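A minimal sketch may help make the idea concrete: each IIoT device performs local adversarial training against the attack it observes, and a cloud server aggregates the resulting weights in a FedAvg-style manner so that every device inherits defense capabilities learned elsewhere. The FGSM attack, the toy model, the data, and all hyperparameters below are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch (not the authors' code) of a federated defense loop in PyTorch.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=0.1):
    """Craft FGSM adversarial examples (a stand-in for any locally observed attack)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def local_adversarial_update(global_model, x, y, epochs=1, lr=0.01):
    """One device's round: adversarial training starting from the shared global model."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        x_adv = fgsm_examples(model, x, y)
        opt.zero_grad()
        # Train on clean and adversarial inputs so accuracy and robustness are both kept.
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        opt.step()
    return model.state_dict()

def federated_average(states):
    """Cloud-side aggregation: average the devices' updated weights."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(dim=0)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    # Two hypothetical devices, each with its own toy local data.
    device_data = [(torch.randn(32, 8), torch.randint(0, 2, (32,))) for _ in range(2)]
    for _ in range(3):  # a few federated rounds
        states = [local_adversarial_update(global_model, x, y) for x, y in device_data]
        global_model.load_state_dict(federated_average(states))
```

In this sketch each device could just as well use a different attack when crafting its adversarial examples; the averaging step is what lets defenses learned against one attack propagate to devices that never observed it.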
Related papers
- Defense Against Prompt Injection Attack by Leveraging Attack Techniques [66.65466992544728]
Large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks.
As LLMs continue to evolve, new vulnerabilities, especially prompt injection attacks, arise.
Recent attack methods leverage LLMs' instruction-following abilities and their inability to distinguish instructions injected into the data content.
arXiv Detail & Related papers (2024-11-01T09:14:21Z) - A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion [0.0]
Our proposal suggests a different approach to the AI Guardian framework.
Instead of including adversarial examples in the training process, we propose training the AI system without them.
This aims to create a system that is inherently resilient to a wider range of attacks.
arXiv Detail & Related papers (2024-05-03T04:08:15Z) - CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
A growing integration of vehicles with external networks has led to a surge in attacks targeting their Controller Area Network (CAN) internal bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z) - Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning [1.6574413179773757]
Adversarial attacks aim to trick ML models into producing faulty predictions and can compromise ML-based Network Intrusion Detection Systems (NIDSs).
Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks.
arXiv Detail & Related papers (2023-06-08T18:32:08Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
The proposed defense, MESAS, is the first that is robust against strong adaptive adversaries, is effective in real-world data scenarios, and incurs an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - IDEA: Invariant Defense for Graph Adversarial Robustness [60.0126873387533]
We propose an Invariant causal DEfense method against adversarial Attacks (IDEA).
We derive node-based and structure-based invariance objectives from an information-theoretic perspective.
Experiments demonstrate that IDEA attains state-of-the-art defense performance under all five attacks on all five datasets.
arXiv Detail & Related papers (2023-05-25T07:16:00Z) - Physical Adversarial Attacks on Deep Neural Networks for Traffic Sign
Recognition: A Feasibility Study [0.0]
We apply different black-box attack methods to generate perturbations that are applied in the physical environment and can be used to fool systems under different environmental conditions.
We show that reliable physical adversarial attacks can be performed with different methods and that it is also possible to reduce the perceptibility of the resulting perturbations.
arXiv Detail & Related papers (2023-02-27T08:10:58Z) - Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial
Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z) - The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Mitigating Advanced Adversarial Attacks with More Advanced Gradient
Obfuscation Techniques [13.972753012322126]
Deep Neural Networks (DNNs) are well known to be vulnerable to Adversarial Examples (AEs).
Recently, advanced gradient-based attack techniques were proposed.
In this paper, we make a steady step towards mitigating those advanced gradient-based attacks.
arXiv Detail & Related papers (2020-05-27T23:42:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.