DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards
Secure Industrial Internet of Things Analytics
- URL: http://arxiv.org/abs/2301.09740v1
- Date: Mon, 23 Jan 2023 22:10:40 GMT
- Title: DODEM: DOuble DEfense Mechanism Against Adversarial Attacks Towards
Secure Industrial Internet of Things Analytics
- Authors: Onat Gungor, Tajana Rosing, Baris Aksanli
- Abstract summary: We propose a double defense mechanism to detect and mitigate adversarial attacks in I-IoT environments.
We first detect if there is an adversarial attack on a given sample using novelty detection algorithms.
If there is an attack, adversarial retraining provides a more robust model, while we apply standard training for regular samples.
- Score: 8.697883716452385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Industrial Internet of Things (I-IoT) is a collaboration of devices, sensors,
and networking equipment to monitor and collect data from industrial
operations. Machine learning (ML) methods use this data to make high-level
decisions with minimal human intervention. Data-driven predictive maintenance
(PDM) is a crucial ML-based I-IoT application to find an optimal maintenance
schedule for industrial assets. The performance of these ML methods can be
seriously threatened by adversarial attacks, in which an adversary crafts
perturbed data and sends it to the ML model to degrade its prediction
performance. The models should remain robust against these attacks, where
robustness is measured by how much perturbation of the input data affects
model performance. Hence, there is a need for effective defense mechanisms that
can protect these models against adversarial attacks. In this work, we propose
a double defense mechanism to detect and mitigate adversarial attacks in I-IoT
environments. We first detect if there is an adversarial attack on a given
sample using novelty detection algorithms. Then, based on the outcome of our
algorithm, marking an instance as attack or normal, we select adversarial
retraining or standard training to provide a secondary defense layer. If there
is an attack, adversarial retraining provides a more robust model, while we
apply standard training for regular samples. Since we may not know if an attack
will take place, our adaptive mechanism allows us to consider irregular changes
in data. The results show that our double defense strategy is highly effective:
it improves model robustness by up to 64.6% and 52% compared to standard
training and adversarial retraining, respectively.
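
A minimal sketch of the two-stage pipeline the abstract describes, assuming an Isolation Forest as the novelty detector, a random-forest regressor as the predictive-maintenance model, and simple sign-noise perturbations standing in for a real attack generator; the paper's actual detector, model, and attack configuration may differ.

```python
# Illustrative sketch of a two-stage "double defense" pipeline (assumptions:
# Isolation Forest as the novelty detector and random sign-noise standing in
# for the adversarial perturbations used in retraining).
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestRegressor

def fit_detector(X_train):
    """Stage 1: fit a novelty detector on clean training data."""
    det = IsolationForest(contamination=0.05, random_state=0)
    det.fit(X_train)
    return det

def adversarial_retrain(model, X, y, eps=0.05):
    """Stage 2a: retrain on clean plus perturbed copies of the data."""
    X_adv = X + eps * np.sign(np.random.randn(*X.shape))
    model.fit(np.vstack([X, X_adv]), np.concatenate([y, y]))
    return model

def double_defense(detector, model, X_train, y_train, x_new):
    """Route an incoming sample: flagged samples trigger adversarial
    retraining, regular samples keep standard training."""
    if detector.predict(x_new.reshape(1, -1))[0] == -1:  # -1 => novelty/attack
        model = adversarial_retrain(model, X_train, y_train)
    else:
        model.fit(X_train, y_train)                       # standard training
    return model.predict(x_new.reshape(1, -1))

# toy usage on synthetic sensor-style data
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 10)), rng.normal(size=200)
det = fit_detector(X_tr)
model = RandomForestRegressor(n_estimators=50, random_state=0)
print(double_defense(det, model, X_tr, y_tr, X_tr[0] + 0.5))
```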
Related papers
- Adversarial Attacks and Defenses in Multivariate Time-Series Forecasting for Smart and Connected Infrastructures [0.9217021281095907]
We investigate the impact of adversarial attacks on time-series forecasting.
We employ untargeted white-box attacks to poison the inputs to the training process, effectively misleading the model.
Having demonstrated the feasibility of these attacks, we develop robust models through adversarial training and model hardening.
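
The summary above does not name the specific attack; the sketch below shows a generic FGSM-style untargeted white-box perturbation of time-series inputs in PyTorch, as one plausible instance of the attack family described (the forecaster architecture and epsilon are assumptions).

```python
# Generic untargeted white-box (FGSM-style) perturbation of time-series inputs;
# an illustrative assumption about the attack family, not the paper's exact attack.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, eps=0.1):
    """Craft an untargeted perturbation that increases the forecasting loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.MSELoss()(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# toy forecaster: predict the next value from a window of 24 past values
model = nn.Sequential(nn.Flatten(), nn.Linear(24, 16), nn.ReLU(), nn.Linear(16, 1))
x = torch.randn(32, 24, 1)           # batch of 32 input windows
y = torch.randn(32, 1)
x_poisoned = fgsm_perturb(model, x, y)
print((x_poisoned - x).abs().max())  # perturbation bounded by eps
```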
arXiv Detail & Related papers (2024-08-27T08:44:31Z)
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
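
Based only on the summary (model updates transformed into the frequency domain before aggregation), here is a heavily simplified sketch: DCT fingerprints of flattened client updates, with the clients farthest from the median fingerprint dropped before averaging. FreqFed's actual filtering and clustering are more sophisticated; the function name and thresholds are assumptions.

```python
# Minimal sketch of frequency-domain aggregation of federated model updates
# (assumptions: DCT fingerprints, low-frequency components only, and
# median-distance filtering; FreqFed's real pipeline differs).
import numpy as np
from scipy.fft import dct

def freq_filter_aggregate(updates, keep=32, n_keep_clients=None):
    """updates: list of 1-D numpy arrays (flattened client model updates)."""
    U = np.stack(updates)                       # (clients, params)
    F = dct(U, axis=1, norm="ortho")[:, :keep]  # low-frequency fingerprints
    med = np.median(F, axis=0)
    dist = np.linalg.norm(F - med, axis=1)      # distance to the median fingerprint
    n = n_keep_clients or max(1, len(updates) // 2)
    keep_idx = np.argsort(dist)[:n]             # drop the most anomalous clients
    return U[keep_idx].mean(axis=0)             # aggregate the remaining updates

# toy usage: 8 benign clients plus 2 poisoned clients with scaled updates
rng = np.random.default_rng(1)
benign = [rng.normal(size=1000) for _ in range(8)]
poisoned = [10 * rng.normal(size=1000) for _ in range(2)]
print(freq_filter_aggregate(benign + poisoned).shape)
```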
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first that is robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
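
As a rough illustration of training toward a "more achievable learning target", the sketch below descends on the loss while it is above a target value alpha and ascends once it falls below, which discourages memorization of individual samples; this rule and the value of alpha are assumptions, not the paper's exact RelaxLoss schedule.

```python
# Minimal sketch of training toward a relaxed loss target alpha: descend while
# the batch loss is above alpha, ascend once it drops below. The target value
# and this simple rule are illustrative assumptions.
import torch
import torch.nn as nn

def relaxed_step(model, optimizer, x, y, alpha=0.5):
    optimizer.zero_grad()
    loss = nn.CrossEntropyLoss()(model(x), y)
    # flip the gradient sign once the loss has reached the relaxed target
    (loss if loss.item() > alpha else -loss).backward()
    optimizer.step()
    return loss.item()

# toy usage
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(128, 20), torch.randint(0, 5, (128,))
for _ in range(10):
    print(relaxed_step(model, opt, x, y))
```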
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Machine Learning Security against Data Poisoning: Are We There Yet? [23.809841593870757]
This article reviews data poisoning attacks that compromise the training data used to learn machine learning models.
We discuss how to mitigate these attacks using basic security principles, or by deploying ML-oriented defensive mechanisms.
arXiv Detail & Related papers (2022-04-12T17:52:09Z)
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines [64.62510681492994]
Generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations.
We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset.
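
One common way a generative model can be used defensively is to project an incoming sample back toward the learned data manifold before classification; the sketch below does this with scikit-learn's BernoulliRBM and a few Gibbs steps, which is an assumption about the setup rather than the paper's exact classical or quantum-enhanced Boltzmann-machine pipeline.

```python
# Sketch of using a generative model defensively: fit an RBM on clean data and
# run a few Gibbs steps on each incoming sample, nudging perturbed inputs back
# toward the learned data manifold. This setup is an illustrative assumption.
import numpy as np
from sklearn.neural_network import BernoulliRBM

def purify(rbm, x, n_steps=10):
    v = x.copy()
    for _ in range(n_steps):
        v = rbm.gibbs(v)          # one block-Gibbs step through the hidden layer
    return v.astype(float)

# toy usage with binarized random "images"
rng = np.random.default_rng(0)
X_clean = (rng.random((500, 64)) > 0.5).astype(float)
rbm = BernoulliRBM(n_components=32, n_iter=10, random_state=0).fit(X_clean)
x_perturbed = np.clip(X_clean[:1] + 0.3 * rng.normal(size=(1, 64)), 0, 1)
print(purify(rbm, x_perturbed).shape)
```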
arXiv Detail & Related papers (2020-12-21T19:00:03Z)
- Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack [35.0689225703137]
A machine learning-based security detection model is susceptible to adversarial evasion attacks.
We propose an approach called Omni that creates an ensemble of "unexpected models".
In studies with five types of adversarial evasion attacks, we show Omni is a promising approach as a defense strategy.
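
Reading "unexpected models" as models whose hyperparameters are randomized away from a publicly known baseline, the sketch below builds such an ensemble with majority voting; the hyperparameter ranges, baseline, and voting rule are illustrative assumptions.

```python
# Minimal sketch of an ensemble of "unexpected models": classifiers whose
# hyperparameters are randomly drawn away from a known baseline, so an attacker
# optimizing evasion against the baseline misses the deployed models.
# Hyperparameter ranges and the voting rule are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_unexpected_ensemble(X, y, n_models=5, seed=0):
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_models):
        clf = RandomForestClassifier(
            n_estimators=int(rng.integers(50, 300)),
            max_depth=int(rng.integers(3, 20)),
            max_features=str(rng.choice(["sqrt", "log2"])),
            random_state=int(rng.integers(0, 10_000)),
        )
        ensemble.append(clf.fit(X, y))
    return ensemble

def ensemble_predict(ensemble, X):
    votes = np.stack([clf.predict(X) for clf in ensemble])
    # majority vote across the unexpected models
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# toy usage
X = np.random.randn(300, 10)
y = (X[:, 0] > 0).astype(int)
ens = build_unexpected_ensemble(X, y)
print(ensemble_predict(ens, X[:5]))
```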
arXiv Detail & Related papers (2020-11-23T20:02:40Z)
- Improving Robustness to Model Inversion Attacks via Mutual Information Regularization [12.079281416410227]
This paper studies defense mechanisms against model inversion (MI) attacks.
MI is a type of privacy attack that aims to infer information about the training data distribution given access to a target machine learning model.
We propose the Mutual Information Regularization based Defense (MID) against MI attacks.
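
A common variational surrogate for penalizing the mutual information between inputs and the model's outputs is an information-bottleneck-style KL term on a stochastic representation; the sketch below uses that surrogate and should not be read as the paper's exact MID estimator.

```python
# Sketch of a mutual-information-style regularizer: an information-bottleneck
# KL penalty on a stochastic representation, a common variational surrogate for
# limiting how much the prediction reveals about the input. This is an
# assumption about the general technique, not the paper's exact MID estimator.
import torch
import torch.nn as nn

class IBClassifier(nn.Module):
    def __init__(self, d_in=20, d_z=8, n_classes=5):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs mean and log-variance
        self.head = nn.Linear(d_z, n_classes)
        self.d_z = d_z

    def forward(self, x):
        mu, logvar = self.enc(x).split(self.d_z, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        # KL(q(z|x) || N(0, I)) upper-bounds the information kept about x
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1).mean()
        return self.head(z), kl

def mid_style_loss(model, x, y, beta=0.01):
    logits, kl = model(x)
    return nn.CrossEntropyLoss()(logits, y) + beta * kl

# toy usage
model = IBClassifier()
x, y = torch.randn(64, 20), torch.randint(0, 5, (64,))
print(mid_style_loss(model, x, y).item())
```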
arXiv Detail & Related papers (2020-09-11T06:02:44Z)