A White-Box Adversarial Attack Against a Digital Twin
- URL: http://arxiv.org/abs/2210.14018v1
- Date: Tue, 25 Oct 2022 13:41:02 GMT
- Title: A White-Box Adversarial Attack Against a Digital Twin
- Authors: Wilson Patterson, Ivan Fernandez, Subash Neupane, Milan Parmar, Sudip
Mittal, Shahram Rahimi
- Abstract summary: This paper explores the susceptibility of Digital Twin (DT) to adversarial attacks.
We first formulate a DT of a vehicular system using a deep neural network architecture and then utilize it to launch an adversarial attack.
We attack the DT model by perturbing the input to the trained model and show how easily the model can be broken with white-box attacks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent research has shown that Machine Learning/Deep Learning (ML/DL) models
are particularly vulnerable to adversarial perturbations, which are small
changes made to the input data in order to fool a machine learning classifier.
The Digital Twin, which is typically described as consisting of a physical
entity, a virtual counterpart, and the data connections in between, is
increasingly being investigated as a means of improving the performance of
physical entities by leveraging computational techniques, which are enabled by
the virtual counterpart. This paper explores the susceptibility to adversarial
attacks of the Digital Twin (DT), a virtual model designed to accurately reflect
a physical object, when it is built from ML/DL classifiers that operate as Cyber
Physical Systems (CPS). As a proof of concept, we first formulate a DT of a vehicular system
using a deep neural network architecture and then utilize it to launch an
adversarial attack. We attack the DT model by perturbing the input to the
trained model and show how easily the model can be broken with white-box
attacks.
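The abstract does not name the specific white-box method used against the DT model, so the following is only a minimal sketch of the idea it describes: perturbing the input of a trained classifier using the model's own gradients, here as an FGSM-style step against a stand-in PyTorch network. The network, feature/class counts, and epsilon are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): an FGSM-style white-box
# perturbation of the input to a stand-in classifier representing a DT model.
import torch
import torch.nn as nn


class DigitalTwinNet(nn.Module):
    """Hypothetical DNN standing in for the vehicular-system DT classifier."""

    def __init__(self, n_features: int = 16, n_classes: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.05) -> torch.Tensor:
    """White-box FGSM: step the input along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step; clamp to an assumed normalized input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = DigitalTwinNet().eval()
    x = torch.rand(8, 16)              # dummy batch of normalized sensor features
    y = torch.randint(0, 4, (8,))      # dummy class labels
    x_adv = fgsm_attack(model, x, y)
    clean_pred = model(x).argmax(dim=1)
    adv_pred = model(x_adv).argmax(dim=1)
    print("predictions flipped:", (clean_pred != adv_pred).sum().item(), "of", len(y))
```

With a trained DT model and real vehicular telemetry in place of the dummy tensors, the fraction of flipped predictions is the attack success rate, and epsilon trades off how perceptible the perturbation is against how often it breaks the model.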
Related papers
- Downstream Transfer Attack: Adversarial Attacks on Downstream Models with Pre-trained Vision Transformers [95.22517830759193]
This paper studies the transferability of such an adversarial vulnerability from a pre-trained ViT model to downstream tasks.
We show that the proposed Downstream Transfer Attack (DTA) achieves an average attack success rate (ASR) exceeding 90%, surpassing existing methods by a large margin.
arXiv Detail & Related papers (2024-08-03T08:07:03Z) - Black-box Adversarial Transferability: An Empirical Study in Cybersecurity Perspective [0.0]
In adversarial machine learning, malicious users try to fool the deep learning model by inserting adversarial perturbation inputs into the model during its training or testing phase.
We empirically test the black-box adversarial transferability phenomena in cyber attack detection systems.
The results indicate that any deep learning model is highly susceptible to adversarial attacks, even if the attacker does not have access to the internal details of the target model.
arXiv Detail & Related papers (2024-04-15T06:56:28Z) - Twin Auto-Encoder Model for Learning Separable Representation in Cyberattack Detection [21.581155557707632]
We propose a novel model called Twin Auto-Encoder (TAE) for cyberattack detection.
Experiment results show the superior accuracy of TAE over state-of-the-art RL models and well-known machine learning algorithms.
arXiv Detail & Related papers (2024-03-22T03:39:40Z) - Unified Physical-Digital Face Attack Detection [66.14645299430157]
Face Recognition (FR) systems can suffer from physical (i.e., print photo) and digital (i.e., DeepFake) attacks.
Previous related work rarely considers both situations at the same time.
We propose a Unified Attack Detection framework based on Vision-Language Models (VLMs).
arXiv Detail & Related papers (2024-01-31T09:38:44Z) - Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas, but deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Evaluating Deep Learning Models and Adversarial Attacks on
Accelerometer-Based Gesture Authentication [6.961253535504979]
We use a deep convolutional generative adversarial network (DC-GAN) to create adversarial samples.
We show that our deep learning model is surprisingly robust to such an attack scenario.
arXiv Detail & Related papers (2021-10-03T00:15:50Z) - Learning to Detect: A Data-driven Approach for Network Intrusion
Detection [17.288512506016612]
We perform a comprehensive study on NSL-KDD, a network traffic dataset, by visualizing patterns and employing different learning-based models to detect cyber attacks.
Unlike previous shallow learning and deep learning models that use the single learning model approach for intrusion detection, we adopt a hierarchy strategy.
We demonstrate the advantage of the unsupervised representation learning model in binary intrusion detection tasks.
arXiv Detail & Related papers (2021-08-18T21:19:26Z) - Practical No-box Adversarial Attacks against DNNs [31.808770437120536]
We investigate no-box adversarial examples, where the attacker can access neither the model information nor the training set, and cannot query the model.
We propose three mechanisms for training with a very small dataset and find that prototypical reconstruction is the most effective.
Our approach significantly diminishes the average prediction accuracy of the system to only 15.40%, which is on par with the attack that transfers adversarial examples from a pre-trained Arcface model.
arXiv Detail & Related papers (2020-12-04T11:10:03Z) - Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer
Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable the adversarial examples produced by a source model are to a target model.
arXiv Detail & Related papers (2020-08-25T15:04:32Z) - DaST: Data-free Substitute Training for Adversarial Attacks [55.76371274622313]
We propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks.
To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models.
Experiments demonstrate the substitute models can achieve competitive performance compared with the baseline models.
arXiv Detail & Related papers (2020-03-28T04:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.