Investigating the significance of adversarial attacks and their relation
to interpretability for radar-based human activity recognition systems
- URL: http://arxiv.org/abs/2101.10562v1
- Date: Tue, 26 Jan 2021 05:16:16 GMT
- Title: Investigating the significance of adversarial attacks and their relation
to interpretability for radar-based human activity recognition systems
- Authors: Utku Ozbulak, Baptist Vandersmissen, Azarakhsh Jalalvand, Ivo
Couckuyt, Arnout Van Messem, Wesley De Neve
- Abstract summary: We show that radar-based CNNs are susceptible to both white- and black-box adversarial attacks.
We also expose the existence of an extreme adversarial attack case, where it is possible to change the prediction made by the radar-based CNNs by perturbing only the padding of the inputs.
- Score: 2.081492937901262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given their substantial success in addressing a wide range of computer vision
challenges, Convolutional Neural Networks (CNNs) are increasingly being used in
smart home applications, with many of these applications relying on the
automatic recognition of human activities. In this context, low-power radar
devices have recently gained popularity as recording sensors, given that the
use of these devices mitigates a number of privacy concerns, a key
issue when making use of conventional video cameras. Another concern that is
often cited when designing smart home applications is the resilience of these
applications against cyberattacks. It is, for instance, well-known that the
combination of images and CNNs is vulnerable to adversarial examples,
mischievous data points that force machine learning models to generate wrong
classifications at test time. In this paper, we investigate the vulnerability
to adversarial attacks of radar-based CNNs that have been designed to
recognize human gestures. Through
experiments with four unique threat models, we show that radar-based CNNs are
susceptible to both white- and black-box adversarial attacks. We also expose
the existence of an extreme adversarial attack case, where it is possible to
change the prediction made by the radar-based CNNs by only perturbing the
padding of the inputs, without touching the frames where the action itself
occurs. Moreover, we observe that gradient-based attacks do not apply
perturbation randomly, but concentrate it on important features of the input data. We highlight these
important features by making use of Grad-CAM, a popular neural network
interpretability method, thereby showing the connection between adversarial
perturbation and prediction interpretability.
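As a rough illustration of the attack setting described above (not taken from the paper), a white-box gradient-based step such as FGSM can be restricted by a binary mask so that only the zero-padded frames of the input are perturbed, mirroring the extreme padding-only case. The classifier, input layout, and epsilon in the sketch below are assumptions.

```python
# Hypothetical FGSM-style white-box attack in PyTorch; the classifier,
# input layout, and epsilon are illustrative assumptions, not the
# authors' actual setup.
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03, mask=None):
    """x: batch of radar micro-Doppler inputs; mask: optional 0/1 tensor of
    the same shape that confines the perturbation, e.g. to the zero-padded
    frames only (the padding-only extreme case)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    step = eps * x_adv.grad.sign()
    if mask is not None:
        step = step * mask            # perturb only where the mask allows
    return (x_adv + step).detach()
```

Similarly, the Grad-CAM heatmaps used to relate perturbation to prediction interpretability can be obtained with a short hook-based routine; the sketch below is a generic Grad-CAM implementation, not the authors' code, and assumes the last convolutional layer of the radar CNN is supplied by hand.

```python
# Hypothetical Grad-CAM sketch (PyTorch) for a 2D CNN on radar spectrograms.
import torch

def grad_cam(model, x, target_class, conv_layer):
    acts, grads = {}, {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = conv_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, target_class]               # logit of the class of interest
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    w = grads["g"].mean(dim=(2, 3), keepdim=True)   # channel importance weights
    cam = torch.relu((w * acts["a"]).sum(dim=1))    # weighted activation map
    return cam / (cam.max() + 1e-8)                 # normalised heatmap
```

Overlaying such a heatmap on the perturbation returned by the attack above is one way to check whether the perturbation indeed concentrates on the features the network considers important.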
Related papers
- When Side-Channel Attacks Break the Black-Box Property of Embedded
Artificial Intelligence [0.8192907805418583]
Deep neural networks (DNNs) are subject to malicious examples designed to fool the network while remaining undetectable to a human observer.
We propose an architecture-agnostic attack which solves this constraint by extracting the logits.
Our method combines hardware and software attacks, by performing a side-channel attack that exploits electromagnetic leakages.
arXiv Detail & Related papers (2023-11-23T13:41:22Z) - A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - Investigating Human-Identifiable Features Hidden in Adversarial
Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z) - Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z) - RobustSense: Defending Adversarial Attack for Secure Device-Free Human
Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend against common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z) - Real-time Over-the-air Adversarial Perturbations for Digital
Communications using Deep Neural Networks [0.0]
Adversarial perturbations can be used by RF communications systems to avoid reactive jammers and interception systems.
This work attempts to bridge this gap by defining class-specific and sample-independent adversarial perturbations.
We demonstrate the effectiveness of these attacks over-the-air across a physical channel using software-defined radios.
arXiv Detail & Related papers (2022-02-20T14:50:52Z) - Demotivate adversarial defense in remote sensing [0.0]
We study adversarial retraining and adversarial regularization as adversarial defenses for this purpose.
We show through several experiments on public remote sensing datasets that adversarial robustness seems uncorrelated with geographic and over-fitting robustness.
arXiv Detail & Related papers (2021-05-28T15:04:37Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - The Vulnerability of Semantic Segmentation Networks to Adversarial
Attacks in Autonomous Driving: Enhancing Extensive Environment Sensing [25.354929620151367]
This article aims to illuminate the vulnerability aspects of CNNs used for semantic segmentation with respect to adversarial attacks.
We aim to clarify the advantages and disadvantages associated with applying CNNs for environment perception in autonomous driving.
arXiv Detail & Related papers (2021-01-11T14:43:11Z) - Measurement-driven Security Analysis of Imperceptible Impersonation
Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)