Generative Adversarial Network-Driven Detection of Adversarial Tasks in
Mobile Crowdsensing
- URL: http://arxiv.org/abs/2202.07802v1
- Date: Wed, 16 Feb 2022 00:23:25 GMT
- Title: Generative Adversarial Network-Driven Detection of Adversarial Tasks in
Mobile Crowdsensing
- Authors: Zhiyan Chen and Burak Kantarci
- Abstract summary: Crowdsensing systems are vulnerable to various attacks because they are built on non-dedicated, ubiquitous devices.
Previous works suggest that GAN-based attacks are more devastating than empirically designed attack samples.
This paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model.
- Score: 5.675436513661266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile Crowdsensing (MCS) systems are vulnerable to various attacks
because they are built on non-dedicated, ubiquitous devices. Machine learning
(ML)-based approaches are widely investigated to build attack detection systems
and secure MCS systems. However, adversaries that aim to clog the sensing
front-end and MCS back-end leverage intelligent techniques, which makes it
challenging for MCS platform and service providers to develop appropriate
detection frameworks against these attacks. Generative Adversarial Networks
(GANs) have been applied to generate synthetic samples that are extremely
similar to real ones, deceiving classifiers so that the synthetic samples are
indistinguishable from the originals. Previous works suggest that GAN-based
attacks are more devastating than empirically designed attack samples and
result in a low detection rate at the MCS platform. With this in mind, this
paper aims to detect intelligently designed illegitimate sensing service
requests by integrating a GAN-based model. To this end, we propose a two-level
cascading classifier that combines the GAN discriminator with a binary
classifier to prevent adversarial fake tasks. Through simulations, we compare
our results to a single-level binary classifier, and the numerical results show
that the proposed approach raises the Adversarial Attack Detection Rate (AADR)
from $0\%$ to $97.5\%$ for KNN/NB and from $45.9\%$ to $100\%$ for Decision
Tree. Meanwhile, with the two-level classifier, the Original Attack Detection
Rate (OADR) improves for all three binary classifiers, e.g., NB from $26.1\%$
to $61.5\%$.
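The two-level cascade described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the logistic-regression discriminator stands in for the trained GAN discriminator, the KNN model stands in for the binary classifier, and the Gaussian task features are purely synthetic.

```python
# Sketch of a two-level cascading detector for fake sensing tasks.
# Level 1: a discriminator screens out GAN-generated (adversarial) tasks.
# Level 2: a binary classifier separates legitimate tasks from
#          empirically crafted attacks among the tasks that pass level 1.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Illustrative task features: legitimate, crafted attack, GAN-generated fake.
legit = rng.normal(0.0, 1.0, size=(200, 4))
crafted = rng.normal(2.5, 1.0, size=(100, 4))
gan_fake = rng.normal(-2.5, 1.0, size=(100, 4))

# Level 1: discriminator trained to flag GAN-generated samples.
X1 = np.vstack([legit, crafted, gan_fake])
y1 = np.array([0] * 300 + [1] * 100)  # 1 = GAN fake
disc = LogisticRegression().fit(X1, y1)

# Level 2: binary attack classifier trained on non-GAN data only.
X2 = np.vstack([legit, crafted])
y2 = np.array([0] * 200 + [1] * 100)  # 1 = crafted attack
clf = KNeighborsClassifier().fit(X2, y2)

def detect(task):
    """Return True if the sensing task is rejected as adversarial or fake."""
    task = np.asarray(task, dtype=float).reshape(1, -1)
    if disc.predict(task)[0] == 1:    # level 1: GAN-generated fake
        return True
    return clf.predict(task)[0] == 1  # level 2: crafted attack
```

The cascade mirrors the paper's intuition: GAN-generated requests that a single binary classifier misses (the $0\%$ AADR case) are intercepted by the discriminator first, so the second-level classifier only has to handle conventional attacks.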
Related papers
- Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z)
- Attacking Important Pixels for Anchor-free Detectors [47.524554948433995]
Existing adversarial attacks on object detection focus on attacking anchor-based detectors.
We propose the first adversarial attack dedicated to anchor-free detectors.
Our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
arXiv Detail & Related papers (2023-01-26T23:03:03Z)
- Audio Anti-spoofing Using a Simple Attention Module and Joint Optimization Based on Additive Angular Margin Loss and Meta-learning [43.519717601587864]
This study introduces a simple attention module to infer 3-dim attention weights for the feature map in a convolutional layer.
We propose a joint optimization approach based on the weighted additive angular margin loss for binary classification.
Our proposed approach delivers a competitive result with a pooled EER of 0.99% and min t-DCF of 0.0289.
arXiv Detail & Related papers (2022-11-17T21:25:29Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based defense aims to eliminate adversarial attack examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition [18.83748866242237]
We focus on a rigorous and empirical "closed-model adversarial robustness" setting.
We propose an advanced Bayesian neural network (BNN) based adversarial detector.
We improve detection rate by +2.77 to +5.42% (relative +3.03 to +6.26%) and reduce the word error rate by 5.02 to 7.47% on LibriSpeech datasets.
arXiv Detail & Related papers (2022-02-17T09:17:58Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot Classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Selective and Features based Adversarial Example Detection [12.443388374869745]
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs).
We propose a novel unsupervised detection mechanism that uses the selective prediction, processing model layers outputs, and knowledge transfer concepts in a multi-task learning setting.
Experimental results show that the proposed approach achieves comparable results to state-of-the-art methods against tested attacks in the white-box scenario and better results in black-box and gray-box scenarios.
arXiv Detail & Related papers (2021-03-09T11:06:15Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature preserving autoencoder filtering and also the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and also the first to explore detection for few-shot classifiers to the best of our knowledge.
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Challenging the adversarial robustness of DNNs based on error-correcting output codes [33.46319608673487]
ECOC-based networks can be attacked quite easily by introducing a small adversarial perturbation.
Adversarial examples can be generated in such a way as to achieve high probabilities for the predicted target class.
arXiv Detail & Related papers (2020-03-26T12:14:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.