Generative Adversarial Network-Driven Detection of Adversarial Tasks in
Mobile Crowdsensing
- URL: http://arxiv.org/abs/2202.07802v1
- Date: Wed, 16 Feb 2022 00:23:25 GMT
- Title: Generative Adversarial Network-Driven Detection of Adversarial Tasks in
Mobile Crowdsensing
- Authors: Zhiyan Chen and Burak Kantarci
- Abstract summary: Mobile crowdsensing systems are vulnerable to various attacks because they are built on non-dedicated, ubiquitous devices.
Previous work suggests that GAN-based attacks are more devastating than empirically designed attack samples.
This paper aims to detect intelligently designed illegitimate sensing service requests by integrating a GAN-based model.
- Score: 5.675436513661266
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile Crowdsensing (MCS) systems are vulnerable to various attacks as they
build on non-dedicated and ubiquitous devices. Machine learning (ML)-based
approaches are widely investigated for building attack detection systems and
ensuring the security of MCS systems. However, adversaries that aim to clog the
sensing front-end and the MCS back-end leverage intelligent techniques, which
makes it challenging for MCS platforms and service providers to develop
appropriate detection frameworks against these attacks. Generative Adversarial
Networks (GANs) have been applied to generate synthetic samples that are
extremely similar to real ones, deceiving classifiers such that the synthetic
samples are indistinguishable from the originals. Previous work suggests that
GAN-based attacks are more devastating than empirically designed attack
samples and result in a low detection rate at the MCS platform. With this in
mind, this paper aims to detect intelligently designed illegitimate sensing
service requests by integrating a GAN-based model. To this end, we propose a
two-level cascading classifier that combines the GAN discriminator with a
binary classifier to prevent adversarial fake tasks. Through simulations, we
compare our results to a single-level binary classifier, and the numerical
results show that the proposed approach raises the Adversarial Attack Detection Rate
(AADR) from $0\%$ to $97.5\%$ for KNN/NB and from $45.9\%$ to $100\%$ for Decision
Tree. Meanwhile, with the two-level classifier, the Original Attack Detection Rate
(OADR) improves for all three binary classifiers, e.g., for NB
from $26.1\%$ to $61.5\%$.
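As a concrete illustration of the two-level cascade, here is a minimal Python sketch: level 1 uses the GAN discriminator to screen out GAN-crafted fake tasks, and level 2 passes surviving tasks to an ordinary binary classifier (e.g., KNN, NB, or Decision Tree). The function names, the 0.5 threshold, and the score convention are illustrative assumptions, not the paper's exact configuration.

```python
def cascade_predict(task_features, discriminator, binary_clf, threshold=0.5):
    """Return 1 if the sensing task is flagged as an attack, else 0."""
    realness = discriminator(task_features)   # in [0, 1]; higher = more real
    if realness < threshold:
        return 1                              # level 1: likely GAN-generated fake task
    # level 2: conventional classifier catches empirically crafted attacks
    return int(binary_clf.predict([task_features])[0])
```

In this sketch, `discriminator` stands in for a trained GAN discriminator that scores how "real" a task looks, and `binary_clf` for any fitted scikit-learn-style classifier.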
Related papers
- LLM-based Continuous Intrusion Detection Framework for Next-Gen Networks [0.7100520098029439]
The framework employs a transformer encoder architecture, which captures hidden patterns in a bidirectional manner to differentiate between malicious and legitimate traffic.
The system incrementally identifies unknown attack types by leveraging a Gaussian Mixture Model (GMM) to cluster features derived from high-dimensional BERT embeddings.
Even after integrating additional unknown attack clusters, the framework continues to perform at a high level, achieving 95.6% in both classification accuracy and recall.
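A minimal sketch of the GMM clustering step described above, with random vectors standing in for the high-dimensional BERT embeddings; the component count and the use of per-sample likelihood as a novelty signal are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 768))    # placeholder for BERT embeddings of traffic

gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
cluster_ids = gmm.fit_predict(embeddings)   # candidate traffic families
scores = gmm.score_samples(embeddings)      # low likelihood -> possible unknown attack type
```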
arXiv Detail & Related papers (2024-11-04T18:12:14Z)
- Comprehensive Botnet Detection by Mitigating Adversarial Attacks, Navigating the Subtleties of Perturbation Distances and Fortifying Predictions with Conformal Layers [1.6001193161043425]
Botnets are computer networks controlled by malicious actors that present significant cybersecurity challenges.
This research addresses the sophisticated adversarial manipulations by which attackers aim to undermine machine learning-based botnet detection systems.
We introduce a flow-based detection approach, leveraging machine learning and deep learning algorithms trained on the ISCX and ISOT datasets.
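A minimal sketch of a flow-based detector in the spirit of this summary: a classifier over per-flow statistics. The synthetic features stand in for fields extracted from the ISCX/ISOT traces (duration, packet and byte counts, etc.), and the model choice is an assumption, not necessarily the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))      # e.g., duration, fwd/bwd packets, fwd/bwd bytes, rate
y = rng.integers(0, 2, size=1000)   # 1 = botnet flow, 0 = benign (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```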
arXiv Detail & Related papers (2024-09-01T08:53:21Z)
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a $>99\%$ detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
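A minimal sketch of the detection idea behind this approach: query-based attacks probe a model with many near-duplicate images, so consecutive queries from an attacker yield unusually similar embeddings. The encoder is assumed to be the tuned CLIP image encoder; the threshold is an illustrative assumption.

```python
import numpy as np

def is_attack_sequence(query_embeddings, threshold=0.95):
    """Flag a query stream whose successive embeddings are near-duplicates."""
    e = np.asarray(query_embeddings, dtype=float)
    e = e / np.linalg.norm(e, axis=1, keepdims=True)   # unit-normalize rows
    sims = np.sum(e[:-1] * e[1:], axis=1)              # cosine sim of neighboring queries
    return bool(np.any(sims > threshold))
```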
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z)
- Audio Anti-spoofing Using a Simple Attention Module and Joint Optimization Based on Additive Angular Margin Loss and Meta-learning [43.519717601587864]
This study introduces a simple attention module to infer 3-dim attention weights for the feature map in a convolutional layer.
We propose a joint optimization approach based on the weighted additive angular margin loss for binary classification.
Our proposed approach delivers a competitive result with a pooled EER of 0.99% and min t-DCF of 0.0289.
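A minimal sketch of an additive angular margin (AAM) loss for binary classification, as referenced above: logits are cosines between an embedding and class weight vectors, and a margin m is added to the angle of the true class. The scale and margin values are illustrative assumptions.

```python
import numpy as np

def aam_loss(embedding, weights, label, s=32.0, m=0.2):
    """ArcFace-style loss: embedding (d,), weights (n_classes, d), label int."""
    e = embedding / np.linalg.norm(embedding)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = w @ e                                    # cosine to each class center
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    logits = s * cos
    logits[label] = s * np.cos(theta[label] + m)   # penalize the target class angle
    # stable log-softmax, then negative log-likelihood of the true class
    logp = logits - logits.max() - np.log(np.sum(np.exp(logits - logits.max())))
    return -logp[label]
```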
arXiv Detail & Related papers (2022-11-17T21:25:29Z)
- Mixture GAN For Modulation Classification Resiliency Against Adversarial Attacks [55.92475932732775]
We propose a novel generative adversarial network (GAN)-based countermeasure approach.
The GAN-based defense aims to eliminate adversarial examples before they are fed to the DNN-based classifier.
Simulation results show the effectiveness of the proposed defense GAN, which enhances the accuracy of the DNN-based AMC under adversarial attacks to approximately 81%.
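A minimal sketch of the defense pipeline this summary describes: the received (possibly perturbed) signal is first passed through a GAN that reconstructs a clean version, and only the reconstruction reaches the modulation classifier. Both `defense_gan` and `amc_model` are illustrative stand-ins for trained models.

```python
def classify_with_defense(signal, defense_gan, amc_model):
    """Strip adversarial perturbations before automatic modulation classification."""
    cleaned = defense_gan(signal)   # project the signal onto the learned clean manifold
    return amc_model(cleaned)       # DNN-based AMC sees only the reconstruction
```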
arXiv Detail & Related papers (2022-05-29T22:30:32Z)
- Mitigating Closed-model Adversarial Examples with Bayesian Neural Modeling for Enhanced End-to-End Speech Recognition [18.83748866242237]
We focus on a rigorous and empirical "closed-model adversarial robustness" setting.
We propose an advanced Bayesian neural network (BNN) based adversarial detector.
We improve the detection rate by +2.77 to +5.42% (relative +3.03 to +6.26%) and reduce the word error rate by 5.02 to 7.47% on the LibriSpeech datasets.
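A minimal sketch of an uncertainty-based detector in the spirit of the BNN approach above: adversarial inputs tend to produce higher predictive variance across stochastic forward passes. The `model` callable with a `sample=True` flag (e.g., Monte Carlo dropout as a common BNN approximation) and the threshold are illustrative assumptions.

```python
import numpy as np

def is_adversarial(x, model, n_samples=20, threshold=0.1):
    """Flag inputs whose predictions are unstable under stochastic inference."""
    probs = np.stack([model(x, sample=True) for _ in range(n_samples)])
    uncertainty = probs.var(axis=0).mean()   # mean predictive variance over classes
    return uncertainty > threshold
```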
arXiv Detail & Related papers (2022-02-17T09:17:58Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make widely-used models collapse but also achieve good visual quality.
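A minimal sketch of the generator-style attack described above: an autoencoder maps an input image to a bounded perturbation, which is added back to the image. `ssae` stands in for the trained saliency-based autoencoder, and the L-infinity budget `eps` is an illustrative assumption.

```python
import numpy as np

def generate_adversarial(image, ssae, eps=8 / 255):
    """Produce an adversarial example from a clean image in [0, 1]."""
    perturbation = ssae(image)                       # raw perturbation map from the AE
    perturbation = np.clip(perturbation, -eps, eps)  # keep the change imperceptible
    return np.clip(image + perturbation, 0.0, 1.0)   # stay in the valid pixel range
```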
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Detection of Adversarial Supports in Few-shot Classifiers Using Feature Preserving Autoencoders and Self-Similarity [89.26308254637702]
We propose a detection strategy to highlight adversarial support sets.
We make use of feature-preserving autoencoder filtering and the concept of self-similarity of a support set to perform this detection.
Our method is attack-agnostic and, to the best of our knowledge, the first to explore detection for few-shot classifiers.
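A minimal sketch of the self-similarity check this summary refers to: clean support examples of one class should yield mutually similar features, so an unusually low average pairwise cosine similarity flags a poisoned support set. The feature extractor and the threshold are illustrative assumptions.

```python
import numpy as np

def support_self_similarity(features):
    """Mean off-diagonal cosine similarity among a class's support features."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T                            # pairwise cosine matrix, diagonal = 1
    n = len(f)
    return (sim.sum() - n) / (n * (n - 1))   # average over off-diagonal entries

def is_adversarial_support(features, threshold=0.5):
    return support_self_similarity(features) < threshold
```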
arXiv Detail & Related papers (2020-12-09T14:13:41Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
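A minimal sketch of the self-supervised perturbation idea above: instead of a label-based loss, the attack step maximizes distortion of features from a self-supervised encoder, so training against such perturbations needs no labels. `encoder` and its gradient helper `feature_grad` are illustrative stand-ins; the budget, step size, and step count are assumptions.

```python
import numpy as np

def self_supervised_perturb(x, encoder, feature_grad, eps=8 / 255, steps=10):
    """Iteratively maximize feature distortion of x without using labels."""
    x_adv = x.copy()
    for _ in range(steps):
        g = feature_grad(x_adv, x, encoder)            # grad of ||f(x_adv) - f(x)||^2
        x_adv = x_adv + (eps / steps) * np.sign(g)     # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)       # stay within the L-inf budget
    return np.clip(x_adv, 0.0, 1.0)
```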
arXiv Detail & Related papers (2020-06-08T20:42:39Z)