Poisoning Behavioral-based Worker Selection in Mobile Crowdsensing using Generative Adversarial Networks
- URL: http://arxiv.org/abs/2506.05403v1
- Date: Wed, 04 Jun 2025 04:48:51 GMT
- Title: Poisoning Behavioral-based Worker Selection in Mobile Crowdsensing using Generative Adversarial Networks
- Authors: Ruba Nasser, Ahmed Alagha, Shakti Singh, Rabeb Mizouni, Hadi Otrok, Jamal Bentahar
- Abstract summary: This work proposes an adversarial attack targeting behavioral-based selection models in Mobile Crowdsensing (MCS). The proposed attack leverages Generative Adversarial Networks (GANs) to generate poisoning points that can mislead the models during the training stage without being detected.
- Score: 14.727690033873657
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the widespread adoption of Artificial intelligence (AI), AI-based tools and components are becoming omnipresent in today's solutions. However, these components and tools are posing a significant threat when it comes to adversarial attacks. Mobile Crowdsensing (MCS) is a sensing paradigm that leverages the collective participation of workers and their smart devices to collect data. One of the key challenges faced at the selection stage is ensuring task completion due to workers' varying behavior. AI has been utilized to tackle this challenge by building unique models for each worker to predict their behavior. However, the integration of AI into the system introduces vulnerabilities that can be exploited by malicious insiders to reduce the revenue obtained by victim workers. This work proposes an adversarial attack targeting behavioral-based selection models in MCS. The proposed attack leverages Generative Adversarial Networks (GANs) to generate poisoning points that can mislead the models during the training stage without being detected. This way, the potential damage introduced by GANs on worker selection in MCS can be anticipated. Simulation results using a real-life dataset show the effectiveness of the proposed attack in compromising the victim workers' model and evading detection by an outlier detector, compared to a benchmark. In addition, the impact of the attack on reducing the payment obtained by victim workers is evaluated.
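The core idea — generating poisoning points that sit close enough to the benign data distribution to slip past an outlier detector — can be illustrated without a full GAN. In the minimal sketch below, a Gaussian sampler stands in for the GAN generator and scikit-learn's IsolationForest stands in for the defender's outlier detector; the feature semantics, distributions, and parameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Benign worker-behavior features (hypothetical 2-D example, e.g.
# standardized task-acceptance rate and completion time).
benign = rng.normal(loc=0.0, scale=1.0, size=(500, 2))

# Stand-in for a GAN generator: candidate poisoning points drawn
# near the benign distribution but shifted toward a target region,
# so they bias the victim's behavioral model while looking plausible.
candidates = rng.normal(loc=0.8, scale=0.6, size=(200, 2))

# Outlier detector trained on benign data, mimicking the defense
# the poisoning points must evade.
detector = IsolationForest(random_state=0).fit(benign)

# Keep only candidates the detector labels as inliers (+1);
# these are the "stealthy" poisoning points injected at training time.
stealthy = candidates[detector.predict(candidates) == 1]

print(f"{len(stealthy)}/{len(candidates)} candidates evade the detector")
```

In the actual attack, the generator would be trained adversarially so that this inlier rate is maximized while the points still shift the victim worker's model; the sketch only shows the detector-evasion filtering step.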
Related papers
- Screen Hijack: Visual Poisoning of VLM Agents in Mobile Environments [61.808686396077036]
We present GHOST, the first clean-label backdoor attack specifically designed for mobile agents built upon vision-language models (VLMs). Our method manipulates only the visual inputs of a portion of the training samples without altering their corresponding labels or instructions. We evaluate our method across six real-world Android apps and three VLM architectures adapted for mobile use.
arXiv Detail & Related papers (2025-06-16T08:09:32Z) - Data-Free Model-Related Attacks: Unleashing the Potential of Generative AI [21.815149263785912]
We introduce the use of generative AI for facilitating model-related attacks, including model extraction, membership inference, and model inversion. Our study reveals that adversaries can launch a variety of model-related attacks against both image and text models in a data-free and black-box manner. This research serves as an important early warning to the community about the potential risks associated with generative AI-powered attacks on deep learning models.
arXiv Detail & Related papers (2025-01-28T03:12:57Z) - A Robust Adversary Detection-Deactivation Method for Metaverse-oriented Collaborative Deep Learning [13.131323206843733]
This paper proposes an adversary detection-deactivation method, which can limit and isolate the access of potential malicious participants.
A detailed protection analysis has been conducted on a Multiview CDL case, and results show that the protocol can effectively prevent harmful access through manner analysis.
arXiv Detail & Related papers (2023-10-21T06:45:18Z) - Adversarial training with informed data selection [53.19381941131439]
Adversarial training is the most efficient solution to defend the network against these malicious attacks.
This work proposes a data selection strategy to be applied in the mini-batch training.
The simulation results show that a good compromise can be obtained regarding robustness and standard accuracy.
arXiv Detail & Related papers (2023-01-07T12:09:50Z) - Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z) - AdIoTack: Quantifying and Refining Resilience of Decision Tree Ensemble Inference Models against Adversarial Volumetric Attacks on IoT Networks [1.1172382217477126]
We present AdIoTack, a system that highlights vulnerabilities of decision trees against adversarial attacks.
To assess the model for the worst-case scenario, AdIoTack performs white-box adversarial learning to launch successful volumetric attacks.
We demonstrate how the model detects all non-adversarial volumetric attacks on IoT devices while missing many adversarial ones.
arXiv Detail & Related papers (2022-03-18T08:18:03Z) - Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks at automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z) - Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks [0.7883722807601676]
Even production systems, such as self-driving cars and ML-as-a-service offerings, are susceptible to adversarial inputs.
Can perturbed inputs be attributed to the methods used to generate the attack?
We introduce the concept of adversarial attack attribution and create a simple supervised learning experimental framework to examine the feasibility of discovering attributable signals in adversarial attacks.
arXiv Detail & Related papers (2021-01-08T08:16:41Z) - Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z) - Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems [2.86989372262348]
This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples.
It also explores how such samples can support the robustness of supervised models using adversarial training.
Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points when adversarial samples were present.
arXiv Detail & Related papers (2020-04-10T12:05:33Z) - Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.