A Robust Adversary Detection-Deactivation Method for Metaverse-oriented
Collaborative Deep Learning
- URL: http://arxiv.org/abs/2401.01895v1
- Date: Sat, 21 Oct 2023 06:45:18 GMT
- Title: A Robust Adversary Detection-Deactivation Method for Metaverse-oriented
Collaborative Deep Learning
- Authors: Pengfei Li, Zhibo Zhang, Ameena S. Al-Sumaiti, Naoufel Werghi, and
Chan Yeob Yeun
- Abstract summary: This paper proposes an adversary detection-deactivation method, which can limit and isolate the access of potential malicious participants.
A detailed protection analysis has been conducted on a Multiview CDL case; the results show that the protocol can effectively prevent harmful access through heuristic manner analysis.
- Score: 13.131323206843733
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The metaverse aims to create a digital environment that transfers the
real world onto an online platform supported by large volumes of real-time
interaction. Pre-trained Artificial Intelligence (AI) models are demonstrating a
growing ability to help the metaverse deliver excellent responsiveness with
negligible delay, and many large models are now trained collaboratively by
various participants in a paradigm known as collaborative deep learning (CDL).
However, several security weaknesses threaten the safety of the CDL training
process and can lead to fatal attacks on either the pre-trained large model or
the sensitive local data sets held by individual entities. In CDL, malicious
participants can hide among the innocent majority and silently upload deceptive
parameters to degrade model performance, or they can abuse the downloaded
parameters to construct a Generative Adversarial Network (GAN) and illegally
acquire the private information of others. To address these vulnerabilities,
this paper proposes an adversary detection-deactivation method that limits and
isolates the access of potential malicious participants and quarantines and
disables GAN attacks or the harmful backpropagation of threatening received
gradients. A detailed protection analysis has been conducted on a Multiview CDL
case; the results show that the protocol can effectively prevent harmful access
through heuristic manner analysis and can protect the existing model by swiftly
checking received gradients using only one low-cost branch with an embedded
firewall.
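As a rough, hypothetical illustration of the screening idea described in the abstract (not the authors' actual protocol), the sketch below shows an aggregator-side check in which every received gradient passes through one low-cost branch (here a norm bound plus a cosine-similarity test against a trusted reference gradient) before it is applied, and participants whose updates repeatedly fail are quarantined. All names and thresholds (GradientFirewall, norm_bound, max_strikes) are assumptions made for the example.

    import numpy as np

    class GradientFirewall:
        # Hypothetical low-cost screening branch for received CDL gradients:
        # suspicious updates are rejected before backpropagation, and repeat
        # offenders are quarantined (their access is deactivated).

        def __init__(self, norm_bound=10.0, cos_threshold=-0.2, max_strikes=3):
            self.norm_bound = norm_bound        # reject abnormally large updates
            self.cos_threshold = cos_threshold  # reject updates opposing the reference
            self.max_strikes = max_strikes
            self.strikes = {}                   # participant id -> number of failed checks
            self.quarantined = set()

        def _suspicious(self, grad, reference_grad):
            norm = np.linalg.norm(grad)
            if norm > self.norm_bound:
                return True
            cos = float(grad @ reference_grad) / (norm * np.linalg.norm(reference_grad) + 1e-12)
            return cos < self.cos_threshold

        def screen(self, participant_id, grad, reference_grad):
            # Return the gradient if it passes the check, otherwise None.
            if participant_id in self.quarantined:
                return None
            if self._suspicious(grad, reference_grad):
                self.strikes[participant_id] = self.strikes.get(participant_id, 0) + 1
                if self.strikes[participant_id] >= self.max_strikes:
                    self.quarantined.add(participant_id)   # deactivate further access
                return None
            return grad

    # Usage: only screened gradients reach aggregation and backpropagation.
    firewall = GradientFirewall()
    reference = 0.01 * np.ones(100)              # e.g., gradient from a trusted holdout batch
    updates = {"alice": 0.01 * np.random.randn(100),
               "mallory": 50.0 * np.random.randn(100)}
    accepted = []
    for pid, grad in updates.items():
        screened = firewall.screen(pid, grad, reference)
        if screened is not None:
            accepted.append(screened)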
Related papers
- Protecting Feed-Forward Networks from Adversarial Attacks Using Predictive Coding [0.20718016474717196]
An adversarial example is a modified input image designed to cause a Machine Learning (ML) model to make a mistake.
This study presents a practical and effective solution -- using predictive coding networks (PCnets) as an auxiliary step for adversarial defence.
arXiv Detail & Related papers (2024-10-31T21:38:05Z)
- Aggressive or Imperceptible, or Both: Network Pruning Assisted Hybrid Byzantines in Federated Learning [6.384138583754105]
Federated learning (FL) has been introduced to enable a large number of clients, possibly mobile devices, to collaborate on generating a generalized machine learning model.
Due to the participation of a large number of clients, it is often difficult to profile and verify each client, which leads to a security threat.
We introduce a hybrid sparse Byzantine attack that is composed of two parts: one exhibiting a sparse nature and attacking only certain NN locations with higher sensitivity, and the other being more silent but accumulating over time.
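A minimal sketch of what such a two-part update could look like, assuming the "sensitive locations" are approximated by the top-magnitude gradient coordinates; the scaling factors and drift construction are illustrative assumptions, not the paper's exact attack.

    import numpy as np

    def hybrid_byzantine_update(honest_grad, k_frac=0.01, spike_scale=20.0, drift_scale=1e-3):
        # Illustrative two-part malicious update (not the paper's exact construction).
        # Part 1 (sparse, aggressive): flip and amplify only the top-k largest-magnitude
        # coordinates, used here as a stand-in for the most 'sensitive' NN locations.
        # Part 2 (silent, accumulating): add a tiny fixed-direction drift that is
        # negligible in any single round but compounds over many rounds.
        grad = np.asarray(honest_grad, dtype=float).copy()
        k = max(1, int(k_frac * grad.size))
        top_k = np.argsort(np.abs(grad))[-k:]
        grad[top_k] = -spike_scale * grad[top_k]

        rng = np.random.default_rng(0)            # fixed seed -> same drift direction every round
        drift = rng.standard_normal(grad.size)
        drift /= np.linalg.norm(drift)
        return grad + drift_scale * drift

    # Usage: a malicious client would submit this in place of its honest gradient.
    malicious = hybrid_byzantine_update(0.01 * np.random.randn(1000))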
arXiv Detail & Related papers (2024-04-09T11:42:32Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
Backdoor attacks, however, subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
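The paper computes this metric via formal verification; the sketch below is only an empirical proxy, estimating an adversarial rate as the fraction of states for which a small PGD-found perturbation changes the policy's chosen action (epsilon, step count, and the policy interface are assumptions).

    import torch

    def empirical_adversarial_rate(policy, states, epsilon=0.05, steps=10, alpha=0.01):
        # Fraction of states for which a small L-inf perturbation (found by PGD,
        # not by the formal verification used in the paper) changes the action.
        flipped = 0
        for s in states:
            s = s.clone().detach()
            with torch.no_grad():
                base_action = policy(s.unsqueeze(0)).argmax(dim=1).item()
            delta = torch.zeros_like(s, requires_grad=True)
            for _ in range(steps):
                logits = policy((s + delta).unsqueeze(0))
                loss = -logits[0, base_action]        # push the original action's score down
                loss.backward()
                with torch.no_grad():
                    delta += alpha * delta.grad.sign()
                    delta.clamp_(-epsilon, epsilon)
                    delta.grad.zero_()
            with torch.no_grad():
                if policy((s + delta).unsqueeze(0)).argmax(dim=1).item() != base_action:
                    flipped += 1
        return flipped / len(states)

    # Usage (hypothetical): rate = empirical_adversarial_rate(policy_net, test_states)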
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- On the Robustness of Split Learning against Adversarial Attacks [15.169426253647362]
Split learning enables collaborative deep learning model training by avoiding direct sharing of raw data and model details.
Existing adversarial attacks mostly focus on the centralized setting instead of the collaborative setting.
This paper aims to evaluate the robustness of split learning against adversarial attacks.
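For context, a minimal sketch of vanilla split learning, in which the client keeps the early layers and its raw data while the server holds the rest, and only the cut-layer activations and their gradients cross the boundary; layer sizes and optimizers are arbitrary assumptions.

    import torch
    import torch.nn as nn

    client_net = nn.Sequential(nn.Linear(784, 256), nn.ReLU())   # client-side layers
    server_net = nn.Sequential(nn.Linear(256, 10))               # server-side layers
    client_opt = torch.optim.SGD(client_net.parameters(), lr=0.1)
    server_opt = torch.optim.SGD(server_net.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    def split_training_step(x, y):
        # Client-side forward up to the cut layer; only 'smashed' activations leave.
        smashed = client_net(x)
        sent = smashed.detach().requires_grad_(True)   # what the server receives

        # Server-side forward, loss, and backward down to the cut layer.
        server_opt.zero_grad()
        loss = loss_fn(server_net(sent), y)
        loss.backward()
        server_opt.step()

        # Client finishes backpropagation using the returned activation gradient.
        client_opt.zero_grad()
        smashed.backward(sent.grad)
        client_opt.step()
        return loss.item()

    # Usage with dummy data standing in for the client's private batch.
    x = torch.randn(32, 784)
    y = torch.randint(0, 10, (32,))
    print(split_training_step(x, y))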
arXiv Detail & Related papers (2023-07-16T01:45:00Z)
- Deep PackGen: A Deep Reinforcement Learning Framework for Adversarial Network Packet Generation [3.5574619538026044]
Recent advancements in artificial intelligence (AI) and machine learning (ML) algorithms have enhanced the security posture of cybersecurity operations centers (defenders).
Recent studies have found that the perturbation of flow-based and packet-based features can deceive ML models, but these approaches have limitations.
Our framework, Deep PackGen, employs deep reinforcement learning to generate adversarial packets and aims to overcome the limitations of approaches in the literature.
arXiv Detail & Related papers (2023-05-18T15:32:32Z)
- On the Evaluation of User Privacy in Deep Neural Networks using Timing Side Channel [14.350301915592027]
We identify and report a novel data-dependent timing side-channel leakage (termed Class Leakage) in Deep Learning (DL) implementations.
We demonstrate a practical inference-time attack where an adversary with user privilege and hard-label black-box access to an ML model can exploit Class Leakage.
We develop an easy-to-implement countermeasure that makes the branching operation constant-time, which alleviates the Class Leakage.
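A conceptual sketch of the branch-free pattern behind such a countermeasure, not the paper's implementation (and Python offers no hard constant-time guarantees); the hypothetical apply_extra_filtering stands in for whatever class-specific processing creates the timing gap.

    import numpy as np

    def apply_extra_filtering(output):
        # Hypothetical class-specific post-processing that causes the timing gap.
        return 0.5 * output

    def postprocess_leaky(pred_class, sensitive_class, output):
        # Data-dependent branch: the slow path runs only for the sensitive class,
        # so timing many queries reveals which class was predicted.
        if pred_class == sensitive_class:
            output = apply_extra_filtering(output)
        return output

    def postprocess_branch_free(pred_class, sensitive_class, output):
        # Branch-free variant: both paths are always evaluated and the result is
        # selected arithmetically, so runtime no longer depends on the prediction.
        filtered = apply_extra_filtering(output)
        mask = float(pred_class == sensitive_class)    # 1.0 or 0.0
        return mask * filtered + (1.0 - mask) * output

    # Usage with a dummy output vector.
    out = np.ones(10)
    print(postprocess_branch_free(pred_class=3, sensitive_class=3, output=out))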
arXiv Detail & Related papers (2022-08-01T19:38:16Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
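A minimal sketch of one common form of adversarial training for a regression model, using FGSM perturbations on an MSE loss; the network shape, epsilon, and the clean/adversarial mix are assumptions rather than the paper's exact setup.

    import torch
    import torch.nn as nn

    # Hypothetical regressor, e.g., mapping channel features to power allocations.
    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 8))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    mse = nn.MSELoss()

    def adversarial_training_step(x, y, epsilon=0.05):
        # FGSM: perturb the input in the direction that increases the regression loss.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = mse(model(x_adv), y)
        loss.backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

        # Train on an even mix of clean and adversarial examples.
        opt.zero_grad()
        total = 0.5 * mse(model(x), y) + 0.5 * mse(model(x_adv), y)
        total.backward()
        opt.step()
        return total.item()

    # Usage with dummy channel features and target allocations.
    x = torch.randn(32, 64)
    y = torch.rand(32, 8)
    print(adversarial_training_step(x, y))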
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, can work under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
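As a simplified illustration of the label-only idea in the last entry above (not the paper's exact attack), the sketch below estimates a confidence-like membership signal from repeated perturbed queries that return only hard labels; the noise scale and decision threshold are assumptions.

    import numpy as np

    def label_stability_score(query_model, x, y, n_queries=50, sigma=0.05, seed=0):
        # query_model(x) is assumed to return only a hard label. The score is the
        # fraction of noisy queries that still return the original label y; samples
        # the victim model was trained on tend to be classified more stably, so a
        # high score suggests membership.
        rng = np.random.default_rng(seed)
        agree = 0
        for _ in range(n_queries):
            x_noisy = x + sigma * rng.standard_normal(x.shape)
            if query_model(x_noisy) == y:
                agree += 1
        return agree / n_queries

    def infer_membership(query_model, x, y, threshold=0.9):
        return label_stability_score(query_model, x, y) >= threshold

    # Usage (hypothetical): query_model wraps the victim API and returns a hard label.
    # member = infer_membership(query_model, x_candidate, y_candidate)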