Design and Analysis of Robust Resilient Diffusion over Multi-Task
Networks Against Byzantine Attacks
- URL: http://arxiv.org/abs/2206.12749v1
- Date: Sat, 25 Jun 2022 22:58:51 GMT
- Title: Design and Analysis of Robust Resilient Diffusion over Multi-Task
Networks Against Byzantine Attacks
- Authors: Tao Yu, Rodrigo C. de Lamare and Yi Yu
- Abstract summary: This paper studies distributed diffusion adaptation over clustered multi-task networks in the presence of impulsive interferences and Byzantine attacks.
We develop a robust resilient diffusion least mean Geman-McClure-estimation (RDLMG) algorithm based on the cost function used by the Geman-McClure estimator.
Numerical results evaluate the proposed RDLMG algorithm in applications to multi-target localization and multi-task spectrum sensing.
- Score: 38.740376971569695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper studies distributed diffusion adaptation over clustered multi-task
networks in the presence of impulsive interferences and Byzantine attacks. We
develop a robust resilient diffusion least mean Geman-McClure-estimation
(RDLMG) algorithm based on the cost function used by the Geman-McClure
estimator, which can reduce the sensitivity to large outliers and make the
algorithm robust under impulsive interferences. Moreover, the mean sub-sequence
reduced method, in which each node discards the extreme value information of
cost contributions received from its neighbors, can make the network resilient
against Byzantine attacks. In this regard, the proposed RDLMG algorithm ensures
that all normal nodes converge to their ideal states through cooperation among
nodes. A statistical analysis of the RDLMG algorithm is also carried out in
terms of mean and mean-square performances. Numerical results evaluate the
proposed RDLMG algorithm in applications to multi-target localization and
multi-task spectrum sensing.
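The abstract names two ingredients: a Geman-McClure cost that bounds the influence of large impulsive errors, and a mean sub-sequence reduced (MSR) screening step in which each node discards extreme cost contributions received from its neighbors before cooperating. The sketch below is a minimal illustration of those two ideas only, assuming the standard Geman-McClure cost rho(e) = e^2 / (sigma^2 + e^2) and a symmetric trimming parameter b; the function names, step size, and update form are illustrative assumptions, not the authors' equations.

    import numpy as np

    def gm_weight(err, sigma=1.0):
        # Weight derived from the standard Geman-McClure cost
        # rho(e) = e^2 / (sigma^2 + e^2); large (impulsive) errors receive a
        # vanishing weight, which is the robustness mechanism the abstract describes.
        return sigma**2 / (sigma**2 + err**2) ** 2

    def msr_trim(values, b=1):
        # Mean sub-sequence reduced screening: drop the b smallest and b largest
        # neighbor cost contributions, then average the rest (Byzantine resilience).
        vals = np.sort(np.asarray(values, dtype=float))
        kept = vals[b:len(vals) - b] if len(vals) > 2 * b else vals
        return kept.mean()

    def rdlmg_like_step(w_k, x_k, d_k, neighbor_costs, mu=0.05, sigma=1.0, b=1):
        # One illustrative adapt step at node k: a Geman-McClure-weighted LMS-style
        # update on the local error, with the neighborhood cost information screened
        # by msr_trim before it would enter the cooperation (combine) step.
        e_k = d_k - x_k @ w_k                      # local estimation error
        screened = msr_trim(neighbor_costs, b=b)   # resilient neighborhood statistic
        w_next = w_k + mu * gm_weight(e_k, sigma) * e_k * x_k
        return w_next, screened

Trimming before averaging is what allows normal nodes to ignore a bounded number of Byzantine neighbors; the actual RDLMG recursion, its step-size conditions, and the mean and mean-square analysis are given in the paper itself.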
Related papers
- Breaking the Curse of Multiagency in Robust Multi-Agent Reinforcement Learning [37.80275600302316]
Distributionally robust Markov games (RMGs) have been proposed to enhance robustness in multi-agent reinforcement learning (MARL).
A notorious yet open challenge is whether RMGs can escape the curse of multiagency.
The paper presents the first algorithm to break the curse of multiagency for RMGs.
arXiv Detail & Related papers (2024-09-30T08:09:41Z) - Distributionally Robust Inverse Reinforcement Learning for Identifying Multi-Agent Coordinated Sensing [13.440621354486906]
We derive a minimax distributionally robust inverse reinforcement learning (IRL) algorithm to reconstruct the utility functions of a multi-agent sensing system.
We prove the equivalence between this robust estimation and a semi-infinite optimization reformulation, and we propose a consistent algorithm to compute solutions.
arXiv Detail & Related papers (2024-09-22T17:44:32Z) - Rethinking Clustered Federated Learning in NOMA Enhanced Wireless
Networks [60.09912912343705]
This study explores the benefits of integrating the novel clustered federated learning (CFL) approach with non-independent and identically distributed (non-IID) datasets.
A detailed theoretical analysis is presented of the generalization gap, which measures the degree of non-IID-ness in the data distribution.
Based on an analysis of these properties, solutions are proposed to address the challenges posed by non-IID conditions.
arXiv Detail & Related papers (2024-03-05T17:49:09Z) - High Efficiency Inference Accelerating Algorithm for NOMA-based Mobile
Edge Computing [23.88527790721402]
Splitting the inference model among the device, edge server, and cloud can greatly improve the performance of edge intelligence (EI).
NOMA, a key supporting technology of B5G/6G, can achieve massive connectivity and high spectrum efficiency.
We propose an effective communication and computing resource allocation algorithm to accelerate model inference at the edge.
arXiv Detail & Related papers (2023-12-26T02:05:52Z) - Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Coding for Distributed Multi-Agent Reinforcement Learning [12.366967700730449]
Stragglers arise frequently in distributed learning systems due to various system disturbances.
We propose a coded distributed learning framework, which speeds up the training of MARL algorithms in the presence of stragglers.
Different coding schemes, including maximum distance separable (MDS) codes, random sparse codes, replication-based codes, and regular low-density parity-check (LDPC) codes, are also investigated.
arXiv Detail & Related papers (2021-01-07T00:22:34Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly
Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed that utilizes the Bayesian optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, low false-alarm rate, and recall.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Study of Diffusion Normalized Least Mean M-estimate Algorithms [0.8749675983608171]
This work proposes diffusion normalized least mean M-estimate algorithms based on the modified Huber function.
We analyze the transient, steady-state and stability behaviors of the algorithms in a unified framework.
Simulations in various impulsive noise scenarios show that the proposed algorithms are superior to some existing diffusion algorithms.
arXiv Detail & Related papers (2020-04-20T00:28:41Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)