Low-Communication Resilient Distributed Estimation Algorithm Based on Memory Mechanism
- URL: http://arxiv.org/abs/2508.02705v1
- Date: Wed, 30 Jul 2025 04:05:15 GMT
- Title: Low-Communication Resilient Distributed Estimation Algorithm Based on Memory Mechanism
- Authors: Wei Li, Limei Hu, Feng Chen, Ye Yao
- Abstract summary: In adversarial networks, the accurate estimation of unknown parameters is hindered by attacked nodes or links. This brief proposes a low-communication resilient distributed estimation algorithm. Simulation results demonstrate that the proposed algorithm achieves superior performance with less communication cost compared to other algorithms.
- Score: 6.161650662586304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multi-task adversarial networks, the accurate estimation of unknown parameters in a distributed algorithm is hindered by attacked nodes or links. To tackle this challenge, this brief proposes a low-communication resilient distributed estimation algorithm. First, a reputation-based node selection strategy is introduced that allows each node to communicate with a more reliable subset of its neighbors. Subsequently, to discern trustworthy intermediate estimates, a Weighted Support Vector Data Description (W-SVDD) model is trained on the memory data. The trained model reinforces the resilience of the distributed estimation process against the impact of attacked nodes or links. Additionally, an event-triggered mechanism is introduced to minimize ineffective updates to the W-SVDD model, and a suitable threshold is derived under stated assumptions. The convergence of the algorithm is analyzed. Finally, simulation results demonstrate that the proposed algorithm achieves superior performance at a lower communication cost than competing algorithms.
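The abstract combines three mechanisms: reputation-based neighbor selection, a W-SVDD check on incoming intermediate estimates, and an event-triggered model update. The sketch below illustrates how these pieces could fit together; it is not the authors' code, the hypersphere test is a simplified stand-in for W-SVDD, and all class and parameter names (`ResilientNode`, `trigger_tol`, etc.) are illustrative.

```python
# Minimal sketch (not the authors' code) of the three mechanisms in the abstract.
import numpy as np

class ResilientNode:
    def __init__(self, dim, n_neighbors, k_reliable, memory_size=50, trigger_tol=0.1):
        self.w = np.zeros(dim)                      # local estimate
        self.reputation = np.ones(n_neighbors)      # one score per neighbor
        self.k = k_reliable                         # how many neighbors to trust
        self.memory = []                            # past accepted estimates
        self.memory_size = memory_size
        self.center, self.radius = None, None       # hypersphere (W-SVDD stand-in)
        self.trigger_tol = trigger_tol

    def select_neighbors(self):
        """Reputation-based selection: talk only to the k most reliable neighbors."""
        return np.argsort(self.reputation)[-self.k:]

    def is_trustworthy(self, estimate):
        """Accept an intermediate estimate only if it falls inside the sphere."""
        if self.center is None:
            return True                             # no model trained yet
        return np.linalg.norm(estimate - self.center) <= self.radius

    def maybe_retrain(self):
        """Event-triggered update: refit only when the memory has drifted enough."""
        new_center = np.mean(self.memory, axis=0)
        if self.center is None or np.linalg.norm(new_center - self.center) > self.trigger_tol:
            self.center = new_center
            dists = [np.linalg.norm(m - self.center) for m in self.memory]
            self.radius = np.quantile(dists, 0.95)  # cover most of the memory

    def fuse(self, neighbor_estimates, mu=0.5):
        """One diffusion-style combine step over trustworthy neighbors only."""
        trusted = []
        for j in self.select_neighbors():
            est = neighbor_estimates[j]
            if self.is_trustworthy(est):
                trusted.append(est)
                self.reputation[j] += 1.0           # reward consistent neighbors
            else:
                self.reputation[j] *= 0.5           # penalize suspicious ones
        if trusted:
            self.w = (1 - mu) * self.w + mu * np.mean(trusted, axis=0)
            self.memory = (self.memory + trusted)[-self.memory_size:]
            self.maybe_retrain()
```

Here the trigger fires only when the memory centroid moves more than `trigger_tol`, a crude approximation of the derived threshold that the paper uses to suppress ineffective W-SVDD updates.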
Related papers
- Latent Diffusion Model Based Denoising Receiver for 6G Semantic Communication: From Stochastic Differential Theory to Application [11.385703484113552]
We propose a novel semantic communication framework empowered by generative artificial intelligence (GAI). A latent diffusion model (LDM)-based semantic communication framework is proposed that combines a variational autoencoder for semantic feature extraction. The proposed system is a training-free framework that supports zero-shot generalization and achieves superior performance under low-SNR and out-of-distribution conditions.
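As a rough illustration of the receiver-side idea, the sketch below treats the channel-corrupted latent as a partially diffused sample and runs a few deterministic (DDIM-style) reverse steps with a pretrained noise predictor before VAE decoding; `noise_model`, the schedule `alphas_cumprod`, and the SNR-to-timestep mapping `start_t` are all assumed components, not the paper's actual interfaces.

```python
import torch

@torch.no_grad()
def denoise_received_latent(z_noisy, noise_model, alphas_cumprod, start_t):
    """Run a few deterministic reverse-diffusion steps on a received latent."""
    z = z_noisy
    for t in range(start_t, 0, -1):
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps = noise_model(z, torch.tensor([t]))             # predicted noise
        z0 = (z - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # clean-latent estimate
        z = a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps  # DDIM update (eta = 0)
    return z                                # pass to the VAE decoder afterwards
```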
arXiv Detail & Related papers (2025-06-06T03:20:32Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST), a recently proposed and highly effective technique for distributed training.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
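A minimal sketch of the IST scheme the summary refers to, under the assumption of a one-hidden-layer network with scalar output: hidden units are partitioned across workers, each worker trains only its slice locally, and the slices are reassembled. Function and variable names are illustrative.

```python
import numpy as np

def ist_round(W1, W2, X, y, n_workers, lr=0.01, local_steps=10):
    """One IST round: partition hidden units, train each slice, reassemble."""
    hidden = W1.shape[0]
    parts = np.array_split(np.random.permutation(hidden), n_workers)
    for idx in parts:                      # in practice these run in parallel
        for _ in range(local_steps):
            h = np.maximum(X @ W1[idx].T, 0.0)            # worker's ReLU subnetwork
            err = h @ W2[idx] - y                         # scalar-output residual
            gW2 = h.T @ err / len(y)
            gW1 = ((err[:, None] * W2[idx]) * (h > 0)).T @ X / len(y)
            W2[idx] -= lr * gW2
            W1[idx] -= lr * gW1
    return W1, W2                          # reassembled full network
```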
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Contraction-Guided Adaptive Partitioning for Reachability Analysis of Neural Network Controlled Systems [5.359060261460183]
We present a contraction-guided adaptive partitioning algorithm for improving interval-valued reachable set estimates in a nonlinear feedback loop.
By leveraging a decoupling of the neural network verification step and the reachability partitioning layers, the algorithm can provide accuracy improvements at little computational cost.
We report a sizable improvement in the accuracy of reachable set estimation in a fraction of the runtime as compared to state-of-the-art methods.
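The following toy sketch conveys the adaptive-partitioning idea: the input box is repeatedly bisected along its widest axis, spending refinement effort on the piece whose interval image is loosest. The paper's contraction-guided criterion is replaced here by a simple output-width heuristic, and `f_lower`/`f_upper` stand in for a user-supplied inclusion function of the closed loop.

```python
import numpy as np

def interval_image(f_lower, f_upper, box):
    """Inclusion function of the closed loop; box = (lo, hi) arrays."""
    lo, hi = box
    return f_lower(lo, hi), f_upper(lo, hi)

def adaptive_partition(f_lower, f_upper, box, budget=16):
    pieces = [box]
    while len(pieces) < budget:
        # refine the piece whose interval image is widest (loosest bound)
        imgs = [interval_image(f_lower, f_upper, b) for b in pieces]
        i = max(range(len(pieces)), key=lambda k: np.max(imgs[k][1] - imgs[k][0]))
        lo, hi = pieces.pop(i)
        axis = int(np.argmax(hi - lo))             # bisect along the widest axis
        mid = 0.5 * (lo[axis] + hi[axis])
        left_hi, right_lo = hi.copy(), lo.copy()
        left_hi[axis], right_lo[axis] = mid, mid
        pieces += [(lo, left_hi), (right_lo, hi)]
    imgs = [interval_image(f_lower, f_upper, b) for b in pieces]
    lo = np.min([im[0] for im in imgs], axis=0)    # union of per-piece images
    hi = np.max([im[1] for im in imgs], axis=0)
    return lo, hi
```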
arXiv Detail & Related papers (2023-04-07T14:43:21Z)
- Asynchronous Bayesian Learning over a Network [18.448653247778143]
We present a practical asynchronous data fusion model for networked agents to perform distributed Bayesian learning without sharing raw data.
Our algorithm uses a gossip-based approach where pairs of randomly selected agents employ unadjusted Langevin dynamics for parameter sampling.
We introduce an event-triggered mechanism to further reduce communication between gossiping agents.
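A hedged sketch of the scheme described above: a randomly chosen pair of agents each take an unadjusted Langevin dynamics (ULA) step on their local log-posterior and then average their samples, with a simple movement-based event trigger suppressing the exchange. The step size and trigger rule are illustrative, not the paper's.

```python
import numpy as np

def ula_step(theta, grad_log_post, step):
    """One unadjusted Langevin dynamics step on a local log-posterior."""
    noise = np.sqrt(2 * step) * np.random.randn(*theta.shape)
    return theta + step * grad_log_post(theta) + noise

def gossip_round(thetas, grad_log_posts, step=1e-3, trigger=1e-2):
    """Pick a random pair of agents, take local ULA steps, maybe exchange."""
    i, j = np.random.choice(len(thetas), size=2, replace=False)
    old_i, old_j = thetas[i].copy(), thetas[j].copy()
    thetas[i] = ula_step(thetas[i], grad_log_posts[i], step)
    thetas[j] = ula_step(thetas[j], grad_log_posts[j], step)
    # event trigger: skip the exchange if neither sample moved noticeably
    moved = max(np.linalg.norm(thetas[i] - old_i), np.linalg.norm(thetas[j] - old_j))
    if moved > trigger:
        avg = 0.5 * (thetas[i] + thetas[j])
        thetas[i], thetas[j] = avg, avg.copy()
    return thetas
```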
arXiv Detail & Related papers (2022-11-16T01:21:36Z)
- Large-Scale Sequential Learning for Recommender and Engineering Systems [91.3755431537592]
In this thesis, we focus on the design of automatic algorithms that provide personalized ranking by adapting to current conditions.
For the former, we propose a novel algorithm called SAROS that takes both kinds of feedback into account when learning over the sequence of interactions.
The proposed idea of taking neighbouring lines into account yields statistically significant improvements over the initial approach to fault detection in power grids.
arXiv Detail & Related papers (2022-05-13T21:09:41Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
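The generic algorithm-unfolding construction behind REST can be sketched with plain ISTA: each iteration of the sparse-recovery solver becomes a network layer with a learnable step size and soft-threshold, trained end-to-end. REST unrolls a robustified recovery problem, so the block below shows only the unrolling pattern, not the REST architecture itself.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """ISTA for min ||Ax - y||^2 + lam*||x||_1, unrolled into trainable layers."""
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.register_buffer("A", A)                          # forward operator (m, n)
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresholds = nn.Parameter(torch.full((n_layers,), 0.01))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for t in range(len(self.steps)):
            grad = (x @ self.A.T - y) @ self.A                # data-fit gradient
            z = x - self.steps[t] * grad
            x = torch.sign(z) * torch.relu(z.abs() - self.thresholds[t])  # soft-threshold
        return x
```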
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Decentralized Statistical Inference with Unrolled Graph Neural Networks [26.025935320024665]
We propose a learning-based framework that unrolls decentralized optimization algorithms into graph neural networks (GNNs).
By minimizing the recovery error via end-to-end training, this learning-based framework resolves the model mismatch issue.
Our convergence analysis reveals that the learned model parameters can accelerate convergence and substantially reduce the recovery error.
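A compact sketch of the unrolling idea in this setting, assuming decentralized gradient descent as the base algorithm: each GNN-like layer mixes neighbor states through the graph and takes a local gradient step, and the mixing gains and step sizes are the learnable parameters tuned by end-to-end training.

```python
import torch
import torch.nn as nn

class UnrolledDecentralizedGD(nn.Module):
    """Decentralized gradient descent unrolled into GNN-like layers."""
    def __init__(self, adjacency, n_layers=8):
        super().__init__()
        self.register_buffer("W", adjacency)                 # fixed graph mixing matrix
        self.mix = nn.Parameter(torch.ones(n_layers))        # learnable mixing gains
        self.step = nn.Parameter(torch.full((n_layers,), 0.05))

    def forward(self, x, local_grad):
        # x: (n_nodes, dim) node states; local_grad: per-node gradient callable
        for t in range(len(self.step)):
            x = x + self.mix[t] * (self.W @ x - x)           # graph aggregation
            x = x - self.step[t] * local_grad(x)             # local descent step
        return x
```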
arXiv Detail & Related papers (2021-04-04T07:52:34Z)
- Robust and integrative Bayesian neural networks for likelihood-free parameter inference [0.0]
State-of-the-art neural network-based methods for learning summary statistics have delivered promising results for simulation-based likelihood-free parameter inference.
This work proposes a robust integrated approach that learns summary statistics using Bayesian neural networks, and directly estimates the posterior density using categorical distributions.
arXiv Detail & Related papers (2021-02-12T13:45:23Z)
- Distributed ADMM with Synergetic Communication and Computation [39.930150618785355]
We propose a novel distributed alternating direction method of multipliers (ADMM) algorithm with synergetic communication and computation.
In the proposed algorithm, each node interacts with only part of its neighboring nodes, the number of which is progressively determined according to a searching procedure.
We prove the convergence of the proposed algorithm and provide an upper bound on the convergence variance introduced by randomness.
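The following toy sketch captures the partial-neighbor idea on scalar quadratic local objectives: each node contacts only a random subset of its neighbors per round and performs the usual consensus-ADMM primal and dual updates over those links. The subset-size schedule, objective form, and dual bookkeeping are placeholders, not the paper's algorithm.

```python
import numpy as np

def admm_round(x, duals, neighbors, local_a, local_b, rho=1.0, subset_size=2):
    """One round over all nodes; duals maps (i, j) -> scalar, initialized to 0."""
    new_x = x.copy()
    for i in range(len(x)):
        active = list(np.random.permutation(neighbors[i]))[:subset_size]
        if not active:
            continue
        # minimize 0.5 * a_i * (x_i - b_i)^2 plus consensus penalty on active links
        num = local_a[i] * local_b[i] + sum(rho * x[j] - duals[(i, j)] for j in active)
        new_x[i] = num / (local_a[i] + rho * len(active))
        for j in active:
            duals[(i, j)] += rho * (new_x[i] - x[j])   # dual ascent on each link
    return new_x, duals
```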
arXiv Detail & Related papers (2020-09-29T08:36:26Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed algorithms for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while retaining its theoretical guarantees.
Our experiments on several benchmark datasets demonstrate the effectiveness of our algorithm and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
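A hedged sketch of the distributional idea: rather than a single worst-case perturbation, each batch carries a perturbation distribution (here a zero-mean Gaussian with a learnable log-std) whose expected loss is maximized before the model is updated on samples from it. ADT's actual parameterization and entropy regularizer are richer than this.

```python
import torch
import torch.nn.functional as F

def adt_step(model, x, y, opt, eps=8/255, n_samples=4, inner_lr=0.1, inner_steps=5):
    """Maximize expected loss over a Gaussian perturbation distribution,
    then take one model update on samples from that distribution."""
    log_std = torch.full_like(x, -2.0, requires_grad=True)
    for _ in range(inner_steps):                     # inner maximization
        noise = torch.randn(n_samples, *x.shape, device=x.device)
        delta = (noise * log_std.exp()).clamp(-eps, eps)
        loss = F.cross_entropy(model((x + delta).flatten(0, 1)), y.repeat(n_samples))
        grad, = torch.autograd.grad(loss, log_std)
        log_std = (log_std + inner_lr * grad).detach().requires_grad_(True)
    opt.zero_grad()                                  # outer minimization
    noise = torch.randn(n_samples, *x.shape, device=x.device)
    delta = (noise * log_std.exp().detach()).clamp(-eps, eps)
    F.cross_entropy(model((x + delta).flatten(0, 1)), y.repeat(n_samples)).backward()
    opt.step()
```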
arXiv Detail & Related papers (2020-02-14T12:36:59Z)