Bayesian Variational Federated Learning and Unlearning in Decentralized
Networks
- URL: http://arxiv.org/abs/2104.03834v1
- Date: Thu, 8 Apr 2021 15:18:35 GMT
- Title: Bayesian Variational Federated Learning and Unlearning in Decentralized
Networks
- Authors: Jinu Gong, Osvaldo Simeone, Joonhyuk Kang
- Abstract summary: This paper studies federated learning and unlearning in a decentralized network within a Bayesian framework.
It specifically develops federated variational inference (VI) solutions based on the decentralized solution of local free energy minimization problems.
- Score: 37.62407138487514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Bayesian learning offers a principled framework for the definition
of collaborative training algorithms that are able to quantify epistemic
uncertainty and to produce trustworthy decisions. Upon the completion of
collaborative training, an agent may decide to exercise her legal "right to be
forgotten", which calls for her contribution to the jointly trained model to be
deleted and discarded. This paper studies federated learning and unlearning in
a decentralized network within a Bayesian framework. It specifically develops
federated variational inference (VI) solutions based on the decentralized
solution of local free energy minimization problems within exponential-family
models and on local gossip-driven communication. The proposed protocols are
demonstrated to yield efficient unlearning mechanisms.
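To make the gossip-driven, exponential-family setting concrete, here is a minimal sketch (an illustrative assumption, not the paper's exact protocol): agents hold Gaussian variational posteriors in natural-parameter form and repeatedly average them over randomly chosen edges of the communication graph, so all agents drift toward a common posterior. One intuition for why exponential families are convenient here is that a contribution stored as a natural-parameter increment can, in principle, be subtracted out again, which is the idea behind efficient unlearning.

```python
import random

# Illustrative sketch only: three agents hold Gaussian posteriors as natural
# parameters (eta1 = mu / var, eta2 = -1 / (2 * var)) and run pairwise gossip.

def gossip_round(params, edges):
    """One gossip step: a random edge (i, j) averages both agents' natural parameters."""
    i, j = random.choice(edges)
    avg = [(a + b) / 2 for a, b in zip(params[i], params[j])]
    params[i] = list(avg)
    params[j] = list(avg)

# Agents 0-1-2 on a line graph, each starting from a different local posterior.
params = {0: [2.0, -0.5], 1: [4.0, -0.25], 2: [6.0, -0.125]}
edges = [(0, 1), (1, 2)]

random.seed(0)
for _ in range(200):
    gossip_round(params, edges)

# Pairwise averaging preserves the sum of parameters, so all agents converge
# to the network-wide average of the initial natural parameters.
```

Since each averaging step preserves the parameter sum, the fixed point is the network average, mirroring how decentralized VI aggregates local posteriors without a central server.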
Related papers
- Event-Triggered Gossip for Distributed Learning [61.70659996356528]
We develop a new event-triggered gossip framework for distributed learning to reduce inter-node communication. Inter-node communication is reduced by 71.61% with only a marginal performance loss, compared with conventional state-of-the-art distributed learning methods.
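The event-triggered idea above can be sketched as follows (the threshold and drift model are illustrative assumptions, not taken from the paper): a node broadcasts its model only when it has drifted beyond a threshold since its last transmission, so communication fires on a fraction of the steps instead of every step.

```python
import numpy as np

def should_broadcast(current, last_sent, threshold=0.1):
    """Send only if the model moved more than `threshold` since the last send."""
    return np.linalg.norm(current - last_sent) > threshold

model = np.zeros(3)
last_sent = model.copy()
sends = 0
for step in range(100):
    model = model + 0.03          # toy deterministic local-update drift
    if should_broadcast(model, last_sent):
        last_sent = model.copy()  # transmit and remember what was sent
        sends += 1

# With this drift and threshold, the node transmits on only half the steps.
print(f"{sends} broadcasts over 100 steps")
```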
arXiv Detail & Related papers (2026-02-22T10:13:43Z) - Federated Concept-Based Models: Interpretable models with distributed supervision [18.11830748487309]
Concept-based models (CMs) enhance interpretability in deep learning by grounding predictions in human-understandable concepts. Yet, concept annotations are expensive to obtain and rarely available at scale within a single data source. We propose Federated Concept-based Models (F-CMs), a new methodology for deploying CMs in evolving FL settings.
arXiv Detail & Related papers (2026-02-04T00:04:50Z) - Toward a Sustainable Federated Learning Ecosystem: A Practical Least Core Mechanism for Payoff Allocation [71.86087908416255]
We introduce a payoff allocation framework based on the least core (LC) concept. Unlike traditional methods, the LC prioritizes the cohesion of the federation by minimizing the maximum dissatisfaction. Case studies in federated intrusion detection demonstrate that our mechanism correctly identifies pivotal contributors and strategic alliances.
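The "maximum dissatisfaction" objective of the least core can be illustrated with a toy cooperative game (the coalition values and allocations below are made up for illustration): for an allocation x, each coalition S has excess e(S) = v(S) - sum of x over S, and the least core minimizes the largest excess subject to the allocation being efficient.

```python
from itertools import combinations

# Toy 3-player characteristic function: any pair is worth 60, all three 90.
v = {(0,): 0, (1,): 0, (2,): 0,
     (0, 1): 60, (0, 2): 60, (1, 2): 60, (0, 1, 2): 90}

def max_dissatisfaction(x):
    """Largest excess v(S) - x(S) over all proper, nonempty coalitions S."""
    players = range(len(x))
    return max(v[S] - sum(x[i] for i in S)
               for r in range(1, len(x))
               for S in combinations(players, r))

# Equal split is efficient (sums to v(N) = 90) and leaves every two-player
# coalition with excess 60 - 60 = 0, so the maximum dissatisfaction is 0.
print(max_dissatisfaction([30, 30, 30]))
```

An unbalanced allocation such as [50, 20, 20] leaves the coalition {1, 2} with excess 60 - 40 = 20, which is why the least core pushes toward the symmetric split in this game.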
arXiv Detail & Related papers (2026-02-03T11:10:50Z) - Proof-of-Data: A Consensus Protocol for Collaborative Intelligence [6.107950942979923]
We propose a blockchain-based Byzantine fault-tolerant federated learning framework based on a novel Proof-of-Data (PoD) consensus protocol.
PoD is able to enjoy the learning-efficiency and system-liveness benefits of societal-scale PoW-style learning.
To mitigate false reward claims by data forgery from Byzantine attacks, a privacy-aware data verification and contribution-based reward allocation mechanism is designed to complete the framework.
arXiv Detail & Related papers (2025-01-06T12:27:59Z) - Protocol Learning, Decentralized Frontier Risk and the No-Off Problem [56.74434512241989]
We identify a third paradigm - Protocol Learning - where models are trained across decentralized networks of incentivized participants.
This approach has the potential to aggregate orders of magnitude more computational resources than any single centralized entity.
It also introduces novel challenges: heterogeneous and unreliable nodes, malicious participants, the need for unextractable models to preserve incentives, and complex governance dynamics.
arXiv Detail & Related papers (2024-12-10T19:53:50Z) - Decentralized multi-agent reinforcement learning algorithm using a cluster-synchronized laser network [1.124958340749622]
We propose a photonic-based decision-making algorithm to address the competitive multi-armed bandit problem.
Our numerical simulations demonstrate that chaotic oscillations and cluster synchronization of optically coupled lasers, along with our proposed decentralized coupling adjustment, efficiently balance exploration and exploitation.
arXiv Detail & Related papers (2024-07-12T09:38:47Z) - Initialisation and Network Effects in Decentralised Federated Learning [1.5961625979922607]
Decentralised federated learning enables collaborative training of individual machine learning models on a distributed network of communicating devices.
This approach avoids central coordination, enhances data privacy and eliminates the risk of a single point of failure.
We propose a strategy for uncoordinated initialisation of the artificial neural networks based on the distribution of eigenvector centralities of the underlying communication network.
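One way to realize this (a sketch under our own assumptions; the paper's exact scaling rule is not reproduced here) is to compute each node's eigenvector centrality by power iteration on the communication graph's adjacency matrix, then let better-connected nodes use a smaller initial weight scale.

```python
import numpy as np

# Communication graph on 4 nodes: triangle 0-1-2 plus a pendant node 3 on node 2.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def eigenvector_centrality(A, iters=200):
    """Power iteration: converges to the leading (Perron) eigenvector of A."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

c = eigenvector_centrality(A)
# Hypothetical scaling rule: shrink the initialization std of central nodes.
init_std = 0.1 / (1.0 + c)
```

Node 2, which bridges the triangle and the pendant node, ends up with the highest centrality and therefore the smallest initialization scale under this rule.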
arXiv Detail & Related papers (2024-03-23T14:24:36Z) - Imitation Learning based Alternative Multi-Agent Proximal Policy
Optimization for Well-Formed Swarm-Oriented Pursuit Avoidance [15.498559530889839]
In this paper, we put forward a decentralized-learning-based Alternative Multi-Agent Proximal Policy Optimization (IA-MAPPO) algorithm to execute the pursuit-avoidance task in a well-formed swarm.
We utilize imitation learning to decentralize the formation controller, so as to reduce the communication overheads and enhance the scalability.
The simulation results validate the effectiveness of IA-MAPPO and extensive ablation experiments further show the performance comparable to a centralized solution with significant decrease in communication overheads.
arXiv Detail & Related papers (2023-11-06T06:58:16Z) - Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework.
We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases.
arXiv Detail & Related papers (2023-06-05T10:45:39Z) - When Decentralized Optimization Meets Federated Learning [41.58479981773202]
Federated learning is a new learning paradigm for extracting knowledge from distributed data.
Most existing federated learning approaches concentrate on the centralized setting, which is vulnerable to a single-point failure.
An alternative strategy for addressing this issue is the decentralized communication topology.
arXiv Detail & Related papers (2023-06-05T03:51:14Z) - Event-Triggered Decentralized Federated Learning over
Resource-Constrained Edge Devices [12.513477328344255]
Federated learning (FL) is a technique for distributed machine learning (ML).
In traditional FL algorithms, trained models at the edge are periodically sent to a central server for aggregation.
We develop a novel methodology for fully decentralized FL, where devices conduct model aggregation via cooperative consensus formation.
arXiv Detail & Related papers (2022-11-23T00:04:05Z) - Finite-Time Consensus Learning for Decentralized Optimization with
Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO), that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z) - Consensus Control for Decentralized Deep Learning [72.50487751271069]
Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters.
We show in theory that when the training consensus distance is lower than a critical quantity, decentralized training converges as fast as the centralized counterpart.
Our empirical insights allow the principled design of better decentralized training schemes that mitigate the performance drop.
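On our reading of the term (an interpretation, not a definition taken from the paper), the consensus distance measures how far the workers' local parameters have spread from the mean model, which is the quantity being compared against the critical threshold above.

```python
import numpy as np

def consensus_distance(params):
    """Average squared distance from each worker's parameters to the mean model.

    params: (n_workers, dim) array of local model parameters.
    """
    mean = params.mean(axis=0)
    return float(np.mean(np.sum((params - mean) ** 2, axis=1)))

# Three workers with two-dimensional parameter vectors.
params = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0]])
print(consensus_distance(params))  # positive: workers disagree

# Identical workers are in perfect consensus, so the distance is zero.
print(consensus_distance(np.ones((3, 2))))
```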
arXiv Detail & Related papers (2021-02-09T13:58:33Z) - Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z) - Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.