Harnessing the Power of Federated Learning in Federated Contextual Bandits
- URL: http://arxiv.org/abs/2312.16341v2
- Date: Mon, 16 Sep 2024 01:33:08 GMT
- Title: Harnessing the Power of Federated Learning in Federated Contextual Bandits
- Authors: Chengshuai Shi, Ruida Zhou, Kun Yang, Cong Shen
- Abstract summary: Federated contextual bandits (FCB) are a pivotal integration of FL and sequential decision-making.
FCB approaches have largely employed their tailored FL components, often deviating from the canonical FL framework.
In particular, a novel FCB design, termed FedIGW, is proposed to leverage a regression-based CB algorithm.
- Score: 20.835106310302876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has demonstrated great potential in revolutionizing distributed machine learning, and tremendous efforts have been made to extend it beyond the original focus on supervised learning. Among many directions, federated contextual bandits (FCB), a pivotal integration of FL and sequential decision-making, has garnered significant attention in recent years. Despite substantial progress, existing FCB approaches have largely employed their tailored FL components, often deviating from the canonical FL framework. Consequently, even renowned algorithms like FedAvg remain under-utilized in FCB, let alone other FL advancements. Motivated by this disconnection, this work takes one step towards building a tighter relationship between the canonical FL study and the investigations on FCB. In particular, a novel FCB design, termed FedIGW, is proposed to leverage a regression-based CB algorithm, i.e., inverse gap weighting. Compared with existing FCB approaches, the proposed FedIGW design can better harness the entire spectrum of FL innovations, which is concretely reflected as (1) flexible incorporation of (both existing and forthcoming) FL protocols; (2) modularized plug-in of FL analyses in performance guarantees; (3) seamless integration of FL appendages (such as personalization, robustness, and privacy). We substantiate these claims through rigorous theoretical analyses and empirical evaluations.
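The inverse-gap-weighting (IGW) rule at the heart of FedIGW is simple enough to sketch. Below is a minimal, illustrative version in Python; the function name and the exploration parameter gamma are expository assumptions, not the paper's exact implementation. In FedIGW, the reward regressor supplying the estimates would itself be trained with a canonical FL protocol such as FedAvg.

```python
import numpy as np

def igw_probabilities(est_rewards, gamma):
    """Inverse gap weighting (IGW): turn a regression oracle's reward
    estimates for K actions into an exploration distribution.

    Each non-greedy action gets probability 1 / (K + gamma * gap), where
    gap is its estimated shortfall versus the best action; the leftover
    mass goes to the greedy action. Larger gamma means less exploration.
    """
    est_rewards = np.asarray(est_rewards, dtype=float)
    K = est_rewards.shape[0]
    best = int(np.argmax(est_rewards))
    gaps = est_rewards[best] - est_rewards   # >= 0, zero for the best arm
    probs = 1.0 / (K + gamma * gaps)
    probs[best] = 0.0
    probs[best] = 1.0 - probs.sum()          # remaining mass on greedy arm
    return probs

# One round of action selection for a single client, using reward
# estimates produced by the (federated) regression model:
rng = np.random.default_rng(0)
p = igw_probabilities([0.1, 0.7, 0.4], gamma=50.0)
action = rng.choice(len(p), p=p)
```

The modularity claimed in the abstract is visible here: the regression oracle is the only FL-dependent piece, so any FL protocol, and its analysis, can be plugged in without touching the action-selection rule.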
Related papers
- Free-Rider and Conflict Aware Collaboration Formation for Cross-Silo Federated Learning [32.35705737668307]
Federated learning (FL) is a machine learning paradigm that allows multiple FL participants to collaborate on training models without sharing private data.
We propose an optimal FL collaboration formation strategy -- FedEgoists -- which ensures that an FL participant (FL-PT) can benefit from FL if and only if it benefits the FL ecosystem.
We theoretically prove that the formed FL-PT coalitions are optimal, in the sense that no coalition of FL-PTs can jointly improve the utility of any of its members.
arXiv Detail & Related papers (2024-10-25T06:13:26Z)
- Towards Understanding Generalization and Stability Gaps between Centralized and Decentralized Federated Learning [57.35402286842029]
We show that centralized federated learning (CFL) always generalizes better than decentralized federated learning (DFL).
We also conduct experiments on several common FL setups to validate that our theoretical analysis is consistent with experimental phenomena across several general and practical scenarios.
arXiv Detail & Related papers (2023-10-05T11:09:42Z)
- Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization [84.42306265220274]
Federated learning (FL) is a distributed paradigm that coordinates a massive number of local clients to collaboratively train a global model.
Previous works have observed that FL suffers from the "client-drift" problem, which is caused by the inconsistent optima across local clients.
To alleviate the negative impact of client drift and explore its substance in FL, we first design an efficient FL algorithm, FedInit.
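As a rough, hypothetical illustration only: one plausible reading of "stage-wise relaxed initialization" is that each client starts a round not exactly at the global model but nudged away from its own previous local model. The update form and coefficient beta below are assumptions, not verified against the paper.

```python
import numpy as np

def relaxed_init(global_w, prev_local_w, beta=0.1):
    """Hypothetical relaxed initialization (assumed form): begin local
    training from the global model, moved away from the client's last
    local model. beta = 0 recovers plain FedAvg-style initialization."""
    return global_w + beta * (global_w - prev_local_w)
```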
arXiv Detail & Related papers (2023-06-09T06:55:15Z)
- Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
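For context, a deep equilibrium model replaces a stack of layers with a single layer iterated to a fixed point z* = f(z*, x). A minimal sketch of the forward pass, using a toy contraction with illustrative names:

```python
import numpy as np

def deq_forward(f, x, z0, tol=1e-6, max_iter=100):
    """Forward pass of a DEQ layer: iterate z <- f(z, x) until the
    fixed point z* = f(z*, x) is reached (up to tolerance)."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy layer: f(z, x) = tanh(W z + x) is a contraction for small ||W||.
W = 0.5 * np.eye(4)
z_star = deq_forward(lambda z, x: np.tanh(W @ z + x),
                     x=np.ones(4), z0=np.zeros(4))
```

One often-cited appeal for FL is that clients share the weights of a single weight-tied layer rather than a deep stack, which can shrink what must be communicated.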
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- Bayesian Federated Learning: A Survey [54.40136267717288]
Federated learning (FL) demonstrates its advantages in integrating distributed infrastructure, communication, computing and learning in a privacy-preserving manner.
The robustness and capabilities of existing FL methods are challenged by limited and dynamic data and conditions.
Bayesian federated learning (BFL) has emerged as a promising approach to address these issues.
arXiv Detail & Related papers (2023-04-26T03:41:17Z)
- Making Batch Normalization Great in Federated Deep Learning [32.81480654534734]
Batch Normalization (BN) is widely used in centralized deep learning to improve convergence and generalization.
Prior work has observed that training with BN could hinder performance in FL and suggested replacing it with Group Normalization (GN).
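The suggested swap is mechanical in most frameworks. A minimal PyTorch sketch; the helper name and group count are illustrative:

```python
import torch.nn as nn

def replace_bn_with_gn(module, num_groups=8):
    """Recursively swap BatchNorm2d for GroupNorm. GN normalizes over
    channel groups within each sample, so it does not depend on the
    (non-i.i.d.) batch statistics that trouble BN in FL. Note that
    num_groups must divide each layer's channel count."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name,
                    nn.GroupNorm(num_groups, child.num_features))
        else:
            replace_bn_with_gn(child, num_groups)
    return module
```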
arXiv Detail & Related papers (2023-03-12T01:12:43Z)
- FLAGS Framework for Comparative Analysis of Federated Learning Algorithms [0.0]
This work consolidates the Federated Learning landscape and offers an objective analysis of the major FL algorithms.
To enable a uniform assessment, a multi-FL framework named FLAGS: Federated Learning AlGorithms Simulation has been developed.
Our experiments indicate that fully decentralized FL algorithms achieve comparable accuracy under multiple operating conditions.
arXiv Detail & Related papers (2022-12-14T12:08:30Z)
- Analysis of Error Feedback in Federated Non-Convex Optimization with Biased Compression [37.6593006747285]
In federated learning (FL) systems, the communication cost between the clients and the central server can be high.
In this paper, we propose a technique to remedy the downsides of biased compression.
Under partial participation, we identify an extra slow-down factor due to a so-called "stale error accumulation" effect.
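Error feedback, the mechanism analyzed here, keeps the compression residual in client memory and re-injects it before the next compression step, so the compressor's bias does not accumulate unchecked. A minimal sketch with a biased top-k compressor; the names are illustrative:

```python
import numpy as np

def top_k(v, k):
    """Biased compressor: keep only the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class ErrorFeedbackClient:
    def __init__(self, dim, k):
        self.memory = np.zeros(dim)   # accumulated compression error
        self.k = k

    def compress(self, grad):
        corrected = grad + self.memory   # re-inject past residual
        msg = top_k(corrected, self.k)   # what actually gets sent
        self.memory = corrected - msg    # store the new residual
        return msg
```

Under partial participation, a client's memory goes stale between the rounds in which it is sampled, which is the "stale error accumulation" slow-down described above.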
arXiv Detail & Related papers (2022-11-25T18:49:53Z)
- UniFed: All-In-One Federated Learning Platform to Unify Open-Source Frameworks [53.20176108643942]
We present UniFed, the first unified platform for standardizing open-source Federated Learning (FL) frameworks.
UniFed streamlines the end-to-end workflow for distributed experimentation and deployment, encompassing 11 popular open-source FL frameworks.
We evaluate and compare 11 popular FL frameworks from the perspectives of functionality, privacy protection, and performance.
arXiv Detail & Related papers (2022-07-21T05:03:04Z)
- Towards Verifiable Federated Learning [15.758657927386263]
Federated learning (FL) is an emerging paradigm of collaborative machine learning that preserves user privacy while building powerful models.
Due to the nature of open participation by self-interested entities, FL needs to guard against potential misbehaviours by legitimate FL participants.
Verifiable federated learning has become an emerging research topic that has attracted significant interest from academia and industry alike.
arXiv Detail & Related papers (2022-02-15T09:52:25Z)
- FedComm: Federated Learning as a Medium for Covert Communication [56.376997104843355]
Federated Learning (FL) is a solution to mitigate the privacy implications related to the adoption of deep learning.
This paper thoroughly investigates the communication capabilities of an FL scheme.
We introduce FedComm, a novel multi-system covert-communication technique.
arXiv Detail & Related papers (2022-01-21T17:05:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.