IP-FL: Incentivized and Personalized Federated Learning
- URL: http://arxiv.org/abs/2304.07514v4
- Date: Fri, 11 Oct 2024 17:44:56 GMT
- Title: IP-FL: Incentivized and Personalized Federated Learning
- Authors: Ahmad Faraz Khan, Xinran Wang, Qi Le, Zain ul Abdeen, Azal Ahmad Khan, Haider Ali, Ming Jin, Jie Ding, Ali R. Butt, Ali Anwar
- Abstract summary: We first propose to treat incentivization and personalization as interrelated challenges and solve them with an incentive mechanism that fosters personalized learning.
Our approach enhances the personalized model appeal for self-aware clients with high-quality data, leading to their active and consistent participation.
- Score: 13.13354915338396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing incentive solutions for traditional Federated Learning (FL) focus on individual contributions to a single global objective, neglecting the nuances of clustered personalization with multiple cluster-level models and non-monetary incentives such as personalized model appeal for clients. In this paper, we first propose to treat incentivization and personalization as interrelated challenges and solve them with an incentive mechanism that fosters personalized learning. Additionally, current methods depend on an aggregator for client clustering, which is limited by a lack of access to clients' confidential information due to privacy constraints, leading to inaccurate clustering. To overcome this, we propose direct client involvement, allowing clients to indicate their cluster membership preferences based on data distribution and incentive-driven feedback. Our approach enhances the personalized model appeal for self-aware clients with high-quality data, leading to their active and consistent participation. Our evaluation demonstrates significant improvements in test accuracy (8-45%), personalized model appeal (3-38%), and participation rates (31-100%) over existing FL models, including those addressing data heterogeneity and personalization.
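To make the client-driven clustering concrete, here is a minimal sketch (illustrative only; `evaluate` and `choose_cluster` are hypothetical names, not the authors' code) of how a client could indicate its cluster preference: it scores each cluster-level model on its private data and reports only the index of the best fit.
```python
import numpy as np

def evaluate(weights: np.ndarray, X: np.ndarray, y: np.ndarray) -> float:
    """Toy linear-classifier accuracy; stands in for any local evaluation."""
    preds = (X @ weights > 0).astype(int)
    return float((preds == y).mean())

def choose_cluster(cluster_models: list[np.ndarray], X: np.ndarray, y: np.ndarray) -> int:
    """Client-side step: prefer the cluster whose model fits the local data
    best; only this index, never the data, is reported back."""
    scores = [evaluate(w, X, y) for w in cluster_models]
    return int(np.argmax(scores))
```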
Related papers
- Interaction-Aware Gaussian Weighting for Clustered Federated Learning [58.92159838586751]
Federated Learning (FL) has emerged as a decentralized paradigm to train models while preserving privacy.
We propose a novel clustered FL method, FedGWC (Federated Gaussian Weighting Clustering), which groups clients based on their data distribution.
Our experiments on benchmark datasets show that FedGWC outperforms existing FL algorithms in cluster quality and classification accuracy.
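A hedged sketch of the general idea, assuming per-client summary statistics as input (FedGWC's actual interaction-aware statistics and weighting differ): pairwise distances are turned into a Gaussian (RBF) affinity, and clients are grouped from that affinity matrix.
```python
import numpy as np
from sklearn.cluster import SpectralClustering

def gaussian_affinity(stats: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """RBF (Gaussian) weighting of pairwise distances between client summaries."""
    d2 = ((stats[:, None, :] - stats[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def cluster_clients(stats: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Group clients from the Gaussian affinity matrix."""
    affinity = gaussian_affinity(stats)
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed", random_state=0)
    return model.fit_predict(affinity)
```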
arXiv Detail & Related papers (2025-02-05T16:33:36Z)
- Personalized Federated Knowledge Graph Embedding with Client-Wise Relation Graph [49.66272783945571]
We propose Personalized Federated knowledge graph Embedding with client-wise relation Graph (PFedEG).
PFedEG learns personalized supplementary knowledge for each client by amalgamating entity embedding from its neighboring clients.
We conduct extensive experiments on four benchmark datasets to evaluate our method against state-of-the-art models.
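As a rough illustration of the amalgamation step (simplified; neighbor selection and weighting in PFedEG are richer than this), a client could blend its own entity embeddings with the average of its neighbors':
```python
import numpy as np

def amalgamate(own: np.ndarray, neighbors: list[np.ndarray], alpha: float = 0.5) -> np.ndarray:
    """Blend a client's entity embeddings with the average of its neighbors'
    embeddings; alpha controls how much local knowledge is retained."""
    neighbor_avg = np.mean(neighbors, axis=0)
    return alpha * own + (1.0 - alpha) * neighbor_avg
```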
arXiv Detail & Related papers (2024-06-17T17:44:53Z)
- Personalized Federated Learning with Attention-based Client Selection [57.71009302168411]
We propose FedACS, a new PFL algorithm with an Attention-based Client Selection mechanism.
FedACS integrates an attention mechanism to enhance collaboration among clients with similar data distributions.
Experiments on CIFAR10 and FMNIST validate FedACS's superiority.
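A minimal sketch of attention-based client selection under simplifying assumptions (cosine similarity of client updates as the attention score; not FedACS's exact mechanism):
```python
import numpy as np

def attention_select(updates: np.ndarray, query: np.ndarray, k: int = 2) -> np.ndarray:
    """Score each client update against a query (e.g., the server's current
    direction) and keep the k clients with the highest attention weight."""
    sims = updates @ query / (np.linalg.norm(updates, axis=1) * np.linalg.norm(query) + 1e-12)
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()                 # softmax attention over clients
    return np.argsort(weights)[-k:]          # indices of the selected clients
```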
arXiv Detail & Related papers (2023-12-23T03:31:46Z)
- DCFL: Non-IID awareness Data Condensation aided Federated Learning [0.8158530638728501]
Federated learning is a decentralized learning paradigm wherein a central server trains a global model iteratively by utilizing clients who possess a certain amount of private datasets.
The challenge lies in the fact that client-side private data may not be independently and identically distributed (IID).
We propose DCFL, which divides clients into groups using the Centered Kernel Alignment (CKA) method and then applies non-IID-aware dataset condensation methods to complete each client's data.
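Linear CKA itself is a standard, well-defined similarity; a small sketch of computing it between two clients' feature matrices (the rest of the DCFL pipeline is omitted):
```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between feature matrices X, Y of shape (samples, features)."""
    X = X - X.mean(axis=0)                   # center each feature
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(hsic / norm)
```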
arXiv Detail & Related papers (2023-12-21T13:04:24Z)
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution by using a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route each input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
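A toy version of the gating step this describes, with placeholder experts and gate weights (not FedJETs' architecture):
```python
import numpy as np

def route(x: np.ndarray, gate_w: np.ndarray, experts: list, top_k: int = 2) -> np.ndarray:
    """Send input x to its top-k experts and mix their outputs by gate weight."""
    logits = gate_w @ x                      # one logit per expert
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                     # softmax gate
    top = np.argsort(probs)[-top_k:]         # most relevant expert(s)
    return sum(probs[i] * experts[i](x) for i in top) / probs[top].sum()
```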
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
- Personalized Privacy-Preserving Framework for Cross-Silo Federated Learning [0.0]
Federated learning (FL) is a promising decentralized deep learning (DL) framework that enables DL models to be trained collaboratively across clients without sharing private data.
In this paper, we propose a novel framework, namely Personalized Privacy-Preserving Federated Learning (PPPFL).
Our proposed framework outperforms multiple FL baselines on different datasets, including MNIST, Fashion-MNIST, CIFAR-10, and CIFAR-100.
arXiv Detail & Related papers (2023-02-22T07:24:08Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
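As a hedged illustration, one simple class-imbalance measure is the squared distance of the grouped label distribution from uniform; the sketch below greedily samples clients to minimize it, and leaves out the homomorphic-encryption step that Fed-CBS uses to compute its measure privately (the paper's exact measure may differ):
```python
import numpy as np

def class_imbalance(counts: np.ndarray) -> float:
    """Squared L2 distance of the grouped label distribution from uniform;
    0 means the selected clients' pooled data is perfectly class-balanced."""
    p = counts / counts.sum()
    return float(((p - 1.0 / len(p)) ** 2).sum())

def greedy_sample(client_counts: np.ndarray, k: int) -> list[int]:
    """Greedily pick k clients whose pooled per-class counts stay balanced."""
    chosen: list[int] = []
    total = np.zeros(client_counts.shape[1])
    for _ in range(k):
        candidates = [i for i in range(len(client_counts)) if i not in chosen]
        best = min(candidates, key=lambda i: class_imbalance(total + client_counts[i]))
        chosen.append(best)
        total = total + client_counts[best]
    return chosen
```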
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
Personalized federated learning (FL) enables clients to learn personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
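The per-client choice this enables can be sketched as a one-line rule (illustrative; `bonus` is a hypothetical stand-in for the incentive, not the paper's mechanism):
```python
def pick_model(global_acc: float, personal_acc: float, bonus: float = 0.0) -> str:
    """Both accuracies come from the client's local validation data; `bonus`
    is a hypothetical incentive nudging clients toward personalizing."""
    return "personalized" if personal_acc + bonus > global_acc else "global"
```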
arXiv Detail & Related papers (2022-08-12T09:51:20Z)
- To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning [22.3101738137465]
Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model.
In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model.
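The quantity being maximized can be illustrated simply (a toy definition, not IncFL's optimization): the fraction of clients whose validation loss under the global model is no worse than their best local alternative.
```python
import numpy as np

def incentivized_fraction(global_losses: np.ndarray, local_losses: np.ndarray) -> float:
    """Per-client validation losses under the global model vs. each client's
    best local alternative; clients with global <= local are incentivized."""
    return float((global_losses <= local_losses).mean())
```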
arXiv Detail & Related papers (2022-05-30T04:03:31Z)
- Self-Aware Personalized Federated Learning [32.97492968378679]
We develop a self-aware personalized federated learning (FL) method inspired by Bayesian hierarchical models.
Our method uses uncertainty-driven local training steps and an uncertainty-driven aggregation rule instead of conventional local fine-tuning and sample-size-based aggregation.
With experimental studies on synthetic data, Amazon Alexa audio data, and public datasets such as MNIST, FEMNIST, CIFAR10, and Sent140, we show that our proposed method can achieve significantly improved personalization performance.
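One way to picture uncertainty-driven aggregation (a simplification of the paper's Bayesian treatment): weight each client's update by inverse variance instead of sample size.
```python
import numpy as np

def uncertainty_weighted_aggregate(updates: np.ndarray, variances: np.ndarray) -> np.ndarray:
    """Aggregate client updates with inverse-variance weights, so that
    low-uncertainty clients count more than merely large ones."""
    w = 1.0 / (variances + 1e-12)
    return (w[:, None] * updates).sum(axis=0) / w.sum()
```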
arXiv Detail & Related papers (2022-04-17T19:02:25Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g., mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that treats client groups and individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model tailored to client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
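A sketch in the spirit of this first-order weighting (simplified from the paper): weight another client's model by how much it reduces my validation loss, normalized by parameter distance, clipping negative weights to zero.
```python
import numpy as np

def fomo_weights(my_params: np.ndarray, my_loss: float,
                 other_params: list[np.ndarray], other_losses: list[float]) -> np.ndarray:
    """other_losses[j] is client j's model evaluated on *my* validation set;
    helpful models (lower loss, nearby parameters) get the largest weight."""
    w = np.array([max(0.0, (my_loss - loss_j) / (np.linalg.norm(p_j - my_params) + 1e-12))
                  for p_j, loss_j in zip(other_params, other_losses)])
    return w / w.sum() if w.sum() > 0 else w
```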
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.