Price of Stability in Quality-Aware Federated Learning
- URL: http://arxiv.org/abs/2310.08790v1
- Date: Fri, 13 Oct 2023 00:25:21 GMT
- Title: Price of Stability in Quality-Aware Federated Learning
- Authors: Yizhou Yan, Xinyu Tang, Chao Huang, Ming Tang
- Abstract summary: Federated Learning (FL) is a distributed machine learning scheme that enables clients to train a shared global model without exchanging local data.
We model the clients' interactions as a novel label denoising game and characterize its equilibrium.
We prove that the equilibrium outcome always leads to a lower global model accuracy than the socially optimal solution.
- Score: 11.59995920901346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a distributed machine learning scheme that enables
clients to train a shared global model without exchanging local data. The
presence of label noise can severely degrade the FL performance, and some
existing studies have focused on algorithm design for label denoising. However,
they overlooked an important issue: self-interested clients with heterogeneous
valuations of the FL performance may choose not to apply costly label denoising
strategies. To fill this gap, we model the clients'
interactions as a novel label denoising game and characterize its equilibrium.
We also analyze the price of stability, which quantifies the difference in the
system performance (e.g., global model accuracy, social welfare) between the
equilibrium outcome and the socially optimal solution. We prove that the
equilibrium outcome always leads to a lower global model accuracy than the
socially optimal solution does. We further design an efficient algorithm to
compute the socially optimal solution. Numerical experiments on the MNIST dataset
show that the price of stability increases as the clients' data become noisier,
calling for an effective incentive mechanism.
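For readers unfamiliar with the metric, the standard game-theoretic definition of the price of stability compares the best equilibrium with the social optimum. A minimal statement, assuming the paper's welfare-maximization setting (the paper may also report absolute gaps in accuracy or welfare rather than a ratio):

```latex
% Price of stability (PoS) for a welfare-maximization game.
% W(s): social welfare of strategy profile s; NE: the set of
% equilibria of the label denoising game.
\[
  \mathrm{PoS} \;=\; \frac{\max_{s \in \mathrm{NE}} W(s)}{\max_{s} W(s)} \;\leq\; 1,
\]
% With this convention, a PoS farther below 1 means that even the
% best equilibrium sacrifices more welfare relative to the socially
% optimal solution; some authors use the reciprocal, so PoS >= 1.
```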
Related papers
- Addressing Data Quality Decompensation in Federated Learning via Dynamic Client Selection [7.603415982653868]
Shapley-Bid Reputation Optimized Federated Learning (SBRO-FL) is a unified framework integrating dynamic bidding, reputation modeling, and cost-aware selection. A reputation system, inspired by prospect theory, captures historical performance while penalizing inconsistency. Experiments on FashionMNIST, EMNIST, CIFAR-10, and SVHN datasets show that SBRO-FL improves accuracy, convergence speed, and robustness, even in adversarial and low-bid interference scenarios.
arXiv Detail & Related papers (2025-05-27T14:06:51Z)
- Robust Federated Learning with Confidence-Weighted Filtering and GAN-Based Completion under Noisy and Incomplete Data [0.0]
Federated learning (FL) presents an effective solution for collaborative model training while maintaining data privacy across decentralized client datasets. This study proposes a federated learning methodology that systematically addresses data quality issues, including noise, class imbalance, and missing labels. Our results indicate that this method effectively mitigates common data quality challenges, providing a robust, scalable, and privacy-compliant solution.
arXiv Detail & Related papers (2025-05-14T18:49:18Z)
- HFedCKD: Toward Robust Heterogeneous Federated Learning via Data-free Knowledge Distillation and Two-way Contrast [10.652998357266934]
We propose HFedCKD, a system-heterogeneous federated learning method based on data-free knowledge distillation and two-way contrast.
HFedCKD effectively alleviates the knowledge offset caused by a low participation rate under data-free knowledge distillation and improves the performance and stability of the model.
We conduct extensive experiments on image and IoT datasets to comprehensively evaluate and verify the generalization and robustness of the proposed HFedCKD framework.
arXiv Detail & Related papers (2025-03-09T08:32:57Z)
- Adaptive Client Selection in Federated Learning: A Network Anomaly Detection Use Case [0.30723404270319693]
This paper introduces a client selection framework for Federated Learning (FL) that incorporates differential privacy and fault tolerance.
Results demonstrate up to a 7% improvement in accuracy and a 25% reduction in training time compared to the FedL2P approach.
arXiv Detail & Related papers (2025-01-25T02:50:46Z)
- Feasible Learning [78.6167929413604]
We introduce Feasible Learning (FL), a sample-centric learning paradigm where models are trained by solving a feasibility problem that bounds the loss for each training sample.
Our empirical analysis, spanning image classification, age regression, and preference optimization in large language models, demonstrates that models trained via FL can learn from data while displaying improved tail behavior compared to ERM, with only a marginal impact on average performance.
arXiv Detail & Related papers (2025-01-24T20:39:38Z)
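As a rough illustration of the per-sample feasibility idea in the Feasible Learning entry above: instead of minimizing the average loss as in ERM, one searches for parameters under which every training sample's loss stays below a bound. The NumPy sketch below descends on the total constraint violation; the penalty form and the bound `b` are illustrative assumptions, not the paper's solver.

```python
import numpy as np

# Toy "feasible learning" sketch: fit a linear model so that EVERY
# sample's squared loss is <= b, by descending on the total violation
# sum(max(0, loss_i - b)). The penalty form and b are assumptions for
# illustration, not the paper's actual algorithm.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.05 * rng.normal(size=200)

b, lr, w = 0.05, 0.01, np.zeros(5)   # per-sample loss bound (hypothetical)
for _ in range(5000):
    residual = X @ w - y             # shape (200,)
    losses = residual ** 2           # per-sample squared loss
    violated = losses > b            # samples outside the bound
    if not violated.any():
        break                        # a feasible point has been found
    # gradient of the sum over violated samples of (loss_i - b)
    w -= lr * 2 * X[violated].T @ residual[violated] / violated.sum()

print("max per-sample loss:", losses.max(), "feasible:", not violated.any())
```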
- Incentive-Compatible Federated Learning with Stackelberg Game Modeling [11.863770989724959]
We introduce FLamma, a novel Federated Learning framework based on an adaptive gamma-based Stackelberg game.
Our approach allows the server to act as the leader, dynamically adjusting a decay factor while clients, acting as followers, optimally select their number of local epochs to maximize their utility.
Over time, the server incrementally balances client influence, initially rewarding higher-contributing clients and gradually leveling their impact, driving the system toward a Stackelberg Equilibrium.
arXiv Detail & Related papers (2025-01-05T21:04:41Z)
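A toy sketch of the leader-follower structure in the FLamma entry above: the server (leader) announces a gamma-scaled reward that decays over rounds, and each client (follower) best-responds with its number of local epochs. The utility form (linear reward, quadratic cost) and all parameter values are assumptions for illustration, not FLamma's actual model.

```python
# Toy Stackelberg sketch: the server announces a decay factor gamma_t;
# each client picks local epochs e to maximize an assumed utility
#   u_i(e) = gamma_t * v_i * e - c_i * e**2
# (reward linear in effort, cost quadratic; our assumption, not the paper's).

def best_response(gamma_t: float, v_i: float, c_i: float, e_max: int = 20) -> int:
    """Follower's best response: the epoch count maximizing its utility."""
    return max(range(1, e_max + 1),
               key=lambda e: gamma_t * v_i * e - c_i * e ** 2)

clients = [(1.0, 0.05), (0.6, 0.02), (0.3, 0.01)]  # hypothetical (v_i, c_i)
gamma = 1.0
for t in range(5):                     # leader decays gamma each round
    epochs = [best_response(gamma, v, c) for v, c in clients]
    print(f"round {t}: gamma={gamma:.2f}, epochs={epochs}")
    gamma *= 0.8                       # assumed decay schedule
```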
- IMFL-AIGC: Incentive Mechanism Design for Federated Learning Empowered by Artificial Intelligence Generated Content [15.620004060097155]
Federated learning (FL) has emerged as a promising paradigm that enables clients to collaboratively train a shared global model without uploading their local data.
We propose a data quality-aware incentive mechanism to encourage clients' participation.
Our proposed mechanism achieves the highest training accuracy and reduces the server's cost by up to 53.34% on real-world datasets.
arXiv Detail & Related papers (2024-06-12T07:47:22Z)
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
- Federated Learning under Heterogeneous and Correlated Client Availability [10.05687757555923]
This paper presents the first convergence analysis for a FedAvg-like FL algorithm under heterogeneous and correlated client availability.
We propose CA-Fed, a new FL algorithm that tries to balance the conflicting goals of maximizing convergence speed and minimizing model bias.
Our experimental results show that CA-Fed achieves higher time-average accuracy and a lower standard deviation than state-of-the-art AdaFed and F3AST.
arXiv Detail & Related papers (2023-01-11T18:38:48Z)
- Quantifying the Impact of Label Noise on Federated Learning [7.531486350989069]
Federated Learning (FL) is a distributed machine learning paradigm where clients collaboratively train a model using their local (human-generated) datasets.
This paper provides a quantitative study on the impact of label noise on FL.
Our empirical results show that the global model accuracy linearly decreases as the noise level increases.
arXiv Detail & Related papers (2022-11-15T00:40:55Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
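To make the class-imbalance measure mentioned in the Fed-CBS entry above concrete, the sketch below scores a candidate client group by how far the grouped label distribution is from uniform (a quadratic distance; Fed-CBS's exact measure and its homomorphic-encryption evaluation are not reproduced here).

```python
import numpy as np

def class_imbalance(counts_per_client: list[np.ndarray]) -> float:
    """Quadratic distance of the grouped label distribution from uniform.

    counts_per_client: per-client label histograms over C classes.
    Returns 0 for a perfectly class-balanced group; larger is worse.
    This is a plaintext stand-in; Fed-CBS derives its measure under
    homomorphic encryption so the server never sees raw histograms.
    """
    grouped = np.sum(counts_per_client, axis=0).astype(float)
    p = grouped / grouped.sum()          # grouped label distribution
    C = len(p)
    return float(np.sum((p - 1.0 / C) ** 2))

# Example: pick the more class-balanced of two candidate groups.
g1 = [np.array([90, 5, 5]), np.array([80, 10, 10])]   # similarly skewed
g2 = [np.array([90, 5, 5]), np.array([10, 45, 45])]   # complementary
print(class_imbalance(g1), class_imbalance(g2))        # g2 scores lower
```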
- Labeling Chaos to Learning Harmony: Federated Learning with Noisy Labels [3.4620497416430456]
Federated Learning (FL) is a distributed machine learning paradigm that enables learning models from decentralized private datasets.
We propose FedLN, a framework to deal with label noise across different FL training stages.
Our evaluation on various publicly available vision and audio datasets demonstrates a 22% improvement on average compared to other existing methods for a label noise level of 60%.
arXiv Detail & Related papers (2022-08-19T14:47:40Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distributions are non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
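The round structure described in the BLADE-FL entry above can be summarized in a short sketch: each client trains locally, broadcasts its model, competes to generate a block from the received models, and aggregates the block's models for the next round. The function names, the mining stub, and the simple averaging below are assumptions for illustration, not the papers' specification.

```python
import random

def average(models: list[list[float]]) -> list[float]:
    """FedAvg-style coordinate-wise mean (illustrative aggregation)."""
    return [sum(ws) / len(ws) for ws in zip(*models)]

def blade_fl_round(local_models: list[list[float]]) -> list[float]:
    """Schematic BLADE-FL round over the broadcast models.

    1. Each client broadcasts its locally trained model (here, all of
       local_models are assumed to have been received by everyone).
    2. Clients compete to generate a block; we stub the competition
       as picking a random winner who packs the models into a block.
    3. Every client aggregates the models recorded in the block and
       uses the result to start its next round of local training.
    """
    winner = random.randrange(len(local_models))   # mining stub
    block = {"miner": winner, "models": local_models}
    return average(block["models"])                # next round's start

print(blade_fl_round([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]))  # [3.0, 4.0]
```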
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.