Free-Rider Games for Federated Learning with Selfish Clients in NextG
Wireless Networks
- URL: http://arxiv.org/abs/2212.11194v1
- Date: Wed, 21 Dec 2022 17:10:55 GMT
- Title: Free-Rider Games for Federated Learning with Selfish Clients in NextG
Wireless Networks
- Authors: Yalin E. Sagduyu
- Abstract summary: This paper presents a game theoretic framework for participation and free-riding in federated learning (FL).
FL is used by clients to support spectrum sensing for NextG communications.
Free-riding behavior may potentially decrease the global accuracy due to lack of contribution to global model learning.
- Score: 1.1726528038065764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a game theoretic framework for participation and
free-riding in federated learning (FL), and determines the Nash equilibrium
strategies when FL is executed over wireless links. To support spectrum sensing
for NextG communications, FL is used by clients, namely spectrum sensors with
limited training datasets and computation resources, to train a wireless signal
classifier while preserving privacy. In FL, a client may be free-riding, i.e.,
it does not participate in FL model updates, if the computation and
transmission cost for FL participation is high, and receives the global model
(learned by other clients) without incurring a cost. However, the free-riding
behavior may potentially decrease the global accuracy due to lack of
contribution to global model learning. This tradeoff leads to a non-cooperative
game where each client aims to individually maximize its utility as the
difference between the global model accuracy and the cost of FL participation.
The Nash equilibrium strategies are derived for free-riding probabilities such
that no client can unilaterally increase its utility given that the strategies
of its opponents remain the same. The free-riding probability increases with the
FL participation cost and the number of clients, and a significant optimality
gap exists in Nash equilibrium with respect to the joint optimization for all
clients. The optimality gap increases with the number of clients and the
maximum gap is evaluated as a function of the cost. These results quantify the
impact of free-riding on the resilience of FL in NextG networks and indicate
operational modes for FL participation.
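As a rough illustration of the game described in the abstract, the Python sketch below computes the symmetric mixed-strategy Nash equilibrium from the indifference condition between participating and free-riding, and compares it against a jointly optimal participation probability to expose the optimality gap. The accuracy curve accuracy(k), the cost COST, and the client count N are invented for illustration and are not the paper's model.

```python
"""Toy sketch of the participate/free-ride game from the abstract.

All numbers (accuracy curve, cost, client count) are assumptions made up
for illustration, not values from the paper.
"""
from math import comb

N = 10          # number of clients (assumed)
COST = 0.05     # per-round FL participation cost (assumed)

def accuracy(k: int) -> float:
    """Assumed global-model accuracy with k participating clients
    (concave, diminishing returns)."""
    return 0.9 * k / (k + 2)

def binom_pmf(m: int, q: float, k: int) -> float:
    return comb(m, k) * q**k * (1 - q)**(m - k)

def marginal_gain(q: float) -> float:
    """Expected accuracy gain from one client participating, when each of
    the other N-1 clients participates independently with probability q."""
    return sum(binom_pmf(N - 1, q, k) * (accuracy(k + 1) - accuracy(k))
               for k in range(N))

def nash_participation() -> float:
    """Symmetric mixed equilibrium: the participation probability at which
    a client is indifferent between participating and free-riding."""
    if marginal_gain(1.0) >= COST:   # worthwhile even if everyone participates
        return 1.0
    if marginal_gain(0.0) <= COST:   # not worthwhile even when alone
        return 0.0
    lo, hi = 0.0, 1.0                # marginal_gain is decreasing in q
    for _ in range(60):              # bisection on the indifference condition
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if marginal_gain(mid) > COST else (lo, mid)
    return (lo + hi) / 2

def welfare(q: float) -> float:
    """Expected per-client utility when every client participates w.p. q."""
    exp_acc = sum(binom_pmf(N, q, k) * accuracy(k) for k in range(N + 1))
    return exp_acc - q * COST

q_ne = nash_participation()
q_opt = max((i / 1000 for i in range(1001)), key=welfare)  # grid-searched joint optimum
print(f"free-riding prob. at Nash equilibrium: {1 - q_ne:.3f}")
print(f"optimality gap: {welfare(q_opt) - welfare(q_ne):.4f}")
```

Consistent with the abstract's findings, raising COST or N in this toy model pushes the equilibrium free-riding probability up and tends to widen the gap to the joint optimum.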
Related papers
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- Federated Learning as a Network Effects Game [32.264180198812745]
Federated Learning (FL) aims to foster collaboration among a population of clients to improve the accuracy of machine learning without directly sharing local data.
In practice, clients may not benefit from joining FL, especially in light of potential costs related to issues such as privacy and computation.
We are the first to model clients' behaviors in FL as a network effects game, where each client's benefit depends on other clients who also join the network.
arXiv Detail & Related papers (2023-02-16T19:10:12Z)
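A minimal sketch of the network-effects idea in the entry above: each client joins only if the benefit of membership, which grows with the number of joiners, covers its private cost. The benefit curve and the per-client costs are assumptions made up for illustration, not the cited paper's model.

```python
"""Toy sketch of FL participation as a network-effects game: a client's
benefit from joining grows with how many clients join. All numbers are
invented for illustration."""

def benefit(m: int) -> float:
    """Assumed benefit of membership when m clients have joined."""
    return 1.0 - 0.7 ** m

# Per-client joining costs (privacy, computation), sorted ascending (assumed).
costs = [0.1, 0.2, 0.3, 0.55, 0.75, 0.9, 0.95, 0.99]

# Best-response dynamics from an empty network: the cheapest client not yet
# in joins whenever the enlarged network would cover its cost. With an
# increasing benefit curve this converges to a stable membership size.
m = 0
while m < len(costs) and benefit(m + 1) >= costs[m]:
    m += 1

print(f"{m} of {len(costs)} clients join at the stable equilibrium")
if m < len(costs):
    # the next-cheapest outsider would pay more than the enlarged network yields
    print(f"benefit at size {m + 1} = {benefit(m + 1):.2f} < cost {costs[m]:.2f}")
```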
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution for training a global machine learning model.
FL suffers performance degradation when the client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a privacy-preserving machine learning paradigm that realizes collaborative model training by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients because the task is trained privately by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
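To make the incentive-design difficulty concrete, here is a minimal contract-theoretic sketch in the spirit of the entry above: the publisher offers a menu of (effort, reward) pairs so that clients self-select the contract intended for their private cost type. All types, costs, and menu values are invented; the cited paper's mechanism is more elaborate.

```python
"""Toy sketch of a contract-theoretic FL incentive menu. The types, costs,
and contracts below are invented for illustration only."""

# Client types: per-unit training cost (the low type is more efficient). Assumed.
types = {"low_cost": 0.5, "high_cost": 1.0}

# Contract menu offered by the task publisher: (training effort, reward),
# one item intended for each type. Values chosen so self-selection holds.
menu = {"low_cost": (4.0, 3.4), "high_cost": (2.0, 2.2)}

def client_utility(theta: float, contract: tuple[float, float]) -> float:
    effort, reward = contract
    return reward - theta * effort   # reward minus private training cost

# Individual rationality: each type prefers its own contract to opting out.
# Incentive compatibility: each type prefers its own contract to mimicking
# the other type's. Together these make clients self-select truthfully.
for own, theta in types.items():
    u_own = client_utility(theta, menu[own])
    assert u_own >= 0, f"{own}: IR violated"
    for other in types:
        if other != own:
            assert u_own >= client_utility(theta, menu[other]), f"{own}: IC violated"
    print(f"{own} type accepts its contract, utility {u_own:.2f}")
```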
- Enabling Long-Term Cooperation in Cross-Silo Federated Learning: A Repeated Game Perspective [16.91343945299973]
Cross-silo federated learning (FL) is a distributed learning approach where clients train a global model cooperatively while keeping their local data private.
We model the long-term selfish participation behaviors of clients as an infinitely repeated game.
We derive a cooperative strategy for clients that minimizes the number of free riders while increasing the amount of local data used for model training.
arXiv Detail & Related papers (2021-06-22T14:27:30Z)
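A textbook sketch of why infinite repetition can deter free-riding, as in the entry above: under a grim-trigger strategy, free-riding once forfeits all future cooperation, so contributing is an equilibrium whenever the discount factor is large enough. The stage-game payoffs below are assumed values, not the paper's.

```python
"""Toy sketch of sustaining cooperation in an infinitely repeated FL game.
Stage-game payoffs are invented; the cited paper's game is richer."""

# Per-round payoffs for one client (assumed prisoner's-dilemma structure):
R = 1.0   # contribute while others contribute (good global model, cost paid)
T = 1.4   # free-ride while others contribute (good model, no cost)
P = 0.2   # everyone free-rides (poor global model)

# Under grim trigger (contribute until someone free-rides, then free-ride
# forever), a one-shot deviation pays T + delta*P/(1-delta) versus
# R/(1-delta) for sustained cooperation, so cooperation is an equilibrium
# iff delta >= (T - R) / (T - P).
delta_min = (T - R) / (T - P)
print(f"cooperation sustainable for discount factors >= {delta_min:.3f}")

for delta in (0.2, 0.5, 0.8):
    coop = R / (1 - delta)
    deviate = T + delta * P / (1 - delta)
    verdict = "cooperate" if coop >= deviate else "free-ride"
    print(f"delta={delta}: cooperate={coop:.2f}, deviate={deviate:.2f} -> {verdict}")
```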
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose BLADE-FL, a novel framework that integrates blockchain into federated learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
- Convergence Time Optimization for Federated Learning over Wireless Networks [160.82696473996566]
A wireless network is considered in which wireless users transmit their local FL models (trained using their locally collected data) to a base station (BS).
The BS, acting as a central controller, generates a global FL model using the received local FL models and broadcasts it back to all users.
Due to the limited number of resource blocks (RBs) in a wireless network, only a subset of users can be selected to transmit their local FL model parameters to the BS.
Since each user has unique training data samples, the BS prefers to include all local user FL models to generate a converged global FL model.
arXiv Detail & Related papers (2020-01-22T01:55:12Z)
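As a minimal illustration of the selection constraint described in the entry above, the sketch below picks a subset of users to fill the available resource blocks and aggregates only their models, FedAvg-style. Selecting by local dataset size is a simple stand-in heuristic, not the paper's scheduling policy, and all values are synthetic.

```python
"""Toy sketch of resource-block-limited client selection plus weighted model
averaging at the base station. Values are synthetic; selection by data size
is an illustrative heuristic, not the cited paper's policy."""
import random

random.seed(0)
NUM_RBS = 3        # uplink resource blocks per round (assumed)
users = [{"id": i,
          "samples": random.randint(50, 500),            # local dataset size
          "model": [random.random() for _ in range(4)]}  # toy local model weights
         for i in range(8)]

# Only NUM_RBS users can transmit per round; favor those with more data.
selected = sorted(users, key=lambda u: u["samples"], reverse=True)[:NUM_RBS]

# FedAvg-style aggregation over the selected users, weighted by sample count.
total = sum(u["samples"] for u in selected)
global_model = [
    sum(u["samples"] * u["model"][j] for u in selected) / total
    for j in range(4)
]

print("selected users:", [u["id"] for u in selected])
print("global model:", [round(w, 3) for w in global_model])
```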