Energy Minimization for Participatory Federated Learning in IoT Analyzed via Game Theory
- URL: http://arxiv.org/abs/2503.21722v1
- Date: Thu, 27 Mar 2025 17:35:38 GMT
- Title: Energy Minimization for Participatory Federated Learning in IoT Analyzed via Game Theory
- Authors: Alessandro Buratto, Elia Guerra, Marco Miozzo, Paolo Dini, Leonardo Badia
- Abstract summary: We investigate the simultaneous implementation of participatory sensing and federated learning. A global objective of energy minimization is combined with the individual node's optimization of local expenditure. We present extensive evaluations of this technique, based on both a theoretical framework and experiments in a simulated network scenario.
- Score: 43.70799346027617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Internet of Things requires intelligent decision making in many scenarios. To this end, resources available at the individual nodes for sensing or computing, or both, can be leveraged. This results in approaches known as participatory sensing and federated learning, respectively. We investigate the simultaneous implementation of both, through a distributed approach based on empowering local nodes with game-theoretic decision making. A global objective of energy minimization is combined with the individual node's optimization of local expenditure for sensing and transmitting data over multiple learning rounds. We present extensive evaluations of this technique, based on both a theoretical framework and experiments in a simulated network scenario with real data. Such a distributed approach can reach a desired level of accuracy for federated learning without centralized supervision by the data collector. However, depending on the weight attributed to the local costs of the single node, it may also result in a significantly high Price of Anarchy (from 1.28 onwards). Thus, we argue for the need for incentive mechanisms, possibly based on the Age of Information of the single nodes.
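To make the Price of Anarchy (PoA) figure above concrete, below is a minimal Python sketch of a two-node participatory sensing game. It is illustrative only: the energy values, penalty function, and weight are hypothetical, not the paper's model. Each node selfishly minimizes its own cost (energy spent plus a weighted share of a global accuracy penalty), and the PoA compares the worst pure Nash equilibrium against the socially optimal profile.

```python
from itertools import product

# Toy participatory-sensing game (hypothetical numbers, not from the paper).
# Each IoT node chooses s_i in {0, 1}: participate in a learning round (1)
# or stay idle (0). Participating costs the node its sensing/transmission
# energy e_i; every node also bears a weighted share of a global accuracy
# penalty that shrinks as more nodes join.

ENERGY = [2.0, 2.5]   # per-node energy cost of participating (hypothetical)
PENALTY = 6.0         # scale of the global accuracy penalty (hypothetical)
WEIGHT = 1.0          # weight of the global penalty in each node's local cost

def node_cost(profile, i):
    """Local cost node i minimizes: own energy + weighted global penalty."""
    k = sum(profile)  # number of participating nodes
    return ENERGY[i] * profile[i] + WEIGHT * PENALTY / (1 + k)

def social_cost(profile):
    """Global objective: total energy spent + accuracy penalty."""
    k = sum(profile)
    return sum(ENERGY[i] * s for i, s in enumerate(profile)) + PENALTY / (1 + k)

def is_nash(profile):
    """Pure Nash equilibrium: no node can lower its cost by flipping its choice."""
    for i in range(len(profile)):
        flipped = list(profile)
        flipped[i] = 1 - flipped[i]
        if node_cost(tuple(flipped), i) < node_cost(profile, i):
            return False
    return True

profiles = list(product((0, 1), repeat=len(ENERGY)))
equilibria = [p for p in profiles if is_nash(p)]
optimum = min(social_cost(p) for p in profiles)
worst_ne = max(social_cost(p) for p in equilibria)

print("Nash equilibria:", equilibria)
print(f"Price of Anarchy = {worst_ne / optimum:.3f}")
```

With these hypothetical numbers the game has two pure equilibria, (1, 0) and (0, 1); the worst one has social cost 5.5 against an optimum of 5.0, giving a PoA of 1.1. Raising WEIGHT or the per-node energies skews local incentives further from the global objective and drives the PoA up, which is the effect the abstract quantifies.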
Related papers
- Robust Decentralized Learning with Local Updates and Gradient Tracking [16.46727164965154]
We consider decentralized learning as a network of communicating clients or nodes.
We propose a decentralized minimax optimization method that employs two key techniques: local updates and gradient tracking.
A minimal gradient-tracking sketch follows this entry.
arXiv Detail & Related papers (2024-05-02T03:03:34Z)
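As context for the local-updates-plus-gradient-tracking recipe above, here is a minimal Python sketch of plain decentralized gradient tracking on scalar quadratic losses over a ring of four nodes. The losses, topology, and step size are assumptions for illustration; the paper's minimax formulation is not reproduced.

```python
import numpy as np

# Minimal decentralized gradient-tracking sketch (generic technique, not the
# paper's minimax method): n nodes minimize sum_i f_i(x) with
# f_i(x) = 0.5 * (x - a_i)^2, whose global optimum is the mean of the a_i.
# Each node mixes with its ring neighbors and tracks the average gradient.

n = 4
a = np.array([1.0, 3.0, -2.0, 6.0])  # hypothetical local data targets

def grad(x):
    return x - a                      # per-node gradients, vectorized

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.1
x = np.zeros(n)   # local iterates
y = grad(x)       # gradient trackers, initialized to local gradients

for _ in range(200):
    x_new = W @ x - alpha * y             # consensus step + tracked descent
    y = W @ y + grad(x_new) - grad(x)     # track the network-average gradient
    x = x_new

print("local iterates:", np.round(x, 4))  # all close to a.mean() = 2.0
```

The tracker y lets each node descend along an estimate of the global gradient rather than its own biased local one, which is what makes the combination robust to heterogeneous data.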
- Impact of network topology on the performance of Decentralized Federated Learning [4.618221836001186]
Decentralized machine learning is gaining momentum, addressing infrastructure challenges and privacy concerns.
This study investigates the interplay between network structure and learning performance using three network topologies and six data distribution methods.
We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation.
arXiv Detail & Related papers (2024-02-28T11:13:53Z)
- Sparse Decentralized Federated Learning [35.32297764027417]
Decentralized Federated Learning (DFL) enables collaborative model training without a central server but faces challenges in efficiency, stability, and trustworthiness.
We introduce a sparsity constraint on the shared model, leading to Sparse DFL (SDFL), and propose a novel algorithm, CEPS.
Numerical experiments validate the effectiveness of the proposed algorithm in improving communication efficiency while maintaining a high level of trustworthiness.
A generic sparsification sketch follows this entry.
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
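The sparsity constraint in SDFL above motivates a generic illustration: the Python sketch below applies top-k sparsification to model updates before sharing them. The top-k rule and the numbers are assumptions for illustration, not the paper's SDFL constraint or its CEPS algorithm.

```python
import numpy as np

# Generic top-k sparsification sketch (an assumption for illustration).
# Each node shares only its k largest-magnitude model coordinates, cutting
# the communication cost per round from d values to k (value, index) pairs.

def top_k_sparsify(update: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of a model update, zero the rest."""
    sparse = np.zeros_like(update)
    idx = np.argsort(np.abs(update))[-k:]   # indices of the k largest entries
    sparse[idx] = update[idx]
    return sparse

rng = np.random.default_rng(0)
updates = [rng.normal(size=20) for _ in range(5)]    # hypothetical node updates
shared = [top_k_sparsify(u, k=4) for u in updates]   # each node sends 4 of 20 values
aggregate = np.mean(shared, axis=0)                  # simple sparse aggregation
print("nonzeros per shared update:", [int(np.count_nonzero(s)) for s in shared])
```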
- Tackling Computational Heterogeneity in FL: A Few Theoretical Insights [68.8204255655161]
We introduce and analyze a novel aggregation framework that allows for formalizing and tackling computational heterogeneity.
The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
arXiv Detail & Related papers (2023-07-12T16:28:21Z)
- Decentralized Multi-Task Stochastic Optimization With Compressed Communications [22.31884634659446]
The paper develops algorithms and obtains performance bounds for two different models of local information availability at the nodes.
We show that deviation from the global minimum value and violations of the constraints are upper-bounded by $\mathcal{O}(T^{-\frac{1}{2}})$ and $\mathcal{O}(T^{-\frac{1}{4}})$, respectively.
arXiv Detail & Related papers (2021-12-23T05:54:42Z)
- Centralizing State-Values in Dueling Networks for Multi-Robot Reinforcement Learning Mapless Navigation [87.85646257351212]
We study the problem of multi-robot mapless navigation in the popular Centralized Training and Decentralized Execution (CTDE) paradigm.
This problem is challenging when each robot considers its path without explicitly sharing observations with other robots.
We propose a novel architecture for CTDE that uses a centralized state-value network to compute a joint state-value.
arXiv Detail & Related papers (2021-12-16T16:47:00Z)
- Two-Bit Aggregation for Communication Efficient and Differentially Private Federated Learning [79.66767935077925]
In federated learning (FL), a machine learning model is trained on multiple nodes in a decentralized manner, while keeping the data local and not shared with other nodes.
The information sent from the nodes to the server may reveal some details about each node's local data, thus raising privacy concerns.
A novel two-bit aggregation algorithm is proposed with guaranteed differential privacy and reduced uplink communication overhead.
A generic quantization sketch follows this entry.
arXiv Detail & Related papers (2021-10-06T19:03:58Z)
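The two-bit aggregation entry above can be illustrated with a generic sketch: unbiased stochastic rounding of each update coordinate to one of four levels (two bits). The quantizer below is an assumption for illustration and omits the paper's differential-privacy mechanism.

```python
import numpy as np

# Generic two-bit stochastic quantization sketch (an illustration of the
# idea, not the paper's algorithm or its privacy noise). Each coordinate of
# a node's update is stochastically rounded to one of 4 levels (2 bits),
# which keeps the quantizer unbiased while shrinking uplink traffic.

def two_bit_quantize(v: np.ndarray, rng: np.random.Generator):
    lo, hi = v.min(), v.max()
    levels = np.linspace(lo, hi, 4)              # 4 levels -> 2 bits/coordinate
    step = levels[1] - levels[0]
    below = np.clip(((v - lo) // step).astype(int), 0, 2)
    p_up = (v - levels[below]) / step            # stochastic rounding: E[q] = v
    codes = below + (rng.random(v.shape) < p_up)
    return codes.astype(np.uint8), lo, step

def dequantize(codes, lo, step):
    return lo + codes * step

rng = np.random.default_rng(1)
update = rng.normal(size=10000)
codes, lo, step = two_bit_quantize(update, rng)
recovered = dequantize(codes, lo, step)
print("mean error:", abs((recovered - update).mean()))  # near 0: unbiased
```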
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Decentralized Local Stochastic Extra-Gradient for Variational Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains where the problem data are heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers fully decentralized settings.
We theoretically analyze the method's convergence rate in the strongly monotone, monotone, and non-monotone settings.
A minimal extragradient sketch follows this entry.
arXiv Detail & Related papers (2021-06-15T17:45:51Z)
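For the extra-gradient entry above, here is the single-machine core of the method on the bilinear saddle problem min_x max_y xy, a monotone variational inequality; the decentralized, heterogeneous-data setting of the paper is not reproduced. Plain gradient descent-ascent spirals outward on this problem, while the extrapolation step makes extragradient converge.

```python
import numpy as np

# Minimal extragradient sketch on the bilinear saddle problem
# min_x max_y  x*y  (a monotone VI with operator F(x, y) = (y, -x)).

def F(z):
    x, y = z
    return np.array([y, -x])     # VI operator for the saddle objective x*y

z = np.array([1.0, 1.0])         # start away from the solution (0, 0)
gamma = 0.5

for _ in range(100):
    z_half = z - gamma * F(z)    # extrapolation (look-ahead) step
    z = z - gamma * F(z_half)    # update using the look-ahead operator value

print("iterate:", z)             # converges toward (0, 0); plain gradient
                                 # descent-ascent would spiral outward here
```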
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Fast-Convergent Federated Learning with Adaptive Weighting [6.040848035935873]
Federated learning (FL) enables resource-constrained edge nodes to collaboratively learn a global model under the orchestration of a central server.
We propose the Federated Adaptive Weighting (FedAdp) algorithm, which aims to accelerate model convergence in the presence of nodes with non-IID datasets.
We show that FL training with FedAdp can reduce the number of communication rounds by up to 54.1% on the MNIST dataset and up to 45.4% on FashionMNIST.
A sketch of gradient-alignment-based weighting follows this entry.
arXiv Detail & Related papers (2020-12-01T17:35:05Z)
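To illustrate the adaptive-weighting idea in FedAdp above, the Python sketch below weights nodes by how well their gradients align with the average direction; the softmax-over-cosine mapping and temperature are assumptions, not the paper's exact rule.

```python
import numpy as np

# Sketch of gradient-alignment-based adaptive weighting in the spirit of
# FedAdp (the exact angle-to-weight mapping of the paper is not reproduced;
# the softmax over cosine alignment below is an assumption). Nodes whose
# updates point away from the global direction get down-weighted, which is
# the intuition for faster convergence under non-IID data.

def adaptive_weights(node_grads: list, temperature: float = 5.0):
    global_grad = np.mean(node_grads, axis=0)
    align = np.array([
        g @ global_grad / (np.linalg.norm(g) * np.linalg.norm(global_grad) + 1e-12)
        for g in node_grads
    ])                                   # cosine similarity to the global direction
    w = np.exp(temperature * align)      # softmax over alignment scores
    return w / w.sum()

rng = np.random.default_rng(2)
true_dir = rng.normal(size=50)
grads = [true_dir + rng.normal(scale=s, size=50) for s in (0.1, 0.5, 3.0)]
w = adaptive_weights(grads)
print("weights:", np.round(w, 3))        # the noisiest node typically gets least weight
aggregated = sum(wi * g for wi, g in zip(w, grads))
```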
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)