A study on performance limitations in Federated Learning
- URL: http://arxiv.org/abs/2501.03477v1
- Date: Tue, 07 Jan 2025 02:35:41 GMT
- Title: A study on performance limitations in Federated Learning
- Authors: Karthik Mohan
- Abstract summary: This project focuses on the communication bottleneck and data non-IID-ness, and their effect on model performance.
Google introduced Federated Learning in 2016.
- Score: 0.05439020425819
- License:
- Abstract: Increasing privacy concerns around unrestricted access to data have led to the development of a novel machine learning paradigm called Federated Learning (FL). FL borrows many ideas from distributed machine learning; however, the challenges associated with federated learning make it an interesting engineering problem, since the models are trained on edge devices. It was introduced by Google in 2016, and since then active research has been carried out in different areas within FL, such as federated optimization algorithms, model and update compression, differential privacy, robustness and attacks, federated GANs, and privacy-preserving personalization. There are many open challenges in the development of such federated machine learning systems; this project focuses on the communication bottleneck and data non-IID-ness and their effect on model performance. These issues are characterized on a baseline model, model performance is evaluated, and approaches to overcome them are discussed.
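To make the two issues concrete, here is a minimal, self-contained sketch of FedAvg-style training on synthetically non-IID clients, with a rough count of the per-round communication cost. It is an illustrative toy (linear model, full-batch local steps, feature-shift skew), not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, N_CLIENTS = 10, 5
w_true = rng.normal(size=DIM)

def make_non_iid_clients(n_clients=N_CLIENTS, samples=100):
    """Synthesize non-IID clients via feature-distribution skew:
    each client draws its inputs around a different mean."""
    clients = []
    for c in range(n_clients):
        X = rng.normal(loc=c, scale=1.0, size=(samples, DIM))
        y = X @ w_true + rng.normal(scale=0.1, size=samples)
        clients.append((X, y))
    return clients

def local_sgd(w, X, y, lr=0.001, epochs=5):
    """A few epochs of full-batch gradient descent on one client (MSE loss)."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One communication round: broadcast the global model, train locally,
    then average the returned models weighted by local sample count."""
    updates = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

clients = make_non_iid_clients()
w = np.zeros(DIM)
for _ in range(50):
    w = fedavg_round(w, clients)

# The communication bottleneck: every round ships the full model
# down to and back up from each participating client.
print("floats exchanged per round:", 2 * N_CLIENTS * DIM)
```

Because each client's feature distribution differs, the local optima pull the averaged model in conflicting directions round after round; that tension is the non-IID effect, and the final line shows why moving full models every round becomes the communication bottleneck.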
Related papers
- Ten Challenging Problems in Federated Foundation Models [55.343738234307544]
Federated Foundation Models (FedFMs) represent a distributed learning paradigm that fuses the general competences of foundation models with the privacy-preserving capabilities of federated learning.
This paper provides a comprehensive summary of the ten challenging problems inherent in FedFMs, encompassing foundational theory, utilization of private data, continual learning, unlearning, Non-IID and graph data, bidirectional knowledge transfer, incentive mechanism design, game mechanism design, model watermarking, and efficiency.
arXiv Detail & Related papers (2025-02-14T04:01:15Z)
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey [2.769238399659845]
Federated learning (FL) offers a compelling framework for training large language models (LLMs).
We focus on machine unlearning, a crucial aspect for complying with privacy regulations like the Right to be Forgotten.
We explore various strategies that enable effective unlearning, such as perturbation techniques, model decomposition, and incremental learning.
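As one concrete illustration of the perturbation family mentioned above, here is a minimal sketch of a generic recipe (gradient ascent on the data to forget, then repair fine-tuning on retained data) for a linear regressor. This is a common pattern from the unlearning literature, not the survey's specific algorithm; all names and step counts are illustrative.

```python
import numpy as np

def unlearn_by_perturbation(w, forget_X, forget_y, retain_X, retain_y,
                            ascent_lr=0.05, repair_lr=0.01, steps=10):
    """Generic perturbation-style unlearning (illustrative assumption):
    ascend the loss on the forget set to erase its influence, then
    recover utility with descent steps on the retained data."""
    for _ in range(steps):
        g_forget = 2 * forget_X.T @ (forget_X @ w - forget_y) / len(forget_y)
        w = w + ascent_lr * g_forget      # push the model away from forgotten data
    for _ in range(steps):
        g_retain = 2 * retain_X.T @ (retain_X @ w - retain_y) / len(retain_y)
        w = w - repair_lr * g_retain      # fine-tune on what is kept
    return w
```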
arXiv Detail & Related papers (2024-06-14T08:40:58Z)
- A Comprehensive Study on Model Initialization Techniques Ensuring Efficient Federated Learning [0.0]
Federated learning (FL) has emerged as a promising paradigm for training machine learning models in a distributed and privacy-preserving manner.
The choice of model initialization method plays a crucial role in the performance, convergence speed, communication efficiency, and privacy guarantees of federated learning systems.
Our research meticulously compares, categorizes, and delineates the merits and demerits of each technique, examining their applicability across diverse FL scenarios.
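To make the role of initialization concrete, the sketch below shows two schemes a server might broadcast before federated training begins; the function name and the Xavier/Glorot variant are illustrative choices, not techniques singled out by the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def init_model(dim_in, dim_out, scheme="xavier"):
    """Two common initialization schemes the server might broadcast.
    FedAvg-style averaging implicitly assumes every client starts
    from this same tensor."""
    if scheme == "zeros":
        return np.zeros((dim_in, dim_out))
    if scheme == "xavier":
        limit = np.sqrt(6.0 / (dim_in + dim_out))   # Glorot/Xavier uniform
        return rng.uniform(-limit, limit, size=(dim_in, dim_out))
    raise ValueError(f"unknown scheme: {scheme}")

w0 = init_model(784, 10)                        # server initializes once...
client_models = [w0.copy() for _ in range(5)]   # ...and broadcasts to clients
```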
arXiv Detail & Related papers (2023-10-31T23:26:58Z)
- FRAMU: Attention-based Machine Unlearning using Federated Reinforcement Learning [16.86560475992975]
We introduce Attention-based Machine Unlearning using Federated Reinforcement Learning (FRAMU).
FRAMU incorporates adaptive learning mechanisms, privacy preservation techniques, and optimization strategies.
Our experiments, conducted on both single-modality and multi-modality datasets, revealed that FRAMU significantly outperformed baseline models.
arXiv Detail & Related papers (2023-09-19T03:13:17Z)
- Deep Equilibrium Models Meet Federated Learning [71.57324258813675]
This study explores the problem of Federated Learning (FL) by utilizing the Deep Equilibrium (DEQ) models instead of conventional deep learning networks.
We claim that incorporating DEQ models into the federated learning framework naturally addresses several open problems in FL.
To the best of our knowledge, this study is the first to establish a connection between DEQ models and federated learning.
arXiv Detail & Related papers (2023-05-29T22:51:40Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings [0.8454446648908585]
The classical machine learning paradigm requires the aggregation of user data in a central location.
Centralization of data poses risks, including a heightened risk of internal and external security incidents.
Federated learning with differential privacy is designed to avoid the server-side centralization pitfall.
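As a rough illustration of the clip-and-noise recipe behind differentially private federated averaging (in the spirit of DP-FedAvg; the paper's exact mechanism and noise calibration may differ), consider:

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_aggregate(client_deltas, clip_norm=1.0, noise_multiplier=1.1):
    """Bound each client's influence by clipping its model delta to
    `clip_norm`, average, then add Gaussian noise scaled to the clip
    bound so no single client's data dominates the released update."""
    clipped = [d * min(1.0, clip_norm / max(np.linalg.norm(d), 1e-12))
               for d in client_deltas]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_deltas)
    return avg + rng.normal(scale=sigma, size=avg.shape)

# Usage: deltas are (local_model - global_model) vectors from each client.
deltas = [rng.normal(size=10) for _ in range(8)]
noisy_update = dp_aggregate(deltas)
```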
arXiv Detail & Related papers (2022-06-02T00:30:04Z)
- Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z)
- Concept drift detection and adaptation for federated and continual learning [55.41644538483948]
Smart devices can collect vast amounts of data from their environment.
This data is suitable for training machine learning models, which can significantly improve their behavior.
In this work, we present a new method, called Concept-Drift-Aware Federated Averaging.
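The Concept-Drift-Aware Federated Averaging algorithm itself is not spelled out in this summary; as a generic stand-in, a client can flag drift when its recent loss departs sharply from its running baseline, for example:

```python
import numpy as np

def drift_detected(losses, window=10, threshold=3.0):
    """Generic loss-based drift test (an assumption, not the paper's
    exact rule): flag drift when the newest local loss exceeds the mean
    of the previous `window` losses by `threshold` standard deviations."""
    if len(losses) <= window:
        return False               # not enough history yet
    recent = np.asarray(losses[-window - 1:-1], dtype=float)
    return losses[-1] > recent.mean() + threshold * recent.std()
```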
arXiv Detail & Related papers (2021-05-27T17:01:58Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- Federated Edge Learning: Design Issues and Challenges [1.916348196696894]
Federated Learning (FL) is a distributed machine learning technique, where each device contributes to the learning model by independently computing the gradient based on its local training data.
However, implementing FL at the network edge is challenging due to system and data heterogeneity and resource constraints.
This article proposes a general framework for data-aware scheduling as a guideline for future research directions.
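The per-device gradient computation described above corresponds to the classic FedSGD pattern, in which each round moves a single gradient rather than a locally trained model (compare the multi-epoch FedAvg sketch earlier). A minimal sketch under the same toy linear-model assumptions:

```python
import numpy as np

def fedsgd_round(w, clients, lr=0.05):
    """One FedSGD round: every device computes one gradient on its own
    data, and the server applies the sample-weighted average gradient."""
    grads = [2 * X.T @ (X @ w - y) / len(y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return w - lr * np.average(grads, axis=0, weights=sizes)
```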
arXiv Detail & Related papers (2020-08-31T19:56:36Z)