Federated Learning Playground
- URL: http://arxiv.org/abs/2602.19489v1
- Date: Mon, 23 Feb 2026 04:14:40 GMT
- Title: Federated Learning Playground
- Authors: Bryan Guanrong Shan, Alysa Ziying Tan, Han Yu
- Abstract summary: We present Federated Learning Playground, an interactive browser-based platform that teaches core Federated Learning (FL) concepts. Users can experiment with heterogeneous client data distributions and aggregation algorithms directly in the browser without coding or system setup. The playground serves as an easy-to-use educational tool, lowering the entry barrier for newcomers to distributed AI.
- Score: 13.518363925108867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Federated Learning Playground, an interactive browser-based platform, inspired by and extending TensorFlow Playground, that teaches core Federated Learning (FL) concepts. Users can experiment with heterogeneous client data distributions, model hyperparameters, and aggregation algorithms directly in the browser without coding or system setup, and observe their effects on client and global models through real-time visualizations, gaining intuition for challenges such as non-IID data, local overfitting, and scalability. The playground serves as an easy-to-use educational tool, lowering the entry barrier for newcomers to distributed AI while also offering a sandbox for rapidly prototyping and comparing FL methods. By democratizing exploration of FL, it promotes broader understanding and adoption of this important paradigm.
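To make the training loop concrete, below is a minimal sketch of the kind of simulation such a playground runs: FedAvg over three clients whose label distributions are skewed to different degrees. The toy task, model, and hyperparameters are all illustrative assumptions, not the playground's actual implementation.

```python
# Minimal FedAvg simulation over non-IID clients (NumPy only; illustrative).
import numpy as np

rng = np.random.default_rng(0)

def make_client(label_skew):
    """Toy binary task; label_skew in [0, 1] controls how non-IID the client is."""
    n = 200
    y = (rng.random(n) < label_skew).astype(float)
    X = np.c_[y + 0.5 * rng.standard_normal(n), rng.standard_normal(n)]
    return X, y

def local_train(w, X, y, lr=0.1, epochs=5):
    """A few full-batch gradient steps of logistic regression on one client."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

clients = [make_client(s) for s in (0.1, 0.5, 0.9)]  # skewed label distributions
w_global = np.zeros(2)
for rnd in range(20):  # federated rounds
    local_ws = [local_train(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(local_ws, axis=0, weights=sizes)  # FedAvg: size-weighted mean
print("global model after FedAvg:", w_global)
```

Pushing the skews to extremes (e.g., 0.01 vs. 0.99) and increasing the local epochs is a quick way to reproduce the non-IID drift and local overfitting the abstract mentions.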
Related papers
- Dynamic Participation in Federated Learning: Benchmarks and a Knowledge Pool Plugin [10.912739346462525]
Federated learning (FL) enables clients to collaboratively train a shared model in a distributed manner. Most existing FL research assumes consistent client participation, overlooking the practical scenario of dynamic participation. We present the first open-source framework explicitly designed for benchmarking FL models under dynamic client participation.
arXiv Detail & Related papers (2025-11-20T16:36:50Z)
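As a rough illustration of the scenario this benchmark targets (not of its knowledge pool plugin), the sketch below aggregates only the clients that happen to be online each round; the availability schedule and the stand-in local update are assumptions.

```python
# Dynamic participation: a time-varying subset of clients trains each round.
import numpy as np

rng = np.random.default_rng(1)
n_clients, dim = 10, 4
optima = [rng.standard_normal(dim) for _ in range(n_clients)]   # per-client targets
client_models = [np.zeros(dim) for _ in range(n_clients)]
w_global = np.zeros(dim)

for rnd in range(5):
    p_available = 0.9 - 0.1 * rnd          # availability drifts across rounds
    participants = [i for i in range(n_clients) if rng.random() < p_available]
    if not participants:
        continue                            # nobody online: keep the old global model
    for i in participants:                  # stand-in for a local training step
        client_models[i] = w_global + 0.5 * (optima[i] - w_global)
    w_global = np.mean([client_models[i] for i in participants], axis=0)
    print(f"round {rnd}: {len(participants)} participants")
```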
- Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models [7.810284483002312]
Federated prompt learning brings the robust representation learning ability of CLIP-like Vision-Language Models (VLMs) to federated learning through prompt learning. Current federated prompt learning methods are habitually restricted to the traditional FL paradigm, where participating clients are generally only allowed to download a single globally aggregated model from the server. We propose Personalized Federated Mixture of Adaptive Prompts (pFedMoAP), a novel FL framework that personalizes the prompt learning process through the lens of Mixture of Experts (MoE).
arXiv Detail & Related papers (2024-10-14T03:05:12Z)
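A hedged sketch of the Mixture-of-Experts intuition behind personalized prompts: a client treats its own prompt plus downloaded peer prompts as experts and mixes them with a softmax gate. The shapes and the gating rule are illustrative assumptions, not pFedMoAP's exact mechanism.

```python
# Mixing local and peer prompt "experts" with a softmax gate (illustrative).
import numpy as np

rng = np.random.default_rng(2)
prompt_len, emb_dim = 4, 8
local_prompt = rng.standard_normal((prompt_len, emb_dim))
peer_prompts = [rng.standard_normal((prompt_len, emb_dim)) for _ in range(3)]
experts = [local_prompt] + peer_prompts

image_feat = rng.standard_normal(emb_dim)   # query computed from local data
scores = np.array([e.mean(axis=0) @ image_feat for e in experts])
gate = np.exp(scores - scores.max())
gate /= gate.sum()                          # softmax over experts

mixed_prompt = sum(g * e for g, e in zip(gate, experts))  # personalized prompt
print("gate weights:", np.round(gate, 3))
```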
- GAI-Enabled Explainable Personalized Federated Semi-Supervised Learning [29.931169585178818]
Federated learning (FL) is a widely used distributed algorithm for mobile users (MUs) to train artificial intelligence (AI) models.
We propose an explainable personalized FL framework called XPFL. In particular, in local training, we utilize a generative AI (GAI) model to learn from large amounts of unlabeled data.
In global aggregation, we obtain the new local model by fusing the local and global FL models in specific proportions.
Finally, simulation results validate the effectiveness of the proposed XPFL framework.
arXiv Detail & Related papers (2024-10-11T08:58:05Z)
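The fusion step in the XPFL summary above ("fusing the local and global FL models in specific proportions") amounts to interpolating parameters. The sketch below assumes a fixed mixing ratio alpha as a stand-in for whatever proportion rule the paper actually derives.

```python
# Personalization by interpolating local and global parameters (illustrative).
import numpy as np

def fuse(w_local, w_global, alpha=0.7):
    """alpha keeps local knowledge; (1 - alpha) pulls toward the global model."""
    return alpha * w_local + (1 - alpha) * w_global

w_local = np.array([1.0, -2.0, 0.5])
w_global = np.array([0.2, -1.0, 1.5])
print(fuse(w_local, w_global))   # -> [ 0.76 -1.7   0.8 ]
```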
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FMs), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
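A back-of-the-envelope sketch of why PEFT matters in this setting: if clients train and transmit only LoRA-style low-rank adapters instead of the full foundation-model weights, per-round communication shrinks by orders of magnitude. The matrix sizes and rank below are assumptions for illustration.

```python
# Communicating low-rank adapters instead of full weights (LoRA-style sketch).
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, rank = 1024, 1024, 8
W_frozen = rng.standard_normal((d_out, d_in))  # stand-in for pretrained FM weights
A = rng.standard_normal((rank, d_in)) * 0.01   # trainable, sent to the server
B = np.zeros((d_out, rank))                    # trainable, sent to the server

def adapted_forward(x):
    return W_frozen @ x + B @ (A @ x)          # LoRA update: W + B A

_ = adapted_forward(np.ones(d_in))             # same interface as the frozen model
adapter, full = A.size + B.size, W_frozen.size
print(f"communicated params: {adapter} vs full: {full} ({100 * adapter / full:.2f}%)")
```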
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
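A minimal sketch of the messenger idea, assuming the exchanged object is a small tunable prompt tensor rather than model weights; the update function is a random placeholder for each participant's private tuning step.

```python
# Circulating a tunable soft prompt among participants (illustrative).
import numpy as np

rng = np.random.default_rng(4)
prompt = rng.standard_normal((4, 16)) * 0.01   # the shared "messenger"

def local_prompt_update(prompt, lr=0.05):
    """Placeholder for tuning the prompt on private data; a real participant
    would take gradient steps here against its own model and dataset."""
    return prompt - lr * rng.standard_normal(prompt.shape)

for participant in range(3):                   # prompt travels between parties
    prompt = local_prompt_update(prompt)
print("floats exchanged per hop:", prompt.size)   # 64, vs. millions of weights
```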
- Handling Data Heterogeneity via Architectural Design for Federated Visual Recognition [16.50490537786593]
We study 19 visual recognition models from five different architectural families on four challenging FL datasets.
Our findings emphasize the importance of architectural design for computer vision tasks in practical scenarios.
arXiv Detail & Related papers (2023-10-23T17:59:16Z)
- Vertical Federated Learning: A Structured Literature Review [0.0]
Federated learning (FL) has emerged as a promising distributed learning paradigm with an added advantage of data privacy.
In this paper, we present a structured literature review discussing the state-of-the-art approaches in VFL.
arXiv Detail & Related papers (2022-12-01T16:16:41Z)
- Federated Learning with Server Learning: Enhancing Performance for Non-IID Data [5.070289965695956]
Federated Learning (FL) has emerged as a means of distributed learning using local data stored at clients with a coordinating server.
Recent studies showed that FL can suffer from poor performance and slower convergence when training data at clients are not independent and identically distributed.
Here we consider a new complementary approach to mitigating this performance degradation by allowing the server to perform auxiliary learning from a small dataset.
arXiv Detail & Related papers (2022-10-06T00:27:16Z)
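A minimal sketch of the complementary server-side step described above: after averaging client updates, the server refines the model on a small auxiliary dataset it holds. The least-squares objective and toy data are assumptions.

```python
# Server-side auxiliary learning after aggregation (illustrative).
import numpy as np

rng = np.random.default_rng(5)
w_avg = rng.standard_normal(3)                 # aggregated (e.g., FedAvg) model
X_s = rng.standard_normal((16, 3))             # small dataset held by the server
y_s = X_s @ np.array([1.0, -1.0, 0.5])         # assumed ground-truth mapping

w = w_avg.copy()
for _ in range(50):                            # a few server-side gradient steps
    w -= 0.05 * X_s.T @ (X_s @ w - y_s) / len(y_s)
print("server-refined model:", np.round(w, 2))
```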
- Vertical Semi-Federated Learning for Efficient Online Advertising [50.18284051956359]
Semi-VFL (Vertical Semi-Federated Learning) is proposed to make VFL practical for industrial applications.
We build an inference-efficient single-party student model applicable to the whole sample space.
New representation distillation methods are designed to extract cross-party feature correlations for both the overlapped and non-overlapped data.
arXiv Detail & Related papers (2022-09-30T17:59:27Z)
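A hedged sketch of the distillation idea behind Semi-VFL: on overlapped samples, a student that sees only party A's features is trained to match a frozen cross-party teacher's representation. Dimensions and the linear models are illustrative assumptions.

```python
# Distilling a cross-party VFL teacher into a single-party student (sketch).
import numpy as np

rng = np.random.default_rng(6)
n, dA, dB, h = 64, 5, 4, 3
XA = rng.standard_normal((n, dA))              # party A's features
XB = rng.standard_normal((n, dB))              # party B's features (overlapped rows)
T = rng.standard_normal((dA + dB, h))          # frozen federated teacher
teacher_repr = np.hstack([XA, XB]) @ T         # needs both parties at inference

S = np.zeros((dA, h))                          # student: party A features only
for _ in range(300):                           # minimize ||XA S - teacher_repr||^2
    S -= 0.01 * XA.T @ (XA @ S - teacher_repr) / n
print("distillation MSE:", round(float(np.mean((XA @ S - teacher_repr) ** 2)), 3))
```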
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
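A hedged sketch of data-free distillation in this spirit: the server draws pseudo-inputs (here from a fixed random sampler rather than FedFTG's trained generator) and nudges the global model toward the outputs of the client ensemble.

```python
# Data-free knowledge distillation into the global model (illustrative).
import numpy as np

rng = np.random.default_rng(7)
dim, n_clients = 4, 3
client_ws = [rng.standard_normal(dim) for _ in range(n_clients)]  # toy client models
w_global = np.mean(client_ws, axis=0)          # FedAvg starting point

for _ in range(200):
    z = rng.standard_normal((32, dim))         # pseudo-inputs; no real data touched
    teacher = np.mean([np.tanh(z @ w) for w in client_ws], axis=0)
    student = np.tanh(z @ w_global)
    # gradient of 0.5 * mean (student - teacher)^2 w.r.t. w_global
    grad = z.T @ ((student - teacher) * (1 - student ** 2)) / len(z)
    w_global -= 0.1 * grad
print("fine-tuned global model:", np.round(w_global, 3))
```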
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
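For readers new to the hierarchical setting, here is a minimal sketch of two-level HFL aggregation (users to edge servers, edge servers to the cloud). The fixed cluster assignment is an assumption; MACFL's point is precisely that user mobility changes it between rounds.

```python
# Two-level hierarchical FL aggregation (illustrative).
import numpy as np

rng = np.random.default_rng(8)
dim = 3
clusters = {                                   # edge server -> its users' models
    "edge_0": [rng.standard_normal(dim) for _ in range(4)],
    "edge_1": [rng.standard_normal(dim) for _ in range(2)],
}

edge_models = {e: np.mean(users, axis=0) for e, users in clusters.items()}
sizes = np.array([len(u) for u in clusters.values()], dtype=float)
w_cloud = np.average(list(edge_models.values()), axis=0, weights=sizes)
print("cloud model:", np.round(w_cloud, 3))
```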
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) has emerged as a popular distributed learning scheme that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending it to FL users poses significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-IID users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
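A hedged sketch of the batch-normalization idea: a user that ran adversarial training shares its BN mean and variance, and a non-AT user normalizes its features with them. The feature tensors are random stand-ins; the paper's actual design of these statistics is more involved.

```python
# Propagating robustness via shared batch-norm statistics (illustrative).
import numpy as np

rng = np.random.default_rng(9)
feats_at = rng.standard_normal((256, 16)) * 2.0 + 1.0      # AT user's activations
bn_mean = feats_at.mean(axis=0)                            # shared BN statistics
bn_var = feats_at.var(axis=0)

feats_other = rng.standard_normal((64, 16)) * 3.0 - 0.5    # non-AT user's activations
normalized = (feats_other - bn_mean) / np.sqrt(bn_var + 1e-5)  # adopt AT BN stats
print("re-normalized mean/std:", np.round(normalized.mean(), 2), np.round(normalized.std(), 2))
```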
This list is automatically generated from the titles and abstracts of the papers on this site.