Flow-FL: Data-Driven Federated Learning for Spatio-Temporal Predictions
in Multi-Robot Systems
- URL: http://arxiv.org/abs/2010.08595v1
- Date: Fri, 16 Oct 2020 19:09:57 GMT
- Title: Flow-FL: Data-Driven Federated Learning for Spatio-Temporal Predictions
in Multi-Robot Systems
- Authors: Nathalie Majcherczyk, Nishan Srishankar and Carlo Pinciroli
- Abstract summary: We show how the Federated Learning framework enables learning collectively from distributed data in connected robot teams.
This framework typically works with clients collecting data locally, updating neural network weights of their model, and sending updates to a server for aggregation into a global model.
- Score: 16.887485428725043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we show how the Federated Learning (FL) framework enables
learning collectively from distributed data in connected robot teams. This
framework typically works with clients collecting data locally, updating neural
network weights of their model, and sending updates to a server for aggregation
into a global model. We explore the design space of FL by comparing two
variants of this concept. The first variant follows the traditional FL approach
in which a server aggregates the local models. In the second variant, which we
call Flow-FL, the aggregation process is serverless thanks to the use of a
gossip-based shared data structure. In both variants, we use a data-driven
mechanism to synchronize the learning process in which robots contribute model
updates when they collect sufficient data. We validate our approach with an
agent trajectory forecasting problem in a multi-agent setting. Using a
centralized implementation as a baseline, we study the effects of staggered
online data collection, and variations in data flow, number of participating
robots, and time delays introduced by the decentralization of the framework in
a multi-robot setting.
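
To make the mechanism described in the abstract concrete, here is a minimal sketch of the two aggregation variants and of the data-driven trigger. Everything in it (the `Robot` class, `DATA_THRESHOLD`, the random perturbation standing in for local training, and a plain dict standing in for the gossip-based shared data structure) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

DATA_THRESHOLD = 64  # data-driven trigger: contribute after this many samples


class Robot:
    def __init__(self, dim):
        self.weights = np.zeros(dim)  # local copy of the model parameters
        self.buffer = []              # locally collected samples

    def collect(self, sample):
        self.buffer.append(sample)

    def ready(self):
        # Data-driven synchronization: a robot contributes a model update
        # only once it has gathered sufficient local data.
        return len(self.buffer) >= DATA_THRESHOLD

    def local_update(self, lr=0.1):
        # Stand-in for local training (e.g., SGD on a trajectory forecasting
        # loss); here a random perturbation plays the role of the gradient.
        grad = np.random.randn(*self.weights.shape)
        self.weights -= lr * grad
        self.buffer.clear()
        return self.weights.copy()


# Variant 1 (traditional FL): a central server averages the contributed models.
def server_aggregate(updates):
    return np.mean(updates, axis=0)


# Variant 2 (Flow-FL-style, serverless): each robot averages whatever entries
# of a gossip-shared table (robot id -> latest local model) have reached it.
def gossip_merge(shared_table):
    return np.mean(list(shared_table.values()), axis=0)


# Staggered online data collection: robots contribute as data flows in.
robots = [Robot(dim=8) for _ in range(4)]
shared_table = {}
for t in range(200):
    for i, robot in enumerate(robots):
        robot.collect(t)  # stand-in for an observed trajectory sample
        if robot.ready():
            shared_table[i] = robot.local_update()
    if shared_table:
        global_model = gossip_merge(shared_table)  # or server_aggregate(...)
```

In the traditional variant only `server_aggregate` runs, on a central server; in the Flow-FL-style variant each robot calls `gossip_merge` on whichever table entries gossip has propagated to it, so no server is needed.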
Related papers
- Tackling Data Heterogeneity in Federated Time Series Forecasting [61.021413959988216]
Time series forecasting plays a critical role in various real-world applications, including energy consumption prediction, disease transmission monitoring, and weather forecasting.
Most existing methods rely on a centralized training paradigm, in which large amounts of data are collected from distributed devices and sent to a central cloud server.
We propose a novel framework, Fed-TREND, to address data heterogeneity by generating informative synthetic data as auxiliary knowledge carriers.
arXiv Detail & Related papers (2024-11-24T04:56:45Z)
- Federated Learning with MMD-based Early Stopping for Adaptive GNSS Interference Classification [4.674584508653125]
Federated learning (FL) enables multiple devices to collaboratively train a global model while maintaining data on local servers.
We propose an FL approach using few-shot learning and aggregation of the model weights on a global server.
An exemplary application of FL is orchestrating machine learning models along highways for interference classification based on snapshots from global navigation satellite system (GNSS) receivers.
arXiv Detail & Related papers (2024-10-21T06:43:04Z)
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss (a minimal sketch contrasting the FL and SL exchange patterns follows this list).
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FedSiam-DA: Dual-aggregated Federated Learning via Siamese Network under Non-IID Data [21.95009868875851]
While federated learning can address the data-island problem, it remains challenging to train on heterogeneous data in real applications.
We propose FedSiam-DA, a novel dual-aggregated contrastive federated learning approach.
arXiv Detail & Related papers (2022-11-17T09:05:25Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyze its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models, and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Ensemble Distillation for Robust Model Fusion in Federated Learning [72.61259487233214]
Federated Learning (FL) is a machine learning setting where many devices collaboratively train a machine learning model.
In most of the current training schemes the central model is refined by averaging the parameters of the server model and the updated parameters from the client side.
We propose ensemble distillation for model fusion, i.e., training the central classifier on the outputs of the client models over unlabeled data (see the distillation sketch after this list).
arXiv Detail & Related papers (2020-06-12T14:49:47Z)
- Information-Theoretic Bounds on the Generalization Error and Privacy Leakage in Federated Learning [96.38757904624208]
Machine learning algorithms on mobile networks can be grouped into three different categories.
The main objective of this work is to provide an information-theoretic framework for all of the aforementioned learning paradigms.
arXiv Detail & Related papers (2020-05-05T21:23:45Z)
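
As flagged in the "Scalable Collaborative Learning via Representation Sharing" entry above, FL and SL differ in what the client releases to the server. A minimal sketch of the two exchange patterns, with purely illustrative shapes and names (a linear layer with a ReLU at the cut stands in for a real client-side network):

```python
import numpy as np

rng = np.random.default_rng(0)
W_client = rng.normal(size=(16, 8))  # client-side layers up to the cut


def fl_payload(weights):
    # FL: the client releases its locally trained weights for aggregation.
    return weights


def sl_payload(x, w_client):
    # SL: the client releases cut-layer activations ("smashed data") for a
    # mini-batch and waits for the server's response before continuing.
    return np.maximum(x @ w_client, 0.0)  # ReLU activations at the cut


x = rng.normal(size=(4, 16))          # a local mini-batch
print(fl_payload(W_client).shape)     # (16, 8): parameters leave the device
print(sl_payload(x, W_client).shape)  # (4, 8): per-sample activations leave
```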
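
Similarly, the "Ensemble Distillation for Robust Model Fusion" entry replaces parameter averaging with distillation. A minimal sketch under simplifying assumptions (linear client models, and a closed-form least-squares fit standing in for gradient-based distillation on a KL loss):

```python
import numpy as np

rng = np.random.default_rng(1)
unlabeled = rng.normal(size=(256, 10))         # server-side unlabeled data
client_models = [rng.normal(size=(10, 3)) for _ in range(5)]  # client weights


def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


# Teacher signal: average of the clients' predicted class distributions.
teacher = np.mean([softmax(unlabeled @ w) for w in client_models], axis=0)

# Fuse by fitting the central model to the teacher outputs on unlabeled data,
# instead of averaging the client parameters directly.
W_central, *_ = np.linalg.lstsq(unlabeled, teacher, rcond=None)
```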