FedAR: Activity and Resource-Aware Federated Learning Model for
Distributed Mobile Robots
- URL: http://arxiv.org/abs/2101.03705v1
- Date: Mon, 11 Jan 2021 05:27:37 GMT
- Title: FedAR: Activity and Resource-Aware Federated Learning Model for
Distributed Mobile Robots
- Authors: Ahmed Imteaj and M. Hadi Amini
- Abstract summary: A recently proposed Machine Learning algorithm called Federated Learning (FL) paves the way towards preserving data privacy.
This paper proposes an FL model by monitoring client activities and leveraging available local computing resources.
We consider such mobile robots as FL clients to understand their resource-constrained behavior in a real-world setting.
- Score: 1.332560004325655
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Smartphones, autonomous vehicles, and Internet-of-Things (IoT) devices
are considered the primary data sources of a distributed network. Due to
revolutionary breakthroughs in internet availability and continuous
improvements in the capabilities of IoT devices, it is desirable to store data
locally and perform computation at the edge, as opposed to sharing all local
information with a centralized computation agent. A recently proposed Machine
Learning (ML) algorithm called Federated Learning (FL) paves the way towards
preserving data privacy, performing distributed learning, and reducing
communication overhead in large-scale ML problems. This paper proposes an FL model
by monitoring client activities and leveraging available local computing
resources, particularly for resource-constrained IoT devices (e.g., mobile
robots), to accelerate the learning process. We assign a trust score to each FL
client, which is updated based on the client's activities. We consider
distributed mobile robots as FL clients with resource limitations in memory,
bandwidth, processing power, or battery life, in order to understand their
resource-constrained behavior in a real-world setting. We consider an FL
client to be untrustworthy if it injects incorrect models or repeatedly gives
slow responses during the FL process. After disregarding ineffective and
unreliable clients, we perform local training on the selected FL clients. To
further mitigate the straggler issue, we enable an asynchronous FL mechanism
in which the FL server performs aggregation without waiting long for any
particular client's response.
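
The abstract does not give the exact trust-score update rule, so the sketch below assumes a simple additive reward/penalty scheme; the TrustTracker name, the penalty weights, and the 0.5 selection threshold are illustrative assumptions rather than the paper's method.

```python
# Hypothetical sketch of activity-based trust scoring for FL client selection.
# Update rule, weights, and threshold are assumptions, not the paper's scheme.

class TrustTracker:
    def __init__(self, client_ids, initial_trust=1.0):
        self.trust = {cid: initial_trust for cid in client_ids}

    def record_round(self, cid, update_valid, response_time, deadline):
        """Penalize incorrect models and slow responses; reward timely, valid updates."""
        if not update_valid:                # client sent an incorrect/implausible model
            self.trust[cid] -= 0.3
        elif response_time > deadline:      # slow response: straggler behavior
            self.trust[cid] -= 0.1
        else:                               # valid update, delivered on time
            self.trust[cid] += 0.05
        self.trust[cid] = min(1.0, max(0.0, self.trust[cid]))  # clamp to [0, 1]

    def select_clients(self, threshold=0.5):
        """Disregard clients whose trust has fallen below the threshold."""
        return [cid for cid, score in self.trust.items() if score >= threshold]
```

A server would call record_round after each client response and select_clients before dispatching the next training round.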
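
The asynchronous mechanism can likewise be sketched as a server loop that folds in whichever update arrives next instead of blocking on the slowest client; the staleness-discounted mixing weight below is a common choice in asynchronous FL (FedAsync-style averaging) and is an assumption here, not the paper's stated rule.

```python
import queue

def async_aggregate(global_model, updates, total_updates, alpha=0.5):
    """Fold client updates into the global model as they arrive.
    global_model: list[float]; updates: queue.Queue of (client_model, sent_round)."""
    for t in range(total_updates):
        client_model, sent_round = updates.get()    # waits only for the *next* update
        staleness = max(0, t - sent_round)          # rounds elapsed since client synced
        weight = alpha / (1 + staleness)            # discount stale contributions
        global_model = [(1 - weight) * g + weight * c
                        for g, c in zip(global_model, client_model)]
    return global_model

# Example: a fresh update followed by a one-round-stale update.
updates = queue.Queue()
updates.put(([1.0, 1.0], 0))
updates.put(([0.0, 0.0], 0))
print(async_aggregate([0.5, 0.5], updates, total_updates=2))
```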
Related papers
- FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves computation and communication efficiency by at least 30% and 43%, respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, with model pruning, shared with all devices to learn data representations, and a personalized part to be fine-tuned for a specific device (a minimal sketch of this split appears after this list).
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- DynamicFL: Balancing Communication Dynamics and Client Manipulation for Federated Learning [6.9138560535971605]
Federated Learning (FL) aims to train a global model by exploiting the decentralized data across millions of edge devices.
Given the geo-distributed edge devices with highly dynamic networks in the wild, aggregating all the model updates from those participating devices will result in inevitable long-tail delays in FL.
We propose a novel FL framework, DynamicFL, by considering the communication dynamics and data quality across massive edge devices with a specially designed client manipulation strategy.
arXiv Detail & Related papers (2023-07-16T19:09:31Z)
- FLEdge: Benchmarking Federated Machine Learning Applications in Edge Computing Systems [61.335229621081346]
Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge.
In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities.
arXiv Detail & Related papers (2023-06-08T13:11:20Z)
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- FedLE: Federated Learning Client Selection with Lifespan Extension for Edge IoT Networks [34.63384007690422]
Federated learning (FL) is a distributed and privacy-preserving learning framework for predictive modeling with massive data generated at the edge by Internet of Things (IoT) devices.
One major challenge preventing the wide adoption of FL in IoT is the pervasive power supply constraints of IoT devices.
We propose FedLE, an energy-efficient client selection framework that enables lifespan extension of edge IoT networks.
arXiv Detail & Related papers (2023-02-14T19:41:24Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round (a sketch of one such round appears after this list).
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
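
As referenced in the model-pruning entry above, the global/personalized split can be illustrated briefly; the magnitude-based mask, the class, and all names below are assumptions for illustration, not that paper's exact method.

```python
import numpy as np

def magnitude_mask(weights, keep_ratio=0.5):
    """Keep the largest-magnitude fraction of weights; prune (zero) the rest."""
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_ratio))
    cutoff = np.partition(flat, -k)[-k]     # k-th largest magnitude
    return (np.abs(weights) >= cutoff).astype(weights.dtype)

class SplitModelClient:
    def __init__(self, global_weights, head_weights):
        self.global_part = global_weights   # shared part: learns common representations
        self.head = head_weights            # personalized part: stays on the device

    def upload(self, keep_ratio=0.5):
        mask = magnitude_mask(self.global_part, keep_ratio)
        return self.global_part * mask      # only the pruned global part is communicated
```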
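
The BLADE-FL entry describes its round structure concretely enough for a small sketch; here a random draw stands in for the block-generation competition and plain averaging for block-based aggregation, both assumptions rather than that paper's protocol.

```python
import random

def blade_fl_round(client_models):
    """One illustrative BLADE-FL round: every client has broadcast its model,
    one client wins block generation (mining replaced by a random draw), and
    all clients aggregate the models recorded in the block."""
    models = list(client_models.values())
    miner = random.choice(list(client_models))          # stand-in for mining competition
    aggregated = [sum(ws) / len(models) for ws in zip(*models)]
    return miner, aggregated                            # start of next local training

print(blade_fl_round({"robot_a": [1.0, 2.0], "robot_b": [3.0, 4.0]}))
```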
This list is automatically generated from the titles and abstracts of the papers on this site.