FLAME: Federated Learning Across Multi-device Environments
- URL: http://arxiv.org/abs/2202.08922v1
- Date: Thu, 17 Feb 2022 22:23:56 GMT
- Title: FLAME: Federated Learning Across Multi-device Environments
- Authors: Hyunsung Cho, Akhil Mathur, Fahim Kawsar
- Abstract summary: Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices private.
We propose FLAME, a user-centered FL training approach to counter statistical and system heterogeneity in multi-device environments.
Our experiment results show that FLAME outperforms various baselines by 4.8-33.8% higher F-1 score, 1.02-2.86x greater energy efficiency, and up to 2.02x speedup in convergence.
- Score: 9.810211000961647
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables distributed training of machine learning models while keeping personal data on user devices private. While we witness increasing applications of FL in the area of mobile sensing, such as human-activity recognition (HAR), FL has not been studied in the context of a multi-device environment (MDE), wherein each user owns multiple data-producing devices. With the proliferation of mobile and wearable devices, MDEs are increasingly popular in ubicomp settings, necessitating the study of FL in them. FL in MDEs is characterized by high non-IID-ness across clients, complicated by the presence of both user and device heterogeneities. Further, ensuring efficient utilization of system resources on FL clients in an MDE remains an important challenge. In this paper, we propose FLAME, a user-centered FL training approach to counter statistical and system heterogeneity in MDEs and to bring consistency to inference performance across devices. FLAME features (i) user-centered FL training utilizing the time alignment across devices from the same user; (ii) accuracy- and efficiency-aware device selection; and (iii) model personalization to devices. We also present an FL evaluation testbed with realistic energy drain and network bandwidth profiles, and a novel class-based data partitioning scheme to extend existing HAR datasets to a federated setup. Our experiment results on three multi-device HAR datasets show that FLAME outperforms various baselines by 4.8-33.8% higher F-1 score, 1.02-2.86x greater energy efficiency, and up to 2.02x speedup in convergence to target accuracy through fair distribution of the FL workload.
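The abstract names a class-based data partitioning scheme for extending HAR datasets to a federated, multi-device setup but does not spell it out. Below is a minimal sketch of one plausible reading, assuming each user draws a subset of activity classes and splits the resulting samples across their devices; all names (`class_based_partition` and its parameters) are illustrative, not taken from the FLAME implementation.

```python
import numpy as np

def class_based_partition(y, num_users, devices_per_user,
                          classes_per_user, seed=0):
    """Split a labelled HAR dataset into per-(user, device) shards.

    Each user draws a random subset of activity classes and takes an
    equal share of each chosen class's samples; those samples are then
    divided across the user's devices. Devices of one user thus share
    classes (mirroring time-aligned data on one person's wearables),
    while different users hold different class mixes (non-IID).
    """
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Unassigned sample indices per class, in random order.
    pools = {c: list(rng.permutation(np.flatnonzero(y == c)))
             for c in classes}
    share = {c: max(1, len(pools[c]) // num_users) for c in classes}
    partitions = {}  # (user, device) -> sample indices
    for u in range(num_users):
        chosen = rng.choice(classes, size=classes_per_user, replace=False)
        taken = []
        for c in chosen:
            taken.extend(pools[c][:share[c]])
            del pools[c][:share[c]]
        taken = rng.permutation(np.array(taken, dtype=int))
        for d, chunk in enumerate(np.array_split(taken, devices_per_user)):
            partitions[(u, d)] = chunk.tolist()
    return partitions

# Example: 8 activity classes, 10 users with 3 devices each.
y = np.random.randint(0, 8, size=6000)
parts = class_based_partition(y, num_users=10, devices_per_user=3,
                              classes_per_user=4)
print(len(parts), "device shards")
```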
Related papers
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, with model pruning, that is shared with all devices to learn data representations, and a personalized part that is fine-tuned for a specific device; a minimal sketch of such a split appears after this list.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, mainly caused by the characteristics of heterogeneous devices and their scale.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z)
- Enhancing Efficiency in Multidevice Federated Learning through Data Selection [11.67484476827617]
Federated learning (FL) in multidevice environments creates new opportunities to learn from a vast and diverse amount of private data.
In this paper, we develop an FL framework that incorporates on-device data selection on resource-constrained devices.
We show that our framework achieves 19% higher accuracy and 58% lower latency compared to baseline FL without our implemented strategies.
arXiv Detail & Related papers (2022-11-08T11:39:17Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
However, the impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step toward online data selection for FL with limited on-device storage; see the buffer sketch after this list.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- A Survey on Participant Selection for Federated Learning in Mobile Networks [47.88372677863646]
Federated Learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner.
Due to limited communication bandwidth and the unstable availability of end devices in a mobile network, only a fraction of devices can be selected in each training round.
arXiv Detail & Related papers (2022-07-08T04:22:48Z)
- Learnings from Federated Learning in the Real world [19.149989896466852]
Federated Learning (FL) applied to real-world data may suffer from several idiosyncrasies.
Data across devices could be distributed such that there are some "heavy devices" with large amounts of data while there are many "light users" with only a handful of data points.
We evaluate the impact of such idiosyncrasies on Natural Language Understanding (NLU) models trained using FL.
arXiv Detail & Related papers (2022-02-08T15:21:31Z)
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It collects only local gradients from users, without access to their data.
We propose a novel multi-center aggregation mechanism; a clustering-based sketch of one such mechanism appears after this list.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
- FLeet: Online Federated Learning via Staleness Awareness and Performance Prediction [9.408271687085476]
This paper presents FLeet, the first Online Federated Learning system.
Online FL combines the privacy of Standard FL with the precision of online learning.
I-Prof is a new lightweight profiler that predicts and controls the impact of learning tasks on mobile devices.
AdaSGD is a new adaptive learning algorithm that is resilient to delayed updates; a staleness-weighting sketch appears after this list.
arXiv Detail & Related papers (2020-06-12T15:43:38Z)
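For "Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks", the abstract describes the global/personalized split only structurally. The sketch below is a hypothetical reading: a shared global part that the server averages and magnitude-prunes, plus a per-device head that never leaves the device. Class and function names are assumptions, not the paper's API.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

class SplitModel:
    """Model split into a shared global part and a local personal part."""
    def __init__(self, global_dim, personal_dim, num_classes, seed=0):
        rng = np.random.default_rng(seed)
        # Global feature extractor: pruned and averaged across devices.
        self.global_w = rng.normal(size=(global_dim, personal_dim))
        # Personalized head: stays on-device, fine-tuned locally.
        self.personal_w = rng.normal(size=(personal_dim, num_classes))

def aggregate_global(models, sparsity=0.5):
    """Server step: average only the global parts, then prune."""
    avg = np.mean([m.global_w for m in models], axis=0)
    return magnitude_prune(avg, sparsity)

# Three devices share a pruned global part; heads remain personal.
devices = [SplitModel(32, 16, 6, seed=i) for i in range(3)]
new_global = aggregate_global(devices)
for m in devices:
    m.global_w = new_global.copy()
print("nonzero global weights:", int(np.count_nonzero(new_global)))
```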
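For "Online Data Selection for Federated Learning with Limited Storage", the summary does not state the paper's selection criterion. As an illustrative stand-in, the sketch below uses uniform reservoir sampling, which keeps an unbiased sample of a data stream within a fixed on-device storage budget.

```python
import random

class ReservoirBuffer:
    """Fixed-capacity on-device buffer for streaming samples.

    Uniform reservoir sampling: after n samples have arrived, each is
    retained with probability capacity / n, so local training draws
    from an unbiased subset of the stream despite limited storage.
    (An illustrative stand-in, not the paper's own criterion.)
    """
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, sample):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(sample)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = sample

buf = ReservoirBuffer(capacity=100)
for t in range(10_000):          # simulated sensor stream
    buf.add((t, t % 8))          # (sample id, label)
print(len(buf.buffer), "samples kept of", buf.seen)
```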
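For "Multi-Center Federated Learning", one common reading of a multi-center aggregation mechanism is to maintain several global models and alternate between assigning each client to the nearest center and recomputing each center as the mean of its assigned clients, EM-style. The sketch below implements that reading; it is an assumption, not the paper's exact update rule.

```python
import numpy as np

def multi_center_aggregate(client_weights, num_centers=2, iters=5, seed=0):
    """Cluster client models into several global "centers" (EM-style)."""
    W = np.stack([w.ravel() for w in client_weights])   # (n_clients, d)
    rng = np.random.default_rng(seed)
    centers = W[rng.choice(len(W), size=num_centers, replace=False)]
    for _ in range(iters):
        # E-step: assign each client to its closest center.
        dists = np.linalg.norm(W[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # M-step: recompute each center as the mean of its clients.
        for k in range(num_centers):
            if np.any(assign == k):
                centers[k] = W[assign == k].mean(axis=0)
    return centers, assign

# Ten clients whose weights fall into two latent groups.
rng = np.random.default_rng(1)
clients = [rng.normal(loc=(i % 2) * 5.0, size=(4, 4)) for i in range(10)]
centers, assign = multi_center_aggregate(clients, num_centers=2)
print("cluster sizes:", np.bincount(assign, minlength=2))
```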
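For "FLeet", AdaSGD's exact dampening rule is not given in the summary above. The sketch below shows a generic staleness-aware scheme in which a delayed gradient is scaled down by a factor that decays with its staleness; the weighting function is an assumption, not FLeet's formula.

```python
import numpy as np

def staleness_weight(staleness, alpha=0.5):
    """Dampen stale updates with weight 1 / (1 + staleness) ** alpha.

    Fresh updates (staleness 0) get full weight; updates computed
    against older model versions contribute progressively less.
    """
    return 1.0 / (1.0 + staleness) ** alpha

def apply_async_updates(weights, updates, lr=0.1):
    """Apply asynchronously arriving gradients, scaled by staleness."""
    for grad, staleness in updates:
        weights = weights - lr * staleness_weight(staleness) * grad
    return weights

w = np.zeros(4)
# Each worker reports (gradient, rounds since it read the model).
arrived = [(np.ones(4), 0), (np.ones(4), 3), (np.ones(4), 10)]
w = apply_async_updates(w, arrived)
print(w)  # the fresh update moved w most; the 10-round-stale one, least
```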