Wireless Ad Hoc Federated Learning: A Fully Distributed Cooperative
Machine Learning
- URL: http://arxiv.org/abs/2205.11779v1
- Date: Tue, 24 May 2022 04:37:11 GMT
- Title: Wireless Ad Hoc Federated Learning: A Fully Distributed Cooperative
Machine Learning
- Authors: Hideya Ochiai, Yuwei Sun, Qingzhe Jin, Nattanon Wongwiwatchai, Hiroshi
Esaki
- Abstract summary: Federated learning has allowed training of a global model by aggregating local models trained on local nodes.
We propose wireless ad hoc federated learning (WAFL) -- a fully distributed cooperative machine learning system.
WAFL allowed the convergence of model parameters among the nodes toward generalization, even with opportunistic node contact scenarios.
- Score: 2.724649101295584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning has allowed training of a global model by aggregating
local models trained on local nodes. However, it still takes a client-server
model, which can be further distributed, fully decentralized, or even partially
connected, or totally opportunistic. In this paper, we propose wireless ad hoc
federated learning (WAFL) -- a fully distributed cooperative machine learning
system organized by the nodes physically nearby. Here, each node has a wireless
interface and can communicate with the others when they are within radio range.
The nodes are expected to move with people, vehicles, or robots, producing
opportunistic contacts with each other. In WAFL, each node trains a model
individually with its local data. When a node encounters others, they exchange
their trained models and generate new aggregated models, which are expected to
be more general than the models trained locally on Non-IID data. For evaluation,
we have prepared four static communication networks and two types of dynamic and
opportunistic communication networks based on random waypoint mobility and a
community-structured environment, and then studied the training process of a
fully connected neural network on a 90% Non-IID MNIST dataset. The evaluation
results indicate that WAFL allowed the convergence of model parameters among the
nodes toward generalization, even in opportunistic node contact scenarios --
whereas in the self-training (or lonely training) case, they diverged. WAFL's
model generalization contributed to achieving a higher accuracy of 94.7-96.2% on
the IID test dataset, compared to 84.7% in the self-training case.
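For illustration, the sketch below mimics the WAFL loop described in the abstract in plain numpy: each node trains on its own Non-IID shard, and whenever two nodes come into contact they exchange parameters and aggregate them. The softmax-regression model, the mixing coefficient `lam`, and the random contact trace are illustrative assumptions only; the paper uses a fully connected neural network and its own aggregation rule.

```python
import numpy as np

class Node:
    """Illustrative WAFL-style node: local training plus aggregation on contact."""
    def __init__(self, node_id, x, y, dim, n_classes, rng):
        self.node_id = node_id
        self.x, self.y = x, y                                    # Non-IID local shard
        self.w = rng.normal(scale=0.01, size=(dim, n_classes))   # model parameters

    def local_train(self, lr=0.1, epochs=1):
        """A few epochs of softmax-regression SGD on the node's own data."""
        for _ in range(epochs):
            logits = self.x @ self.w
            probs = np.exp(logits - logits.max(axis=1, keepdims=True))
            probs /= probs.sum(axis=1, keepdims=True)
            onehot = np.eye(self.w.shape[1])[self.y]
            self.w -= lr * self.x.T @ (probs - onehot) / len(self.y)

    def aggregate_with(self, neighbor_params, lam=0.5):
        """On an opportunistic contact, mix own parameters with the neighbors'."""
        if neighbor_params:
            self.w = (1 - lam) * self.w + lam * np.mean(neighbor_params, axis=0)

# Usage: nodes train alone and exchange models whenever a (random) contact occurs.
rng = np.random.default_rng(0)
nodes = [Node(i, rng.normal(size=(64, 8)), rng.integers(0, 3, size=64), 8, 3, rng)
         for i in range(4)]
for step in range(10):
    for node in nodes:
        node.local_train()
    a, b = rng.choice(len(nodes), size=2, replace=False)   # hypothetical contact trace
    wa, wb = nodes[a].w.copy(), nodes[b].w.copy()
    nodes[a].aggregate_with([wb])
    nodes[b].aggregate_with([wa])
```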
Related papers
- Harnessing Increased Client Participation with Cohort-Parallel Federated Learning [2.9593087583214173]
Federated Learning (FL) is a machine learning approach where nodes collaboratively train a global model.
We introduce Cohort-Parallel Federated Learning (CPFL), a novel learning approach where each cohort independently trains a global model.
CPFL with four cohorts, a non-IID data distribution, and CIFAR-10 yields a 1.9$\times$ reduction in train time and a 1.3$\times$ reduction in resource usage.
arXiv Detail & Related papers (2024-05-24T15:34:09Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - Distributed Learning over Networks with Graph-Attention-Based
Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as one node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited.
arXiv Detail & Related papers (2023-05-22T13:48:30Z) - FedDM: Iterative Distribution Matching for Communication-Efficient
Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z) - Parallel Successive Learning for Dynamic Distributed Model Training over
Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z) - Homogeneous Learning: Self-Attention Decentralized Deep Learning [0.6091702876917281]
We propose a decentralized learning model called Homogeneous Learning (HL) for tackling non-IID data with a self-attention mechanism.
HL can produce better performance than standalone learning and greatly reduce both the total training rounds (by 50.8%) and the communication cost (by 74.6%).
arXiv Detail & Related papers (2021-10-11T14:05:29Z) - Federated Learning from Small Datasets [48.879172201462445]
Federated learning allows multiple parties to collaboratively train a joint model without sharing local data.
We propose a novel approach that intertwines model aggregations with permutations of local models.
The permutations expose each local model to a daisy chain of local datasets resulting in more efficient training in data-sparse domains.
arXiv Detail & Related papers (2021-10-07T13:49:23Z) - Decentralized Federated Learning via Mutual Knowledge Transfer [37.5341683644709]
Decentralized federated learning (DFL) is a problem arising in Internet of Things (IoT) systems.
We propose a mutual knowledge transfer (Def-KT) algorithm where local clients fuse models by transferring their learnt knowledge to each other.
Our experiments on the MNIST, Fashion-MNIST, and CIFAR10 datasets reveal that the proposed Def-KT algorithm significantly outperforms the baseline DFL methods.
arXiv Detail & Related papers (2020-12-24T01:43:53Z) - Probabilistic Federated Learning of Neural Networks Incorporated with
Global Posterior Information [4.067903810030317]
In federated learning, models trained on local clients are distilled into a global model.
We propose a new method which extends the Probabilistic Federated Neural Matching.
Our new method outperforms popular state-of-the-art federated learning methods in both the single communication round and additional communication rounds settings.
arXiv Detail & Related papers (2020-12-06T03:54:58Z) - Consensus Driven Learning [0.0]
We propose a new method of distributed, decentralized learning that allows a network of nodes to coordinate their training using asynchronous updates over an unreliable network.
This is achieved by taking inspiration from Distributed Averaging Consensus algorithms to coordinate the various nodes.
We show that our coordination method allows models to be learned on highly biased datasets, and in the presence of intermittent communication failure; a minimal consensus-averaging sketch follows after this list.
arXiv Detail & Related papers (2020-05-20T18:24:19Z) - Think Locally, Act Globally: Federated Learning with Local and Global
Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)