A New Distributed Method for Training Generative Adversarial Networks
- URL: http://arxiv.org/abs/2107.08681v1
- Date: Mon, 19 Jul 2021 08:38:10 GMT
- Title: A New Distributed Method for Training Generative Adversarial Networks
- Authors: Jinke Ren, Chonghe Liu, Guanding Yu, Dongning Guo
- Abstract summary: This paper proposes a new framework for training GANs in a distributed fashion.
Each device computes a local discriminator using local data; a single server aggregates their results and computes a global GAN.
Numerical results obtained using three popular datasets demonstrate that the proposed framework can outperform a state-of-the-art framework in terms of convergence speed.
- Score: 22.339672721348382
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) are emerging machine learning models
for generating synthesized data similar to real data by jointly training a
generator and a discriminator. In many applications, data and computational
resources are distributed over many devices, so centralized computation with
all data in one location is infeasible due to privacy and/or communication
constraints. This paper proposes a new framework for training GANs in a
distributed fashion: Each device computes a local discriminator using local
data; a single server aggregates their results and computes a global GAN.
Specifically, in each iteration, the server sends the global GAN to the
devices, which then update their local discriminators; the devices send their
results to the server, which then computes their average as the global
discriminator and updates the global generator accordingly. Two different
update schedules are designed with different levels of parallelism between the
devices and the server. Numerical results obtained using three popular datasets
demonstrate that the proposed framework can outperform a state-of-the-art
framework in terms of convergence speed.
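As a rough illustration of the round structure described in the abstract, the following PyTorch-style sketch runs one communication round: the server broadcasts the global GAN, each device refines a copy of the global discriminator on its local data, the server averages the returned discriminators into the global discriminator, and then updates the global generator against that average. All model sizes, optimizer choices, and names (make_generator, local_discriminator_update, average_state_dicts) are illustrative assumptions, not the authors' implementation; the paper itself further specifies two update schedules with different degrees of device/server parallelism.

```python
# Illustrative sketch only: architectures, losses, and hyperparameters are assumptions.
import copy
import torch
import torch.nn as nn

NOISE_DIM, DATA_DIM = 8, 2

def make_generator():
    return nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM))

def make_discriminator():
    return nn.Sequential(nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1))

def local_discriminator_update(global_gen, global_disc, local_data, lr=1e-2):
    """One device: refine a copy of the global discriminator on its local data."""
    disc = copy.deepcopy(global_disc)
    opt = torch.optim.SGD(disc.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()
    fake = global_gen(torch.randn(local_data.size(0), NOISE_DIM)).detach()
    loss = (bce(disc(local_data), torch.ones(local_data.size(0), 1))
            + bce(disc(fake), torch.zeros(fake.size(0), 1)))
    opt.zero_grad(); loss.backward(); opt.step()
    return disc.state_dict()

def average_state_dicts(dicts):
    """Server: element-wise average of the devices' discriminator weights."""
    avg = copy.deepcopy(dicts[0])
    for key in avg:
        avg[key] = torch.stack([d[key] for d in dicts]).mean(dim=0)
    return avg

# One communication round with three devices holding toy local datasets.
gen, disc = make_generator(), make_discriminator()
gen_opt = torch.optim.SGD(gen.parameters(), lr=1e-2)
device_data = [torch.randn(64, DATA_DIM) + i for i in range(3)]

# 1) Server broadcasts (gen, disc); devices update their local discriminators.
local_discs = [local_discriminator_update(gen, disc, x) for x in device_data]
# 2) Server averages the local discriminators into the global discriminator.
disc.load_state_dict(average_state_dicts(local_discs))
# 3) Server updates the global generator against the averaged discriminator.
noise = torch.randn(64, NOISE_DIM)
gen_loss = nn.BCEWithLogitsLoss()(disc(gen(noise)), torch.ones(64, 1))
gen_opt.zero_grad(); gen_loss.backward(); gen_opt.step()
```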
Related papers
- FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy.
We propose FedHiSyn, a hierarchical synchronous FL framework, to tackle straggler effects and outdated models.
We evaluate the proposed framework on the MNIST, EMNIST, CIFAR10 and CIFAR100 datasets under diverse heterogeneous device settings.
arXiv Detail & Related papers (2022-06-21T17:23:06Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyze its training procedure.
We show the effectiveness of our framework using data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold versus warmed-up models and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Federated Learning with Downlink Device Selection [92.14944020945846]
We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices for training using their private local data.
We consider device selection based on downlink channels over which the PS shares the global model with the devices.
arXiv Detail & Related papers (2021-07-07T22:42:39Z)
- Brainstorming Generative Adversarial Networks (BGANs): Towards Multi-Agent Generative Models with Distributed Private Datasets [70.62568022925971]
Generative adversarial networks (GANs) must be fed with large datasets that adequately represent the data space.
In many scenarios, the available datasets may be limited and distributed across multiple agents, each of which is seeking to learn the distribution of the data on its own.
In this paper, a novel brainstorming GAN (BGAN) architecture is proposed, which allows multiple agents to generate realistic data samples while operating in a fully distributed manner.
arXiv Detail & Related papers (2020-02-02T02:58:32Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
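The server-less approach in the last entry replaces server-side aggregation with cooperation among neighboring devices. As a generic illustration of consensus-style averaging (not that paper's specific algorithm), one gossip step over a device graph might look like the sketch below; the graph, mixing weight alpha, and the name consensus_step are assumptions.

```python
# Generic gossip/consensus averaging sketch; not taken from any of the papers above.
import torch

def consensus_step(params, neighbors, alpha=0.5):
    """One consensus round: each device moves its parameter vector toward the
    average of its neighbors' parameters, with no central server involved.
    `params` maps device id -> tensor; `neighbors` maps device id -> list of ids."""
    updated = {}
    for i, theta in params.items():
        if neighbors[i]:
            neighbor_avg = torch.stack([params[j] for j in neighbors[i]]).mean(dim=0)
            updated[i] = (1 - alpha) * theta + alpha * neighbor_avg
        else:
            updated[i] = theta.clone()
    return updated

# Toy example: three devices on a line graph 0 - 1 - 2.
params = {i: torch.randn(4) for i in range(3)}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(10):
    params = consensus_step(params, neighbors)  # values drift toward a common vector
```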
This list is automatically generated from the titles and abstracts of the papers on this site.