Graph Federated Learning Based on the Decentralized Framework
- URL: http://arxiv.org/abs/2307.09801v1
- Date: Wed, 19 Jul 2023 07:40:51 GMT
- Title: Graph Federated Learning Based on the Decentralized Framework
- Authors: Peilin Liu, Yanni Tang, Mingyue Zhang, and Wu Chen
- Abstract summary: Graph-federated learning is mainly based on the classical federated learning framework, i.e., the Client-Server framework.
We introduce the decentralized framework to graph-federated learning.
The proposed method is compared with FedAvg, FedProx, GCFL, and GCFL+ to verify its effectiveness.
- Score: 8.619889123184649
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph learning has a wide range of applications in many scenarios, which
increasingly demand data privacy. Federated learning is an emerging
distributed machine learning approach that leverages data from individual
devices or data centers to improve the accuracy and generalization of the
model, while also protecting the privacy of user data. Graph-federated learning
is mainly based on the classical federated learning framework, i.e., the
Client-Server framework. However, the Client-Server framework faces problems
such as a single point of failure at the central server and poor scalability of
the network topology. First, we introduce the decentralized framework to
graph-federated learning. Second, we determine the confidence between nodes
based on the similarity of their data, and then aggregate the gradient
information by linear weighting according to this confidence. Finally, the
proposed method is compared with FedAvg, FedProx, GCFL, and GCFL+ to verify its
effectiveness. Experiments demonstrate that the proposed method outperforms
these baselines.
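The aggregation rule described in the abstract (per-node confidence derived from data similarity, then linear weighting of neighbors' gradients) can be sketched as below. This is a minimal illustration, not the authors' implementation: the use of cosine similarity over per-node feature statistics, the softmax normalization, and the fixed self-weight are all assumptions made for the example.

    # Sketch: confidence-weighted, decentralized gradient aggregation at one node.
    # Similarity measure, normalization, and self_weight are illustrative assumptions.
    import numpy as np

    def cosine_similarity(a, b):
        """Cosine similarity between two feature-statistic vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def confidence_weights(node_stats, neighbor_stats):
        """Turn data similarities between a node and its neighbors into
        normalized confidence weights (softmax over similarities)."""
        sims = np.array([cosine_similarity(node_stats, s) for s in neighbor_stats])
        exp = np.exp(sims - sims.max())
        return exp / exp.sum()

    def aggregate_gradients(own_grad, neighbor_grads, weights, self_weight=0.5):
        """Linearly combine the node's own gradient with its neighbors' gradients,
        weighted by confidence; the node keeps a fixed share for itself."""
        neighbor_part = sum(w * g for w, g in zip(weights, neighbor_grads))
        return self_weight * own_grad + (1.0 - self_weight) * neighbor_part

    # Toy usage: one node with two neighbors in the communication graph.
    rng = np.random.default_rng(0)
    own_stats, own_grad = rng.normal(size=8), rng.normal(size=16)
    nbr_stats = [rng.normal(size=8) for _ in range(2)]
    nbr_grads = [rng.normal(size=16) for _ in range(2)]

    w = confidence_weights(own_stats, nbr_stats)
    g = aggregate_gradients(own_grad, nbr_grads, w)
    print("confidence weights:", w, "aggregated gradient shape:", g.shape)

In a fully decentralized round, each node would apply such an aggregation over its own neighborhood in the communication graph and update its local model, so no central server is required.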
Related papers
- On Homomorphic Encryption Based Strategies for Class Imbalance in Federated Learning [4.322339935902437]
Class imbalance in training datasets can lead to bias and poor generalization in machine learning models.
We propose FLICKER, a privacy-preserving framework to address issues related to global class imbalance in federated learning.
arXiv Detail & Related papers (2024-10-28T16:35:40Z)
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z)
- Lumos: Heterogeneity-aware Federated Graph Learning over Decentralized Devices [19.27111697495379]
Graph neural networks (GNNs) have been widely deployed in real-world networked applications and systems.
We propose the first federated GNN framework called Lumos that supports supervised and unsupervised learning.
Based on the constructed tree for each client, a decentralized tree-based GNN trainer is proposed to support versatile training.
arXiv Detail & Related papers (2023-03-01T13:27:06Z)
- Differentially Private Vertical Federated Clustering [13.27934054846057]
In many applications, multiple parties have private data regarding the same set of users but on disjoint sets of attributes.
To enable model learning while protecting the privacy of the data subjects, we need vertical federated learning (VFL) techniques.
The algorithm proposed in this paper is the first practical solution for differentially private vertical federated k-means clustering.
arXiv Detail & Related papers (2022-08-02T19:23:48Z)
- Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
- Concentrated Differentially Private and Utility Preserving Federated Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.