Graph Federated Learning with Hidden Representation Sharing
- URL: http://arxiv.org/abs/2212.12158v1
- Date: Fri, 23 Dec 2022 05:44:27 GMT
- Title: Graph Federated Learning with Hidden Representation Sharing
- Authors: Shuang Wu, Mingxuan Zhang, Yuantong Li, Carl Yang, Pan Li
- Abstract summary: Learning on Graphs (LoG) is widely used in multi-client systems when each client has insufficient local data.
Federated Learning (FL) requires models to be trained in a multi-client system and restricts sharing of raw data among clients.
In this work, we first formulate the Graph Federated Learning (GFL) problem that unifies LoG and FL in multi-client systems, and then propose sharing hidden representations instead of raw data.
- Score: 33.01999333117515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning on Graphs (LoG) is widely used in multi-client systems when each client has insufficient local data and multiple clients have to share their raw data to learn a model of good quality. One scenario is recommending items to clients who have limited historical data but share similar preferences with other clients in a social network. On the other hand, due to increasing demands for the protection of clients' data privacy, Federated Learning (FL) has been widely adopted: FL requires models to be trained in a multi-client system while restricting the sharing of raw data among clients. The underlying data-sharing conflict between LoG and FL is under-explored, and how to benefit from both sides is a promising problem. In this work, we first formulate the Graph Federated Learning (GFL) problem, which unifies LoG and FL in multi-client systems, and then propose sharing hidden representations instead of neighbors' raw data as a privacy-preserving solution. To overcome the biased gradient problem in GFL, we provide a gradient estimation method and its convergence analysis under a non-convex objective. In experiments, we evaluate our method on classification tasks on graphs, and the results show a good match between our theory and the practice.
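As a concrete illustration of the hidden-representation-sharing idea described in the abstract, the sketch below shows two clients that each own part of a graph: instead of exchanging neighbors' raw node features, a client shares only the hidden representations produced by its local encoder. This is a minimal toy example, not the authors' implementation; the class names, dimensions, and one-layer mean-aggregation rule are all assumptions made for illustration.

```python
# Illustrative sketch (not the paper's code): hidden representation sharing
# in graph federated learning. Each client owns some nodes of a global graph;
# raw features never leave the owning client, only encoder outputs do.
import numpy as np

rng = np.random.default_rng(0)
FEAT_DIM, HID_DIM = 8, 4  # hypothetical feature / hidden sizes


class Client:
    def __init__(self, features):
        self.features = features  # raw node features stay local to this client
        self.W_enc = rng.normal(size=(FEAT_DIM, HID_DIM)) * 0.1  # local encoder

    def hidden(self, node_id):
        # What gets shared across clients: the hidden representation,
        # not the raw feature vector.
        x = self.features[node_id]
        return np.tanh(x @ self.W_enc)

    def aggregate(self, node_id, neighbor_hiddens):
        # Mean-aggregate the node's own hidden state with neighbors' shared
        # hidden states (a stand-in for one GNN message-passing layer).
        h_self = self.hidden(node_id)
        if neighbor_hiddens:
            h_nbr = np.mean(neighbor_hiddens, axis=0)
            return (h_self + h_nbr) / 2.0
        return h_self


# Two clients, each owning one node; node 0's neighbor (node 1) lives on client B.
client_a = Client({0: rng.normal(size=FEAT_DIM)})
client_b = Client({1: rng.normal(size=FEAT_DIM)})

# Client B shares only the hidden representation of node 1 with client A.
h_from_b = client_b.hidden(1)
embedding_of_node0 = client_a.aggregate(0, [h_from_b])
print(embedding_of_node0.shape)  # (HID_DIM,)
```

The design choice the sketch highlights is that raw features never cross client boundaries; only encoder-transformed vectors do, which is the privacy mechanism the paper builds its gradient estimation and convergence analysis on top of.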
Related papers
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z) - Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generator technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on centralized training over raw data.
The paper proposes a novel federated face forgery detection learning with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - FedGeo: Privacy-Preserving User Next Location Prediction with Federated
Learning [27.163370946895697]
A User Next Location Prediction (UNLP) task, which predicts the next location that a user will move to given his/her trajectory, is an indispensable task for a wide range of applications.
Previous studies using large-scale trajectory datasets in a single server have achieved remarkable performance on the UNLP task.
In real-world applications, however, legal and ethical issues have been raised regarding privacy, leading to restrictions against sharing human trajectory datasets with any other server.
arXiv Detail & Related papers (2023-12-06T01:43:58Z) - Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Stochastic Coded Federated Learning with Convergence and Privacy
Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee by mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z) - Game of Gradients: Mitigating Irrelevant Clients in Federated Learning [3.2095659532757916]
Federated learning (FL) deals with multiple clients participating in collaborative training of a machine learning model under the orchestration of a central server.
In this setup, each client's data is private to itself and is not transferable to other clients or the server.
We refer to these problems as Federated Relevant Client Selection (FRCS).
arXiv Detail & Related papers (2021-10-23T16:34:42Z) - Unifying Distillation with Personalization in Federated Learning [1.8262547855491458]
Federated learning (FL) is a decentralized privacy-preserving learning technique in which clients learn a joint collaborative model through a central aggregator without sharing their data.
In this setting, all clients learn a single common predictor (FedAvg), which does not generalize well on each client's local data due to the statistical data heterogeneity among clients.
In this paper, we address this problem with PersFL, a two-stage personalized learning algorithm.
In the first stage, PersFL finds the optimal teacher model of each client during the FL training phase. In the second stage, PersFL distills the useful knowledge from these teacher models into each client's local model.
arXiv Detail & Related papers (2021-05-31T17:54:29Z) - A Bayesian Federated Learning Framework with Online Laplace
Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)