Subgraph Federated Learning via Spectral Methods
- URL: http://arxiv.org/abs/2510.25657v1
- Date: Wed, 29 Oct 2025 16:22:32 GMT
- Title: Subgraph Federated Learning via Spectral Methods
- Authors: Javad Aliakbari, Johan Östman, Ashkan Panahi, Alexandre Graell i Amat
- Abstract summary: FedLap is a novel framework that captures inter-node dependencies while ensuring privacy and scalability. A formal analysis demonstrates that FedLap preserves privacy.
- Score: 52.40322201034717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of federated learning (FL) with graph-structured data distributed across multiple clients. In particular, we address the prevalent scenario of interconnected subgraphs, where interconnections between clients significantly influence the learning process. Existing approaches suffer from critical limitations, either requiring the exchange of sensitive node embeddings, thereby posing privacy risks, or relying on computationally intensive steps, which hinders scalability. To tackle these challenges, we propose FedLap, a novel framework that leverages global structure information via Laplacian smoothing in the spectral domain to effectively capture inter-node dependencies while ensuring privacy and scalability. We provide a formal privacy analysis of FedLap, demonstrating that it preserves privacy. Notably, FedLap is the first subgraph FL scheme with strong privacy guarantees. Extensive experiments on benchmark datasets demonstrate that FedLap achieves competitive or superior utility compared to existing techniques.
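The abstract's core idea, smoothing node features via the graph Laplacian in the spectral domain, can be sketched as follows. This is a minimal illustration of spectral Laplacian smoothing on a single graph; the filter h(λ) = 1/(1 + αλ), the parameter `alpha`, and the toy data are assumptions for exposition, not FedLap's actual federated protocol.

```python
import numpy as np

def laplacian_smooth(adj: np.ndarray, features: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Low-pass filter node features using the symmetric normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    # L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    # Spectral filtering: attenuate high-frequency (large-eigenvalue) components.
    eigvals, eigvecs = np.linalg.eigh(lap)
    filt = 1.0 / (1.0 + alpha * eigvals)  # illustrative smoothing filter h(lambda)
    return eigvecs @ np.diag(filt) @ eigvecs.T @ features

# Tiny example: a 3-node path graph with a high-frequency signal.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1.], [0.], [1.]])
X_smooth = laplacian_smooth(A, X)  # neighboring values are pulled toward each other
```

Smoothing preserves the low-frequency (λ = 0) component of the signal while damping the oscillatory one, so the middle node's value rises toward its neighbors'.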
Related papers
- FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference [31.796411806840087]
Personalized Federated Graph Learning (pFGL) facilitates the decentralized training of Graph Neural Networks (GNNs) without compromising privacy.
Previous pFGL methods incorrectly share non-generic knowledge globally and fail to tailor personalized solutions locally.
We propose our pFGL framework FedSSP which Shares generic Spectral knowledge while satisfying graph Preferences.
arXiv Detail & Related papers (2024-10-26T07:09:27Z) - Federated Hypergraph Learning with Local Differential Privacy: Toward Privacy-Aware Hypergraph Structure Completion [9.163655213173067]
We develop FedHGL, a first-of-its-kind framework for federated hypergraph learning on disjoint and privacy-constrained hypergraph partitions.
arXiv Detail & Related papers (2024-08-09T16:31:41Z) - Federated Learning on Riemannian Manifolds with Differential Privacy [8.75592575216789]
A malicious adversary can potentially infer sensitive information through various means.
We propose a generic private FL framework defined on the differential privacy (DP) technique.
We analyze the privacy guarantee while establishing the convergence properties.
Numerical simulations are performed on synthetic and real-world datasets to showcase the efficacy of the proposed PriRFed approach.
arXiv Detail & Related papers (2024-04-15T12:32:20Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over the clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Differentially Private Wireless Federated Learning Using Orthogonal Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
arXiv Detail & Related papers (2023-06-14T06:35:10Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - On Privacy and Personalization in Cross-Silo Federated Learning [39.031422430404405]
In this work, we consider the application of differential privacy in cross-silo federated learning (FL).
We show that mean-regularized multi-task learning (MR-MTL) is a strong baseline for cross-silo FL.
We provide a thorough empirical study of competing methods as well as a theoretical characterization of MR-MTL for a mean estimation problem.
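The mean-estimation setting mentioned above admits a compact sketch. Below is an illustrative toy version of mean-regularized multi-task learning (MR-MTL), where each client minimizes its local loss plus a penalty pulling its parameter toward the average across clients; the losses, learning rate, and data are hypothetical, not the paper's setup.

```python
import numpy as np

def mr_mtl_step(weights: np.ndarray, local_means: np.ndarray,
                lam: float, lr: float = 0.1) -> np.ndarray:
    """One gradient step per client on 0.5*(w_k - m_k)^2 + 0.5*lam*(w_k - w_bar)^2."""
    w_bar = np.mean(weights)
    grads = (weights - local_means) + lam * (weights - w_bar)
    return weights - lr * grads

# Three clients with true local means 0, 2, 4.
local_means = np.array([0.0, 2.0, 4.0])
w = np.zeros(3)
for _ in range(500):
    w = mr_mtl_step(w, local_means, lam=1.0)
# Fixed point: w_k = (m_k + lam * mean(m)) / (1 + lam), here [1, 2, 3].
```

The regularization strength `lam` interpolates between purely local estimates (`lam = 0` gives each client its own mean) and a single shared estimate (`lam -> inf` gives the global mean), which is the baseline behavior the paper characterizes.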
arXiv Detail & Related papers (2022-06-16T03:26:48Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
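The clipping mechanism this entry analyzes can be sketched as a simple aggregation rule: clip each client update to a fixed L2 norm, average, and add calibrated Gaussian noise. The threshold `C`, noise multiplier `sigma`, and example updates below are illustrative choices, not values calibrated in the paper.

```python
import numpy as np

def dp_fedavg_aggregate(updates, C: float = 1.0, sigma: float = 0.5, rng=None) -> np.ndarray:
    """Clip each client update to L2 norm C, average, and add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    clipped = [u * min(1.0, C / max(np.linalg.norm(u), 1e-12)) for u in updates]
    avg = np.mean(clipped, axis=0)
    # Client-level DP: noise scales with the per-client sensitivity C / n.
    noise = rng.normal(0.0, sigma * C / len(updates), size=avg.shape)
    return avg + noise

updates = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
agg = dp_fedavg_aggregate(updates)  # first update is clipped from norm 5 to norm 1
```

Clipping bounds each client's influence on the average, which is what makes the Gaussian noise sufficient for a client-level guarantee, but it also biases the aggregate when client updates are heterogeneous, the tradeoff the paper studies.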
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.