Learning to Collaborate Over Graphs: A Selective Federated Multi-Task Learning Approach
- URL: http://arxiv.org/abs/2506.10102v1
- Date: Wed, 11 Jun 2025 18:39:18 GMT
- Title: Learning to Collaborate Over Graphs: A Selective Federated Multi-Task Learning Approach
- Authors: Ahmed Elbakary, Chaouki Ben Issaid, Mehdi Bennis
- Abstract summary: We present a novel multi-task learning method that leverages cross-client similarity to enable personalized learning for each client. We propose a communication-efficient scheme that introduces a feature anchor, a compact vector representation that summarizes the features learned from the client's local classes. In addition, the clients share the classification heads, a lightweight linear layer, and perform a graph-based regularization to enable collaboration among clients.
- Score: 34.756818299081736
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a novel federated multi-task learning method that leverages cross-client similarity to enable personalized learning for each client. To avoid transmitting the entire model to the parameter server, we propose a communication-efficient scheme that introduces a feature anchor, a compact vector representation that summarizes the features learned from the client's local classes. This feature anchor is shared with the server to account for local clients' distribution. In addition, the clients share the classification heads, a lightweight linear layer, and perform a graph-based regularization to enable collaboration among clients. By modeling collaboration between clients as a dynamic graph and continuously updating and refining this graph, we can account for any drift from the clients. To ensure beneficial knowledge transfer and prevent negative collaboration, we leverage a community detection-based approach that partitions this dynamic graph into homogeneous communities, maximizing the sum of task similarities, represented as the graph edges' weights, within each community. This mechanism restricts collaboration to highly similar clients within their formed communities, ensuring positive interaction and preserving personalization. Extensive experiments on two heterogeneous datasets demonstrate that our method significantly outperforms state-of-the-art baselines. Furthermore, we show that our method exhibits superior computation and communication efficiency and promotes fairness across clients.
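As a rough illustration of the pipeline described above, the sketch below computes a feature anchor per client (here simply the mean feature vector, one assumed choice), builds a weighted similarity graph from the anchors, and partitions it with a standard community detector. The anchor definition, cosine edge weights, and greedy modularity detection are all illustrative assumptions rather than the authors' exact design.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def feature_anchor(features: np.ndarray) -> np.ndarray:
    """Summarize a client's local features as one compact vector (here: the mean)."""
    return features.mean(axis=0)

def similarity_graph(anchors: list) -> nx.Graph:
    """Clients are nodes; edge weights are cosine similarities between anchors."""
    g = nx.Graph()
    g.add_nodes_from(range(len(anchors)))
    for i, a in enumerate(anchors):
        for j in range(i + 1, len(anchors)):
            b = anchors[j]
            w = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
            if w > 0:                      # keep only positively similar pairs
                g.add_edge(i, j, weight=w)
    return g

# Partition the (periodically re-estimated) graph into homogeneous communities;
# collaboration is then restricted to clients within the same community.
anchors = [np.random.randn(64) for _ in range(10)]   # toy anchors for 10 clients
communities = greedy_modularity_communities(similarity_graph(anchors), weight="weight")
print([sorted(c) for c in communities])
```

Within each detected community, the shared classification heads could then be regularized toward one another (e.g., with a graph-Laplacian penalty), which is the collaboration step the abstract describes.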
Related papers
- Federated Learning with Graph-Based Aggregation for Traffic Forecasting [0.0]
Federated Learning (FL) is a suitable approach for collaboratively training models without sharing raw data. Standard FL methods, such as Federated Averaging (FedAvg), assume that clients are independent. We propose a lightweight graph-aware FL approach that blends the simplicity of FedAvg with key ideas from graph learning.
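A minimal sketch of such a blend, assuming the graph weights encode client similarity: each client receives a mix of the plain FedAvg average and a graph-weighted average of its neighbors. The mixing coefficient and weighting rule are illustrative, not the paper's exact scheme.

```python
import numpy as np

def graph_aware_fedavg(models, adjacency, alpha=0.5):
    """models: (n_clients, n_params); adjacency: (n_clients, n_clients) similarities."""
    global_avg = models.mean(axis=0)                        # plain FedAvg component
    row_sums = adjacency.sum(axis=1, keepdims=True) + 1e-12
    neighbor_avg = (adjacency / row_sums) @ models          # graph-weighted component
    return alpha * global_avg + (1 - alpha) * neighbor_avg  # one model per client
```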
arXiv Detail & Related papers (2025-07-13T21:41:42Z)
- Byzantine Resilient Federated Multi-Task Representation Learning [1.6114012813668932]
We propose BR-MTRL, a Byzantine-resilient multi-task representation learning framework that handles faulty or malicious agents. Our approach leverages representation learning through a shared neural network model, where all clients share fixed layers, except for a client-specific final layer.
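A sketch of this split, with a coordinate-wise median shown as one standard Byzantine-resilient aggregator for the shared layers; whether BR-MTRL uses the median specifically is an assumption here.

```python
import torch
import torch.nn as nn

class ClientModel(nn.Module):
    def __init__(self, shared: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.shared = shared                           # identical across clients
        self.head = nn.Linear(feat_dim, num_classes)   # client-specific final layer

    def forward(self, x):
        return self.head(self.shared(x))

def robust_aggregate(shared_updates):
    """Coordinate-wise median tolerates a minority of faulty/malicious updates."""
    return torch.stack(shared_updates).median(dim=0).values
```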
arXiv Detail & Related papers (2025-03-24T23:26:28Z)
- FedAGHN: Personalized Federated Learning with Attentive Graph HyperNetworks [19.57993976799076]
Personalized federated learning (PFL) aims to address the statistical heterogeneity of data across clients by learning a personalized model for each client. We propose Personalized Federated Learning with Attentive Graph HyperNetworks (FedAGHN). FedAGHN captures fine-grained collaborative relationships and generates client-specific personalized initial models.
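Purely for illustration, one way a server-side module can generate client-specific initial models is to attend over all clients' current parameters; the plain softmax attention below is an assumption and not FedAGHN's actual hypernetwork.

```python
import numpy as np

def personalized_init(models: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """models: (n_clients, n_params) -> one personalized init per client."""
    sims = models @ models.T                          # pairwise similarity scores
    sims = sims - sims.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(sims / temperature)
    attn /= attn.sum(axis=1, keepdims=True)           # per-client attention weights
    return attn @ models                              # client-specific weighted mixes
```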
arXiv Detail & Related papers (2025-01-24T10:48:30Z)
- Personalized Federated Knowledge Graph Embedding with Client-Wise Relation Graph [49.66272783945571]
We propose Personalized Federated knowledge graph Embedding with client-wise relation graph (PFedEG).
PFedEG learns personalized supplementary knowledge for each client by amalgamating entity embedding from its neighboring clients.
We conduct extensive experiments on four benchmark datasets to evaluate our method against state-of-the-art models.
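A minimal sketch, assuming "amalgamating" means a weighted average of the entity embeddings held by neighboring clients; the names and weighting are illustrative rather than PFedEG's exact procedure.

```python
import numpy as np

def supplementary_embedding(entity_id, neighbor_embs, weights):
    """neighbor_embs: {client: {entity_id: vector}}; weights: {client: float}."""
    vecs, ws = [], []
    for client, emb in neighbor_embs.items():
        if entity_id in emb:                   # only neighbors holding this entity
            vecs.append(emb[entity_id])
            ws.append(weights[client])
    if not vecs:
        return None
    ws = np.asarray(ws) / np.sum(ws)
    return ws @ np.stack(vecs)                 # weighted amalgamation
```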
arXiv Detail & Related papers (2024-06-17T17:44:53Z)
- Personalized Federated Learning via Sequential Layer Expansion in Representation Learning [0.0]
Federated learning ensures the privacy of clients by conducting distributed training on individual client devices and sharing only the model weights with a central server.
We propose a new representation learning-based approach that suggests decoupling the entire deep learning model into more densely divided parts with the application of suitable scheduling methods.
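A hedged sketch of one such schedule: the model is decoupled into an ordered list of parts, and one additional part is unfrozen every few federated rounds. The granularity and schedule are assumptions for illustration.

```python
import torch.nn as nn

def set_trainable_depth(model_parts, round_idx, rounds_per_part=10):
    """model_parts: ordered list of nn.Module chunks of the decoupled model."""
    active = min(len(model_parts), round_idx // rounds_per_part + 1)
    for i, part in enumerate(model_parts):
        for p in part.parameters():
            p.requires_grad = i < active       # unfreeze one more part per phase
```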
arXiv Detail & Related papers (2024-04-27T06:37:19Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
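The key property of analog over-the-air computation is that simultaneous transmissions superpose in the channel, so the server receives a noisy sum rather than individual updates. The sketch below simulates that with an idealized additive-Gaussian channel, an assumption made for brevity.

```python
import numpy as np

def over_the_air_aggregate(updates, noise_std=0.01):
    """updates: (n_clients, n_params); one channel use per parameter."""
    superposed = updates.sum(axis=0)                    # the channel adds the signals
    noise = np.random.normal(0.0, noise_std, superposed.shape)
    return (superposed + noise) / len(updates)          # noisy average at the server
```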
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
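A toy version of the filtering step, assuming each client can be assigned a scalar usefulness score (e.g., loss improvement on a server-side probe set); the scoring rule and keep fraction are illustrative, not FilFL's exact criterion.

```python
def filter_clients(candidates, score_fn, keep_fraction=0.5):
    """candidates: client ids; score_fn: client -> float (higher is better)."""
    ranked = sorted(candidates, key=score_fn, reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]                          # only these clients join the round
```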
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated Learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
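An InfoNCE-style sketch of that contrastive objective: a client's representation of each sample is pulled toward a peer's representation of the same sample and pushed away from the other samples in the batch. The temperature and normalization choices are assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_distill_loss(z_local, z_peer, temperature=0.1):
    """z_local, z_peer: (batch, dim) representations of the same batch."""
    z_local = F.normalize(z_local, dim=1)
    z_peer = F.normalize(z_peer, dim=1)
    logits = z_local @ z_peer.t() / temperature       # (batch, batch) similarities
    targets = torch.arange(z_local.size(0), device=z_local.device)  # diagonal positives
    return F.cross_entropy(logits, targets)
```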
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
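One common way to realize such acceleration is server-side momentum with a lookahead broadcast: clients start local training from the global model plus a momentum term. The sketch below shows that generic pattern; the paper's exact algorithm may differ.

```python
import numpy as np

def broadcast_point(global_w, momentum, lam=0.85):
    """Server sends a momentum lookahead; clients train locally from it."""
    return global_w + lam * momentum

def aggregate(global_w, momentum, client_deltas, lam=0.85):
    """client_deltas: list of (locally trained weights - broadcast point)."""
    momentum = lam * momentum + np.mean(client_deltas, axis=0)
    return global_w + momentum, momentum
```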
arXiv Detail & Related papers (2022-01-10T05:31:07Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
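To make the setting concrete, the sketch below shows a generic self-training step for an unlabeled client: the current model pseudo-labels local data and keeps only confident samples. This illustrates the problem setup, not the DualAdapt method itself.

```python
import torch

def client_pseudo_label_step(model, unlabeled_x, threshold=0.9):
    """Keep only confidently pseudo-labeled local samples for training."""
    with torch.no_grad():
        probs = torch.softmax(model(unlabeled_x), dim=1)
        conf, pseudo_y = probs.max(dim=1)
    mask = conf >= threshold                   # hypothetical confidence filter
    return unlabeled_x[mask], pseudo_y[mask]
```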
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
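A sketch of that alternating scheme: many cheap local steps on the low-dimensional client head with the representation frozen, then a pass updating the shared representation, which is all that needs to be communicated. Hyperparameters and the exact split are illustrative assumptions.

```python
import torch

def local_round(shared, head, loss_fn, batches, head_steps=10, lr=0.01):
    head_opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(head_steps):                              # many local head updates
        for x, y in batches:
            head_opt.zero_grad()
            loss_fn(head(shared(x).detach()), y).backward()  # representation frozen
            head_opt.step()
    rep_opt = torch.optim.SGD(shared.parameters(), lr=lr)
    for x, y in batches:                                     # one pass on the representation
        rep_opt.zero_grad()
        loss_fn(head(shared(x)), y).backward()
        rep_opt.step()
    return shared.state_dict()                               # only this goes to the server
```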
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.