Federated Learning of Models Pre-Trained on Different Features with Consensus Graphs
- URL: http://arxiv.org/abs/2306.01240v1
- Date: Fri, 2 Jun 2023 02:24:27 GMT
- Title: Federated Learning of Models Pre-Trained on Different Features with Consensus Graphs
- Authors: Tengfei Ma, Trong Nghia Hoang, Jie Chen
- Abstract summary: Learning an effective global model on private and decentralized datasets has become an increasingly important challenge of machine learning.
We propose a feature fusion approach that extracts local representations from local models and incorporates them into a global representation that improves the prediction performance.
This paper presents solutions to these problems and demonstrates them in real-world applications on time series data such as power grids and traffic networks.
- Score: 19.130197923214123
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning an effective global model on private and decentralized datasets has
become an increasingly important challenge of machine learning when applied in
practice. Existing distributed learning paradigms, such as Federated Learning,
enable this via model aggregation which enforces a strong form of modeling
homogeneity and synchronicity across clients. This, however, is not suitable for
many practical scenarios. For example, in distributed sensing, heterogeneous
sensors reading data from different views of the same phenomenon would need to
use different models for different data modalities. Local learning therefore
happens in isolation but inference requires merging the local models to achieve
consensus. To enable consensus among local models, we propose a feature fusion
approach that extracts local representations from local models and incorporates
them into a global representation that improves the prediction performance.
Achieving this requires addressing two non-trivial problems. First, we need to
learn an alignment between similar feature components which are arbitrarily
arranged across clients to enable representation aggregation. Second, we need
to learn a consensus graph that captures the high-order interactions between
local feature spaces and how to combine them to achieve a better prediction.
This paper presents solutions to these problems and demonstrates them in
real-world applications on time series data such as power grids and traffic
networks.
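The two requirements above can be made concrete in a short sketch. The code below is a minimal, illustrative reading of the approach, not the authors' implementation: per-client linear maps stand in for the feature-component alignment, and a learnable, row-normalized adjacency over clients stands in for the consensus graph; all module and parameter names are assumptions.

```python
# Minimal, illustrative sketch of the two-step idea above. All names and
# architectural choices here are assumptions, not the paper's.
import torch
import torch.nn as nn

class ConsensusFusion(nn.Module):
    def __init__(self, client_dims, shared_dim, out_dim):
        super().__init__()
        n = len(client_dims)
        # Step 1 (alignment): project each client's arbitrarily arranged
        # feature components into a common representation space.
        self.align = nn.ModuleList(nn.Linear(d, shared_dim) for d in client_dims)
        # Step 2 (consensus graph): learnable client-to-client adjacency;
        # softmax rows so each client takes a convex mix of its peers.
        self.adj_logits = nn.Parameter(torch.zeros(n, n))
        self.head = nn.Linear(n * shared_dim, out_dim)

    def forward(self, local_reprs):
        # local_reprs: list of [batch, d_i] tensors from the frozen,
        # pre-trained local models.
        z = torch.stack([f(r) for f, r in zip(self.align, local_reprs)], dim=1)
        a = torch.softmax(self.adj_logits, dim=-1)   # [n, n] consensus graph
        fused = torch.einsum("ij,bjd->bid", a, z)    # mix peers over the graph
        return self.head(fused.flatten(start_dim=1))
```

In this reading, the alignment maps and the adjacency are trained jointly on the downstream prediction loss while the pre-trained local models stay frozen.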
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
Personalized federated learning (PFL) seeks to address client heterogeneity by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions (see the sketch below).
arXiv Detail & Related papers (2024-11-01T03:03:52Z)
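As a rough illustration of the adaptation step named above, the following sketch blends global per-class feature means with local estimates; the interpolation rule, the nearest-class-mean decision, and all names are assumptions, not the paper's exact estimator.

```python
# Hedged sketch of the generative-classifier adaptation idea in pFedFDA.
import numpy as np

def adapt_class_means(global_means, local_feats, local_labels, beta=0.5):
    """Blend server-aggregated per-class feature means with local estimates.

    global_means: [num_classes, dim]; local_feats: [n, dim];
    local_labels: [n] integer labels; beta weights the local estimate.
    """
    means = global_means.copy()
    for c in np.unique(local_labels):
        local_mean = local_feats[local_labels == c].mean(axis=0)
        means[c] = (1.0 - beta) * global_means[c] + beta * local_mean
    return means

def predict(means, feats):
    # Nearest class mean, i.e., a Gaussian classifier under a shared
    # isotropic covariance (a simplification made for this sketch).
    dists = ((feats[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```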
- Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning [50.382793324572845]
Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy.
In this paper, we analyze a new method that incorporates the ideas of using data similarity and client sampling.
To address privacy concerns, we apply the technique of additional noise and analyze its impact on the convergence of the proposed method.
arXiv Detail & Related papers (2024-09-22T00:49:10Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling this data heterogeneity issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Cross-Silo Federated Learning Across Divergent Domains with Iterative Parameter Alignment [4.95475852994362]
Federated learning is a method for training a machine learning model across remote clients.
We reformulate the typical federated learning setup to learn N models optimized for a common objective.
We find that the technique achieves competitive results on a variety of data partitions compared to state-of-the-art approaches.
arXiv Detail & Related papers (2023-11-08T16:42:14Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning (see the sketch below).
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
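One plausible reading of FedDecorr's mitigation of dimensional collapse is a decorrelation penalty on batch representations; the sketch below illustrates that reading, with the exact normalization and weighting left as assumptions.

```python
# Hedged sketch of a decorrelation regularizer in the spirit of FedDecorr:
# penalize the Frobenius norm of the correlation matrix of a batch of
# representations so that no small set of dimensions dominates.
import torch

def decorrelation_loss(z, eps=1e-8):
    # z: [batch, dim] representations from the local model.
    n, d = z.shape
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + eps)  # standardize per dimension
    corr = (z.T @ z) / n                            # [dim, dim] correlation
    return (corr ** 2).sum() / (d * d)              # squared Frobenius norm

# Typical client-side use (mu is a tuning coefficient, an assumption here):
#   total_loss = task_loss + mu * decorrelation_loss(z)
```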
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- A Personalized Federated Learning Algorithm: an Application in Anomaly Detection [0.6700873164609007]
Federated Learning (FL) has recently emerged as a promising method to overcome data privacy and transmission issues.
In FL, datasets collected from different devices or sensors are used to train local models (clients), each of which shares what it has learned with a centralized model (the server).
This paper proposes a novel Personalized FedAvg (PC-FedAvg) which aims to control weights communication and aggregation augmented with a tailored learning algorithm to personalize the resulting models at each client.
arXiv Detail & Related papers (2021-11-04T04:57:11Z)
- Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions (see the sketch below).
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
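The alternating update implied by the summary above can be sketched as follows; the split into many head steps and few body steps, the optimizers, and all names are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of the shared-representation / local-head split described above.
import torch
import torch.nn as nn

def client_round(body, head, loader, loss_fn, head_steps=10, body_steps=1):
    """Fit the personal head first, then refine the shared body; only the
    body's weights are returned to the server for aggregation."""
    head_opt = torch.optim.SGD(head.parameters(), lr=0.01)
    body_opt = torch.optim.SGD(body.parameters(), lr=0.01)
    batches = list(loader)
    for _ in range(head_steps):        # many cheap local head updates
        for x, y in batches:
            head_opt.zero_grad()
            loss_fn(head(body(x).detach()), y).backward()
            head_opt.step()
    for _ in range(body_steps):        # few shared-representation updates
        for x, y in batches:
            body_opt.zero_grad()
            head_opt.zero_grad()       # discard stray gradients on the head
            loss_fn(head(body(x)), y).backward()
            body_opt.step()
    return body.state_dict()

def server_aggregate(states):
    # Plain averaging of the shared body across participating clients.
    return {k: torch.stack([s[k] for s in states]).mean(dim=0)
            for k in states[0]}
```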
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
- Robust Federated Learning Through Representation Matching and Adaptive Hyper-parameters [5.319361976450981]
Federated learning is a distributed, privacy-aware learning scenario which trains a single model on data belonging to several clients.
Current federated learning methods struggle in cases with heterogeneous client-side data distributions.
We propose a novel representation matching scheme that reduces the divergence of local models.
arXiv Detail & Related papers (2019-12-30T20:19:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.