Personalized Federated Learning via Gradient Modulation for
Heterogeneous Text Summarization
- URL: http://arxiv.org/abs/2304.11524v1
- Date: Sun, 23 Apr 2023 03:18:46 GMT
- Title: Personalized Federated Learning via Gradient Modulation for
Heterogeneous Text Summarization
- Authors: Rongfeng Pan, Jianzong Wang, Lingwei Kong, Zhangcheng Huang, Jing Xiao
- Abstract summary: We propose a federated learning text summarization scheme, which allows users to share the global model in a cooperative learning manner without sharing raw data.
FedSUMM achieves faster model convergence under the PFL setting for task-specific text summarization.
- Score: 21.825321314169642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text summarization is essential for information aggregation and demands large
amounts of training data. However, concerns about data privacy and security
limit data collection and model training. To address these concerns, we propose
a federated learning text summarization scheme, which allows users to share the
global model in a cooperative learning manner without sharing raw data.
Personalized federated learning (PFL) balances personalization and
generalization in the process of optimizing the global model, to guide the
training of local models. However, local datasets differ in their semantic and
contextual distributions, which can cause a local model to learn deviated
semantic and contextual information. In this paper, we propose FedSUMM, a
dynamic gradient adapter that supplies more appropriate parameters to each
local model. FedSUMM also uses differential privacy to prevent parameter
leakage during distributed training. Experimental evidence verifies that
FedSUMM achieves faster model convergence under the PFL setting for
task-specific text summarization, and that it achieves superior performance
across different evaluation metrics for text summarization.
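As a concrete illustration of the scheme described above, here is a minimal sketch of one client round. The per-parameter gain `adapter_gain` stands in for the dynamic gradient adapter (whose exact form the abstract does not specify), and clipped Gaussian noise stands in for the differential-privacy step; `clip_norm` and `dp_sigma` are likewise illustrative assumptions, not values from the paper.
```python
import torch

def local_update(model, loader, loss_fn, adapter_gain,
                 lr=1e-3, clip_norm=1.0, dp_sigma=0.01):
    """One client round: gradient-modulated SGD, then noisy parameter upload.

    adapter_gain: dict mapping parameter names to scaling tensors; a
    stand-in for the paper's dynamic gradient adapter (assumption).
    """
    model.train()
    for inputs, targets in loader:
        loss = loss_fn(model(inputs), targets)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for name, p in model.named_parameters():
                if p.grad is None:
                    continue
                # Modulate the raw gradient so the step favors
                # client-specific semantics over the shared direction.
                p -= lr * adapter_gain[name] * p.grad
    # Clip and perturb parameters before upload: a standard Gaussian-
    # mechanism stand-in for the paper's differential-privacy step.
    upload = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            w = p.detach().clone()
            scale = torch.clamp(clip_norm / w.norm().clamp(min=1e-12), max=1.0)
            upload[name] = w * scale + dp_sigma * torch.randn_like(w)
    return upload
```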
Related papers
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
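Read as a bi-level program, the MAP formulation sketched in the FedMAP summary above can be written generically as follows; FedMAP's concrete prior and solver are not reproduced here:
```latex
\min_{\Theta} \; \sum_{i=1}^{N} \mathcal{L}_i\big(\theta_i^{*}(\Theta)\big)
\quad \text{s.t.} \quad
\theta_i^{*}(\Theta) = \arg\max_{\theta}\;
\log p(\mathcal{D}_i \mid \theta) + \log p(\theta \mid \Theta)
```
The inner problem is each client's MAP estimate under a prior parameterized by the shared \Theta; the outer problem fits \Theta across clients, trading personalization (inner) against generalization (outer).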
- Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the heterogeneous data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
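A sketch of the soft-prompt idea above, assuming a frozen backbone that consumes precomputed token embeddings; the wrapper, its shapes, and the prepend strategy are illustrative assumptions rather than the paper's exact design:
```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Freeze the backbone; train only a short sequence of prompt embeddings."""

    def __init__(self, backbone, embed_dim, prompt_len=16):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad_(False)  # backbone weights never train or travel
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds):  # token_embeds: (batch, seq, embed_dim)
        prompts = self.prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, token_embeds], dim=1))

# Only the prompt tensor is exchanged with the server each round,
# so the model itself (and the raw data) stays on the client:
# payload = wrapper.prompt.detach().clone()
```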
- PRIOR: Personalized Prior for Reactivating the Information Overlooked in Federated Learning [16.344719695572586]
We propose a novel scheme to inject personalized prior knowledge into the global model on each client.
At the heart of our approach is a framework, PFL with Bregman Divergence (pFedBreD).
Our method reaches the state-of-the-art performances on 5 datasets and outperforms other methods by up to 3.5% across 8 benchmarks.
arXiv Detail & Related papers (2023-10-13T15:21:25Z)
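A generic Bregman-regularized local objective of the kind pFedBreD builds on looks as follows; the prior-selection strategy, which is the paper's contribution, is not reproduced here:
```latex
\min_{\theta_i} \; F_i(\theta_i)
  + \mu \, D_{\varphi}\!\left(\theta_i,\, \theta_i^{\mathrm{prior}}\right),
\qquad
D_{\varphi}(x, y) = \varphi(x) - \varphi(y)
  - \langle \nabla\varphi(y),\, x - y \rangle
```
Choosing \varphi(x) = \tfrac{1}{2}\lVert x\rVert^2 reduces the penalty to \tfrac{\mu}{2}\lVert\theta_i - \theta_i^{\mathrm{prior}}\rVert^2, a familiar proximal term; other mirror maps \varphi give different geometries for injecting the personalized prior.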
- Federated Deep Equilibrium Learning: Harnessing Compact Global Representations to Enhance Personalization [23.340237814344377]
Federated Learning (FL) has emerged as a groundbreaking distributed learning paradigm enabling clients to train a global model collaboratively without exchanging data.
We introduce FeDEQ, a novel FL framework that incorporates deep equilibrium learning and consensus optimization to harness compact global data representations for efficient personalization.
We show that FeDEQ matches the performance of state-of-the-art personalized FL methods, while significantly reducing communication size by up to 4 times and memory footprint by 1.5 times during training.
arXiv Detail & Related papers (2023-09-27T13:48:12Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z)
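The interpolation at the core of the model-soup method above is easy to sketch; a single uniform coefficient `alpha` is an illustrative stand-in for FedSoup's selective scheme, which decides what to average:
```python
def soup(local_state, global_state, alpha=0.5):
    """Interpolate local and global checkpoints weight by weight."""
    return {k: alpha * local_state[k] + (1 - alpha) * global_state[k]
            for k in local_state}

# Choosing alpha on a small held-out local set trades personalization
# (alpha near 1) against generalization (alpha near 0), e.g.:
# best = max((0.0, 0.25, 0.5, 0.75, 1.0),
#            key=lambda a: validate(soup(local_sd, global_sd, a)))
```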
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape from original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
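One standard way to realize "locally match the loss landscape" is gradient matching on a small learnable synthetic set; the sketch below uses that stand-in, and FedDM's exact matching objective may differ (all names here are illustrative):
```python
import torch

def distill_client_set(model, loss_fn, real_x, real_y,
                       n_syn=32, steps=200, lr=0.1):
    """Learn a tiny synthetic set whose gradients mimic the real data's.

    Only (syn_x, syn_y) would leave the client; real_x/real_y stay local.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    # Target gradients from the client's real data (computed once).
    g_real = torch.autograd.grad(loss_fn(model(real_x), real_y), params)
    g_real = [g.detach() for g in g_real]

    syn_x = torch.randn(n_syn, *real_x.shape[1:], requires_grad=True)
    syn_y = real_y[torch.randint(len(real_y), (n_syn,))]  # reuse real labels
    opt = torch.optim.Adam([syn_x], lr=lr)
    for _ in range(steps):
        g_syn = torch.autograd.grad(loss_fn(model(syn_x), syn_y), params,
                                    create_graph=True)
        mismatch = sum(((a - b) ** 2).sum() for a, b in zip(g_syn, g_real))
        opt.zero_grad()
        mismatch.backward()
        opt.step()
        model.zero_grad()  # discard stray grads accumulated on the model
    return syn_x.detach(), syn_y
```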
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device and a global model across all devices.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
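A minimal way to realize the local/global split described in the entry above; which parameters stay on-device versus get averaged is an illustrative assumption here:
```python
import torch.nn as nn

class LocalGlobalModel(nn.Module):
    """Per-device local encoder feeding a globally shared head.

    Only global_head parameters are sent for server-side averaging;
    the local_encoder never leaves the device (assumed split).
    """

    def __init__(self, in_dim, rep_dim, out_dim):
        super().__init__()
        self.local_encoder = nn.Sequential(nn.Linear(in_dim, rep_dim), nn.ReLU())
        self.global_head = nn.Linear(rep_dim, out_dim)

    def forward(self, x):
        return self.global_head(self.local_encoder(x))

# payload = model.global_head.state_dict()  # the only part shared per round
```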