A Theoretical Perspective on Differentially Private Federated Multi-task
Learning
- URL: http://arxiv.org/abs/2011.07179v1
- Date: Sat, 14 Nov 2020 00:53:16 GMT
- Title: A Theoretical Perspective on Differentially Private Federated Multi-task
Learning
- Authors: Huiwen Wu and Cen Chen and Li Wang
- Abstract summary: Collaborative learning models need to be developed with respect to both privacy and utility concerns.
We propose a new federated multi-task learning method for effective parameter transfer, with differential privacy to protect gradients at the client level.
We are the first to provide both privacy and utility guarantees for such an algorithm.
- Score: 12.935153199667987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the era of big data, the need to expand the amount of data through data
sharing to improve model performance has become increasingly compelling. As a
result, effective collaborative learning models need to be developed with
respect to both privacy and utility concerns. In this work, we propose a new
federated multi-task learning method for effective parameter transfer with
differential privacy to protect gradients at the client level. Specifically,
the lower layers of the networks are shared across all clients to capture
transferable feature representation, while the top layers of the network are
task-specific for on-client personalization. Our proposed algorithm naturally
resolves the statistical heterogeneity problem in federated networks. We are,
to the best of our knowledge, the first to provide both privacy and utility
guarantees for such a federated algorithm. Convergence is proved for
Lipschitz-smooth objective functions in the non-convex, convex, and strongly
convex settings. Empirical experiments on different datasets demonstrate the
effectiveness of the proposed algorithm and verify the implications of the
theoretical findings.
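The scheme the abstract describes (shared lower layers for transferable
features, task-specific top layers kept on-client, and client-level
differential privacy on the shared updates) can be illustrated compactly.
Below is a minimal NumPy sketch, not the paper's actual algorithm: it assumes
a single linear shared layer and linear per-client heads, and the function
names, clipping bound, and noise scale are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def client_update(shared_W, head_W, X, y, lr=0.1, clip=1.0):
    """One local step: a shared linear layer (reported to the server)
    plus a task-specific linear head (kept on the client)."""
    h = X @ shared_W                  # shared, transferable representation
    err = h @ head_W - y              # per-task prediction error
    n = len(X)
    g_shared = X.T @ (err @ head_W.T) / n
    g_head = h.T @ err / n
    # Clip the shared-layer gradient so each client's influence is bounded;
    # this bound is what calibrates the server-side Gaussian noise.
    g_shared *= min(1.0, clip / (np.linalg.norm(g_shared) + 1e-12))
    head_W = head_W - lr * g_head     # personalization: head never leaves
    return g_shared, head_W

def server_round(shared_W, heads, client_data, lr=0.1, clip=1.0, sigma=1.0):
    """Average clipped shared-layer gradients across clients and add
    Gaussian noise scaled to the per-client clipping bound (sketch only)."""
    grads = []
    for k, (X, y) in enumerate(client_data):
        g, heads[k] = client_update(shared_W, heads[k], X, y, lr, clip)
        grads.append(g)
    m = len(client_data)
    noisy_mean = np.mean(grads, axis=0) + rng.normal(
        0.0, sigma * clip / m, size=shared_W.shape)
    return shared_W - lr * noisy_mean, heads

# Toy run: 3 clients with different tasks but a common 5 -> 3 representation.
d, r = 5, 3
shared_W = rng.normal(size=(d, r))
heads = [rng.normal(size=(r, 1)) for _ in range(3)]
data = [(rng.normal(size=(32, d)), rng.normal(size=(32, 1))) for _ in range(3)]
for _ in range(100):
    shared_W, heads = server_round(shared_W, heads, data)
```

Only the clipped shared-layer gradient ever leaves a client, and the Gaussian
noise is calibrated to that clipping bound; the personalized heads stay local,
which is the on-client personalization the abstract refers to.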
Related papers
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel primal-dual algorithm with model sparsification and convergence guarantees for non-convex and non-smooth FL problems.
Its unique properties and theoretical analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- UNIDEAL: Curriculum Knowledge Distillation Federated Learning [17.817181326740698]
Federated Learning (FL) has emerged as a promising approach to enable collaborative learning among multiple clients.
In this paper, we present UNIDEAL, a novel FL algorithm specifically designed to tackle the challenges of cross-domain scenarios.
Our results demonstrate that UNI achieves superior performance in terms of both model accuracy and communication efficiency.
arXiv Detail & Related papers (2023-09-16T11:30:29Z)
- Regularization Through Simultaneous Learning: A Case Study on Plant Classification [0.0]
This paper introduces Simultaneous Learning, a regularization approach drawing on principles of Transfer Learning and Multi-task Learning.
We leverage auxiliary datasets alongside the target dataset, UFOP-HVD, to facilitate simultaneous classification guided by a customized loss function.
Remarkably, our approach demonstrates superior performance over models without regularization.
arXiv Detail & Related papers (2023-05-22T19:44:57Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck (a toy sketch of over-the-air aggregation appears after this list).
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs [59.96266198512243]
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other (a toy sketch of this gradient-similarity graph also appears after this list).
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
arXiv Detail & Related papers (2022-12-21T18:58:24Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z)
- Improving Federated Relational Data Modeling via Basis Alignment and Weight Penalty [18.096788806121754]
Federated learning (FL) has attracted increasing attention in recent years.
We present a modified version of a graph neural network algorithm that performs federated modeling over Knowledge Graphs (KGs).
We propose a novel optimization algorithm, named FedAlign, with 1) optimal transportation (OT) for on-client personalization and 2) a weight constraint to speed up convergence.
Empirical results show that our proposed method outperforms the state-of-the-art FL methods, such as FedAVG and FedProx, with better convergence.
arXiv Detail & Related papers (2020-11-23T12:52:18Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
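For the over-the-air computation entry above, the core idea is that when every
client transmits its model update as an analog signal simultaneously, the
wireless channel itself performs the summation, so aggregation costs one
transmission slot regardless of the number of clients. A minimal sketch of that
superposition step, modeling the channel as an ideal sum plus Gaussian receiver
noise (fading and power control are omitted, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def over_the_air_aggregate(updates, noise_std=0.01):
    """Analog over-the-air aggregation: simultaneous transmissions
    superpose on the channel, so the receiver observes their sum plus
    additive receiver noise, then rescales it to a mean."""
    received = np.sum(updates, axis=0)             # channel adds the signals
    received += rng.normal(0.0, noise_std, size=received.shape)
    return received / len(updates)                 # server rescales to average

# Toy check: the noisy over-the-air mean tracks the exact mean.
updates = [rng.normal(size=(4, 4)) for _ in range(10)]
print(np.allclose(over_the_air_aggregate(updates),
                  np.mean(updates, axis=0), atol=0.05))
```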
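And for the dynamic-communication-graph entry, the summary says task
correlations are computed from gradients and the graph is rewired accordingly.
The paper's exact correlation estimator is not given in the summary, so this
sketch assumes plain cosine similarity with a threshold `tau` (an assumption),
followed by simple neighborhood averaging:

```python
import numpy as np

def rebuild_graph(grads, tau=0.0):
    """Connect clients whose current gradients point in similar directions;
    negatively correlated tasks (similarity below tau) stay isolated."""
    G = np.stack([g / (np.linalg.norm(g) + 1e-12) for g in grads])
    sim = G @ G.T                        # pairwise cosine similarity
    adj = sim > tau
    np.fill_diagonal(adj, False)         # no self-edges
    return adj

def gossip_step(params, adj):
    """Each client averages its parameters with its current neighbors."""
    return [np.mean([params[i]] + [params[j] for j in np.flatnonzero(adj[i])],
                    axis=0)
            for i in range(len(params))]

# Toy run: clients 0 and 1 share a task direction, client 2 opposes it.
grads = [np.array([1.0, 0.2]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
adj = rebuild_graph(grads, tau=0.5)      # links only 0 <-> 1
params = [np.array([0.0, 0.0]), np.array([2.0, 2.0]), np.array([9.0, 9.0])]
print(gossip_step(params, adj))          # 0 and 1 move together; 2 unchanged
```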