Private Multi-Task Learning: Formulation and Applications to Federated
Learning
- URL: http://arxiv.org/abs/2108.12978v3
- Date: Tue, 17 Oct 2023 08:39:07 GMT
- Title: Private Multi-Task Learning: Formulation and Applications to Federated
Learning
- Authors: Shengyuan Hu, Zhiwei Steven Wu, Virginia Smith
- Abstract summary: Multi-task learning is relevant for privacy-sensitive applications in areas such as healthcare, finance, and IoT computing.
We formalize notions of client-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy for mechanism design and distributed optimization.
We then propose an algorithm for mean-regularized MTL, an objective commonly used for applications in personalized federated learning.
- Score: 44.60519521554582
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many problems in machine learning rely on multi-task learning (MTL), in which
the goal is to solve multiple related machine learning tasks simultaneously.
MTL is particularly relevant for privacy-sensitive applications in areas such
as healthcare, finance, and IoT computing, where sensitive data from multiple,
varied sources are shared for the purpose of learning. In this work, we
formalize notions of client-level privacy for MTL via joint differential
privacy (JDP), a relaxation of differential privacy for mechanism design and
distributed optimization. We then propose an algorithm for mean-regularized
MTL, an objective commonly used for applications in personalized federated
learning, subject to JDP. We analyze our objective and solver, providing
certifiable guarantees on both privacy and utility. Empirically, we find that
our method provides improved privacy/utility trade-offs relative to global
baselines across common federated learning benchmarks.
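To make the objective concrete, here is a minimal sketch assuming the standard mean-regularized MTL form sum_k f_k(w_k) + (lambda/2) * sum_k ||w_k - w_bar||^2, with the mean released via a noisy "billboard" in the spirit of JDP. The paper's actual solver, clipping scheme, and privacy accounting may differ, and `local_grad` is a hypothetical callback supplying each client's gradient.

```python
import numpy as np

def private_mean_regularized_mtl(local_grad, num_clients, dim, rounds=100,
                                 lr=0.1, lam=1.0, clip=1.0, noise_mult=1.0,
                                 seed=0):
    """Sketch: each round, release a noisy mean of (clipped) client models,
    then let every client descend on f_k(w_k) + (lam/2)||w_k - w_bar||^2."""
    rng = np.random.default_rng(seed)
    W = np.zeros((num_clients, dim))
    for _ in range(rounds):
        # Clip each client's model to bound the sensitivity of the mean.
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        clipped = W * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
        # Gaussian-mechanism release of the mean (the "billboard").
        w_bar = clipped.mean(axis=0) + rng.normal(
            0.0, noise_mult * clip / num_clients, size=dim)
        # Each client updates toward its own loss and the released mean.
        for k in range(num_clients):
            g = local_grad(k, W[k])  # gradient of f_k at w_k (user-supplied)
            W[k] -= lr * (g + lam * (W[k] - w_bar))
    return W
```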
Related papers
- Differentially Private Active Learning: Balancing Effective Data Selection and Privacy [11.716423801223776]
We introduce differentially private active learning (DP-AL) for standard learning settings.
We demonstrate that naively integrating DP-SGD training into AL presents substantial challenges in privacy budget allocation and data utilization.
Our experiments on vision and natural language processing tasks show that DP-AL can improve performance for specific datasets and model architectures.
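For context, DP-SGD (the training procedure the summary refers to) clips each per-example gradient and adds Gaussian noise to the average. The sketch below illustrates that generic step only, not DP-AL's specific budget-allocation strategy.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD step: clip every per-example gradient to norm `clip`,
    average, and perturb with Gaussian noise of scale noise_mult * clip."""
    rng = rng or np.random.default_rng(0)
    n = len(per_example_grads)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_mult * clip, size=params.shape)) / n
    return params - lr * noisy_mean
```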
arXiv Detail & Related papers (2024-10-01T09:34:06Z) - Privacy-aware Berrut Approximated Coded Computing for Federated Learning [1.2084539012992408]
We propose a solution to guarantee privacy in Federated Learning schemes.
Our proposal is based on the Berrut Approximated Coded Computing, adapted to a Secret Sharing configuration.
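Berrut Approximated Coded Computing builds on Berrut's rational interpolant; the sketch below shows only that interpolation primitive, assuming scalar function values, and does not reproduce the paper's secret-sharing protocol.

```python
import numpy as np

def berrut_interpolate(x_nodes, f_values, x):
    """Berrut's rational interpolant with weights w_i = (-1)^i:
    r(x) = sum_i w_i * f_i / (x - x_i)  /  sum_i w_i / (x - x_i).
    In coded computing it approximately recovers a computation from a
    subset of worker results evaluated at the nodes x_i."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    f_values = np.asarray(f_values, dtype=float)
    w = (-1.0) ** np.arange(len(x_nodes))
    diff = x - x_nodes
    # Exact hit on a node: return the stored value directly.
    hit = np.isclose(diff, 0.0)
    if hit.any():
        return float(f_values[np.argmax(hit)])
    terms = w / diff
    return float(terms @ f_values / terms.sum())
```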
arXiv Detail & Related papers (2024-05-02T20:03:13Z) - Federated Multi-Objective Learning [22.875284692358683]
We propose a new federated multi-objective learning (FMOL) framework with multiple clients.
Our FMOL framework allows a different set of objective functions across different clients to support a wide range of applications.
For this FMOL framework, we propose two new federated multi-objective optimization (FMOO) algorithms: federated multi-gradient descent averaging (FMGDA) and federated stochastic multi-gradient descent averaging (FSMGDA).
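As a rough illustration of the multi-gradient machinery behind such algorithms, the sketch below solves the two-objective MGDA subproblem in closed form and applies one federated round; the paper's actual algorithms, stochastic variants, and convergence conditions differ.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Min-norm point in the convex hull of two gradients (the MGDA
    subproblem): d = a*g1 + (1-a)*g2 with a in [0, 1]."""
    diff = g1 - g2
    denom = float(diff @ diff)
    a = 0.5 if denom == 0.0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return a * g1 + (1.0 - a) * g2

def federated_mgda_round(w, client_obj_grads, lr=0.1):
    """One illustrative round: average each objective's gradient across
    clients, then descend along the common direction (two objectives)."""
    g1 = np.mean([g[0] for g in client_obj_grads], axis=0)
    g2 = np.mean([g[1] for g in client_obj_grads], axis=0)
    return w - lr * common_descent_direction(g1, g2)
```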
arXiv Detail & Related papers (2023-10-15T15:45:51Z) - FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large
Language Models in Federated Learning [70.38817963253034]
This paper first discusses the challenges of federated fine-tuning of LLMs, and introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
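As a hedged illustration of parameter-efficient fine-tuning in FL (not FS-LLM's actual API), the sketch below averages only low-rank adapter factors across clients, so the frozen base model never needs to be communicated.

```python
import numpy as np

def fedavg_lora_round(A_list, B_list, weights=None):
    """Average only the low-rank adapter factors (delta W ~ B @ A).
    Note: averaging factors is a common simplification -- mean(B) @ mean(A)
    is not identical to mean(B @ A)."""
    if weights is None:
        weights = [1.0 / len(A_list)] * len(A_list)
    A_avg = sum(w * A for w, A in zip(weights, A_list))
    B_avg = sum(w * B for w, B in zip(weights, B_list))
    return A_avg, B_avg  # broadcast back to all clients for the next round
```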
arXiv Detail & Related papers (2023-09-01T09:40:36Z) - Collaborating Heterogeneous Natural Language Processing Tasks via
Federated Learning [55.99444047920231]
We conduct extensive experiments on six widely-used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks.
The proposed ATC framework achieves significant improvements compared with various baseline methods.
arXiv Detail & Related papers (2022-12-12T09:27:50Z) - Federated Learning and Meta Learning: Approaches, Applications, and
Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - Multi-Task and Transfer Learning for Federated Learning Applications [5.224306534441244]
Federated learning enables applications to benefit from the distributed and private datasets of a large number of potential data-holding clients.
We propose to train a deep neural network model with more generalized layers closer to the input and more personalized layers closer to the output.
We provide simulation results to highlight particular scenarios in which meta-learning-based federated learning proves to be useful.
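A minimal sketch of the layer split described above, assuming a dict-of-arrays model in which only the input-side keys are federated while each client's output-side head stays local:

```python
import numpy as np

def personalized_fedavg_round(client_params, shared_keys):
    """Average only the shared (input-side) layers across clients;
    personalized (output-side) layers never leave the client."""
    avg = {k: np.mean([p[k] for p in client_params], axis=0)
           for k in shared_keys}
    for p in client_params:
        for k in shared_keys:
            p[k] = avg[k].copy()
    return client_params

# Toy usage: 'base' is federated, 'head' stays personal to each client.
clients = [{'base': np.full((3, 3), float(i)), 'head': np.full(2, float(i))}
           for i in range(4)]
clients = personalized_fedavg_round(clients, shared_keys=['base'])
```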
arXiv Detail & Related papers (2022-07-17T11:48:11Z) - On Privacy and Personalization in Cross-Silo Federated Learning [39.031422430404405]
In this work, we consider the application of differential privacy in cross-silo federated learning (FL).
We show that mean-regularized multi-task learning (MR-MTL) is a strong baseline for cross-silo FL.
We provide a thorough empirical study of competing methods as well as a theoretical characterization of MR-MTL for a mean estimation problem.
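For the mean estimation problem, solving min over theta of sum_k (1/2)||theta_k - xbar_k||^2 + (lambda/2)||theta_k - theta_bar||^2 (with theta_bar the mean of the theta_k) gives a closed-form interpolation between each silo's local mean and the global mean. The sketch below shows that non-private characterization; the paper's analysis additionally accounts for privacy noise.

```python
import numpy as np

def mr_mtl_mean_estimates(local_means, lam):
    """Closed-form MR-MTL solution for mean estimation: lam=0 recovers
    purely local means; lam -> infinity recovers the global mean."""
    local_means = np.asarray(local_means, dtype=float)
    global_mean = local_means.mean(axis=0)
    return (local_means + lam * global_mean) / (1.0 + lam)
```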
arXiv Detail & Related papers (2022-06-16T03:26:48Z) - Personalization Improves Privacy-Accuracy Tradeoffs in Federated
Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
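A minimal sketch of the coordination idea in its simplest form, assuming a convex combination of a purely local model with a differentially private global model; the paper's actual schemes are more elaborate.

```python
import numpy as np

def personalized_private_model(w_local, w_global_dp, alpha):
    """Convex combination of a local model and a DP-trained global model;
    alpha=1 is fully local, alpha=0 is fully global, and intermediate
    values trade accuracy against privacy."""
    return alpha * np.asarray(w_local) + (1.0 - alpha) * np.asarray(w_global_dp)
```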
arXiv Detail & Related papers (2022-02-10T20:44:44Z) - Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.