Optimizing Privacy, Utility and Efficiency in Constrained
Multi-Objective Federated Learning
- URL: http://arxiv.org/abs/2305.00312v4
- Date: Tue, 9 May 2023 14:29:09 GMT
- Title: Optimizing Privacy, Utility and Efficiency in Constrained
Multi-Objective Federated Learning
- Authors: Yan Kang, Hanlin Gu, Xingxing Tang, Yuanqin He, Yuzhu Zhang, Jinnan
He, Yuxing Han, Lixin Fan, Kai Chen, Qiang Yang
- Abstract summary: We develop two improved CMOFL algorithms based on NSGA-II and PSL.
We design specific measurements of privacy leakage, utility loss, and training cost for three privacy protection mechanisms.
Empirical experiments conducted under each of the three protection mechanisms demonstrate the effectiveness of our proposed algorithms.
- Score: 20.627157142499378
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventionally, federated learning aims to optimize a single objective,
typically the utility. However, for a federated learning system to be
trustworthy, it needs to simultaneously satisfy multiple/many objectives, such
as maximizing model performance, minimizing privacy leakage and training cost,
and being robust to malicious attacks. Multi-Objective Optimization (MOO)
aiming to optimize multiple conflicting objectives at the same time is quite
suitable for solving the optimization problem of Trustworthy Federated Learning
(TFL). In this paper, we unify MOO and TFL by formulating the problem of
constrained multi-objective federated learning (CMOFL). Under this formulation,
existing MOO algorithms can be adapted to TFL straightforwardly. Different from
existing CMOFL works focusing on utility, efficiency, fairness, and robustness,
we consider optimizing privacy leakage along with utility loss and training
cost, the three primary objectives of a TFL system. We develop two improved
CMOFL algorithms based on NSGA-II and PSL, respectively, for effectively and
efficiently finding Pareto optimal solutions, and we provide theoretical
analysis on their convergence. We design specific measurements of privacy
leakage, utility loss, and training cost for three privacy protection
mechanisms: Randomization, BatchCrypt (an efficient version of homomorphic
encryption), and Sparsification. Empirical experiments conducted under each of
the three protection mechanisms demonstrate the effectiveness of our proposed
algorithms.
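The abstract frames TFL as finding Pareto optimal trade-offs among privacy leakage, utility loss, and training cost, all to be minimized. A minimal sketch of the underlying Pareto-dominance filtering (the core building block of NSGA-II-style algorithms) is shown below; the candidate tuples are hypothetical illustrations, not results from the paper.

```python
# Sketch of Pareto-dominance filtering over the three TFL objectives
# (privacy_leakage, utility_loss, training_cost), all minimized.
# Candidate values below are hypothetical, not from the paper.

def dominates(a, b):
    """True if a is at least as good as b on every objective and
    strictly better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only solutions that no other solution dominates."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# (privacy_leakage, utility_loss, training_cost)
candidates = [(0.3, 0.10, 5.0), (0.2, 0.15, 6.0),
              (0.4, 0.12, 5.5), (0.2, 0.10, 7.0)]
front = pareto_front(candidates)
```

Here `(0.4, 0.12, 5.5)` is dominated by `(0.3, 0.10, 5.0)` and drops out, while the remaining three candidates each win on at least one objective and survive; full CMOFL algorithms such as NSGA-II add non-dominated sorting, crowding distance, and constraint handling on top of this check.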
Related papers
- Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System [75.25394449773052]
Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving.
Yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods.
We present Optima, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness.
arXiv Detail & Related papers (2024-10-10T17:00:06Z)
- Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning [67.95280175998792]
A novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association.
We employ inverse RL (IRL) to automatically learn reward functions without manual tuning.
We show that the proposed MA-AL method outperforms traditional RL approaches, achieving a 14.6% improvement in convergence and reward value.
arXiv Detail & Related papers (2024-09-27T13:05:02Z)
- Decoding-Time Language Model Alignment with Multiple Objectives [116.42095026960598]
Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives.
Here, we propose multi-objective decoding (MOD), a decoding-time algorithm that outputs the next token from a linear combination of predictions.
We show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method.
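The MOD summary above describes choosing the next token from a linear combination of predictions. A simplified sketch of that idea, combining logits from two hypothetical objective-specific models, is shown below; the actual MOD algorithm derives its combination rule with optimality guarantees, which this toy version does not attempt.

```python
import numpy as np

def mod_next_token(logits_per_objective, weights):
    """Combine per-objective next-token logits linearly, then pick
    the argmax token (greedy decoding for simplicity)."""
    combined = sum(w * l for w, l in zip(weights, logits_per_objective))
    return int(np.argmax(combined))

# Two hypothetical objective-specific models over a 4-token vocabulary.
logits_a = np.array([2.0, 0.5, 0.1, 0.0])  # e.g. tuned for objective A
logits_b = np.array([0.0, 0.5, 3.0, 0.0])  # e.g. tuned for objective B
token = mod_next_token([logits_a, logits_b], weights=[0.5, 0.5])
```

Shifting the weights toward one objective steers decoding toward that model's preferred tokens, which is the adaptability the abstract contrasts with single-reward optimization.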
arXiv Detail & Related papers (2024-06-27T02:46:30Z)
- A Theoretical Analysis of Efficiency Constrained Utility-Privacy Bi-Objective Optimization in Federated Learning [23.563789510998333]
Federated learning (FL) enables multiple clients to collaboratively learn a shared model without sharing their individual data.
Differential privacy has emerged as a prevalent technique in FL, safeguarding the privacy of individual user data while impacting utility and training efficiency.
This paper systematically formulates an efficiency-constrained utility-privacy bi-objective optimization problem in DPFL.
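The DPFL entry above notes that differential privacy safeguards user data while costing utility and efficiency. A minimal sketch of the standard Gaussian-mechanism step behind that trade-off, clipping a client update and adding calibrated noise, follows; parameter values are illustrative assumptions, not ones from the paper.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip a client's model update to a bounded L2 norm, then add
    Gaussian noise scaled to the clip norm. Larger noise_multiplier
    means stronger privacy but larger utility loss."""
    if rng is None:
        rng = np.random.default_rng(0)
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

u = np.array([3.0, 4.0])          # hypothetical client update, L2 norm 5
noisy = privatize_update(u)       # clipped to norm 1, then perturbed
```

The bi-objective formulation in the paper can be read as choosing `clip_norm` and `noise_multiplier` to balance the privacy gained against the utility lost, subject to an efficiency constraint.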
arXiv Detail & Related papers (2023-12-27T12:37:55Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- A Meta-learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning [27.909662318838873]
Trustworthy Federated Learning (TFL) typically leverages protection mechanisms to guarantee privacy.
We propose a framework that formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction.
arXiv Detail & Related papers (2023-05-28T15:01:18Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Probably Approximately Correct Federated Learning [20.85915650297227]
Federated learning (FL) is a new distributed learning paradigm with privacy, utility, and efficiency as its primary pillars.
Existing research indicates that it is unlikely to simultaneously attain infinitesimal privacy leakage, utility loss, and efficiency.
How to find an optimal trade-off solution is the key consideration when designing the FL algorithm.
arXiv Detail & Related papers (2023-04-10T15:12:34Z)
- Trading Off Privacy, Utility and Efficiency in Federated Learning [22.53326117450263]
We formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction.
We analyze the lower bounds for the privacy leakage, utility loss and efficiency reduction for several widely-adopted protection mechanisms.
arXiv Detail & Related papers (2022-09-01T05:20:04Z)
- Leveraging Trust for Joint Multi-Objective and Multi-Fidelity Optimization [0.0]
This paper investigates a novel approach to Bayesian multi-objective and multi-fidelity (MOMF) optimization.
We suggest the innovative use of a trust metric to support simultaneous optimization of multiple objectives and data sources.
Our methods offer broad applicability in solving simulation problems in fields such as plasma physics and fluid dynamics.
arXiv Detail & Related papers (2021-12-27T20:55:26Z) - Optimization-Inspired Learning with Architecture Augmentations and
Control Mechanisms for Low-Level Vision [74.9260745577362]
This paper proposes a unified optimization-inspired learning framework to aggregate Generative, Discriminative, and Corrective (GDC) principles.
We construct three propagative modules to effectively solve the optimization models with flexible combinations.
Experiments across varied low-level vision tasks validate the efficacy and adaptability of GDC.
arXiv Detail & Related papers (2020-12-10T03:24:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.