Federated Coordinate Descent for Privacy-Preserving Multiparty Linear Regression
- URL: http://arxiv.org/abs/2209.07702v2
- Date: Mon, 19 Sep 2022 08:28:36 GMT
- Title: Federated Coordinate Descent for Privacy-Preserving Multiparty Linear Regression
- Authors: Xinlin Leng, Chenxu Li, Hongtao Wang
- Abstract summary: We present Federated Coordinate Descent (FCD), a new distributed scheme that securely solves regression problems with L1 regularization under multiparty scenarios.
Specifically, through secure aggregation and added perturbations, our scheme guarantees that: (1) no local information is leaked to other parties, and (2) global model parameters are not exposed to cloud servers.
We show that the FCD scheme fills the gap in multiparty secure Coordinate Descent methods and is applicable to general linear regressions, including linear, ridge, and lasso regression.
- Score: 0.5049057348282932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed privacy-preserving regression schemes have been developed and
extended in various fields, where multiple parties collaboratively and privately run
optimization algorithms, e.g., Gradient Descent, to learn a set of optimal
parameters. However, traditional Gradient-Descent-based methods fail to solve
problems whose objective functions contain L1 regularization, such as Lasso
regression. In this paper, we present Federated Coordinate Descent (FCD), a new
distributed scheme that addresses this issue securely under multiparty
scenarios. Specifically, through secure aggregation and added perturbations,
our scheme guarantees that: (1) no local information is leaked to other
parties, and (2) global model parameters are not exposed to cloud servers. The
added perturbations can eventually be eliminated by each party to derive a
global model with high performance. We show that the FCD scheme fills the gap
in multiparty secure Coordinate Descent methods and is applicable to general
linear regressions, including linear, ridge, and lasso regression. Theoretical
security analysis and experimental results demonstrate that FCD can be
performed effectively and efficiently, achieving MAE as low as that of
centralized methods on the three types of linear regression tasks over
real-world UCI datasets.
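For concreteness, here is a minimal centralized sketch of the coordinate descent update that FCD distributes across parties. It is a hypothetical illustration, not the authors' code: the function names, the `lam` parameter, and the cyclic update order are assumptions. The soft-thresholding step is what lets Coordinate Descent handle the L1 penalty that plain Gradient Descent cannot.

```python
# Hypothetical, centralized sketch of cyclic coordinate descent for lasso:
#   min_w 0.5 * ||y - X w||^2 + lam * ||w||_1
# Setting lam = 0 recovers ordinary linear regression; ridge swaps the
# soft-threshold for a simple rescaling. Names and defaults are assumptions.
import numpy as np

def soft_threshold(z, gamma):
    # Closed-form solution of the one-dimensional lasso subproblem.
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_coordinate_descent(X, y, lam=0.1, n_iters=100):
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # precomputed X_j^T X_j per coordinate
    for _ in range(n_iters):
        for j in range(d):
            # Partial residual that excludes coordinate j's contribution.
            r_j = y - X @ w + X[:, j] * w[j]
            # X_j^T r_j: in a federated setting, this inner product is the
            # kind of quantity each party would contribute to aggregation.
            rho = X[:, j] @ r_j
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w
```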
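The "added perturbations" that "can eventually be eliminated by each party" are in the spirit of additive masking for secure aggregation. Below is a hedged, self-contained sketch of pairwise masking; the mask distribution and exchange mechanism are assumptions, not the paper's protocol. Each party perturbs its contribution with random masks that cancel exactly in the sum, so the aggregator learns only the total.

```python
# Hypothetical sketch of pairwise additive masking for secure aggregation;
# the mask distribution and key/mask exchange are assumptions, not the
# paper's protocol. Masks cancel in the sum, so only the aggregate leaks.
import numpy as np

rng = np.random.default_rng(0)

def masked_shares(local_values):
    k = len(local_values)
    shape = np.shape(local_values[0])
    # One shared random mask per unordered pair of parties (i < j).
    masks = {(i, j): rng.normal(size=shape)
             for i in range(k) for j in range(i + 1, k)}
    shares = []
    for i, v in enumerate(local_values):
        share = np.array(v, dtype=float)
        for j in range(k):
            if i < j:
                share += masks[(i, j)]   # party i adds the pair's mask
            elif j < i:
                share -= masks[(j, i)]   # party j subtracts the same mask
        shares.append(share)
    return shares

# The aggregator sees only perturbed shares, yet their sum is the true total.
locals_ = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
assert np.allclose(sum(masked_shares(locals_)), sum(locals_))
```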
Related papers
- A-FedPD: Aligning Dual-Drift is All Federated Primal-Dual Learning Needs [57.35402286842029]
We propose a novel Aligned Federated Primal Dual (A-FedPD) method, which constructs virtual dual updates to align the global consensus and local dual variables.
We provide a comprehensive analysis of the A-FedPD method's efficiency for protracted unparticipated local clients.
arXiv Detail & Related papers (2024-09-27T17:00:32Z) - Federated Smoothing Proximal Gradient for Quantile Regression with Non-Convex Penalties [3.269165283595478]
Distributed sensors in the internet-of-things (IoT) generate vast amounts of sparse data.
We propose a federated smoothing proximal gradient (FSPG) algorithm that integrates a smoothing mechanism with the proximal gradient framework, thereby enhancing both precision and computational speed.
arXiv Detail & Related papers (2024-08-10T21:50:19Z) - LFFR: Logistic Function For (multi-output) Regression [0.0]
We build upon previous work on privacy-preserving regression to address multi-output regression problems.
We adapt our novel LFFR algorithm, initially designed for single-output logistic regression, to handle multiple outputs.
Evaluations on multiple real-world datasets demonstrate the effectiveness of our multi-output LFFR algorithm.
arXiv Detail & Related papers (2024-07-30T20:52:38Z) - Joint Demonstration and Preference Learning Improves Policy Alignment with Human Feedback [58.049113055986375]
We develop a single stage approach named Alignment with Integrated Human Feedback (AIHF) to train reward models and the policy.
The proposed approach admits a suite of efficient algorithms, which can easily reduce to, and leverage, popular alignment algorithms.
We demonstrate the efficiency of the proposed solutions with extensive experiments involving alignment problems in LLMs and robotic control problems in MuJoCo.
arXiv Detail & Related papers (2024-06-11T01:20:53Z) - Adaptive debiased SGD in high-dimensional GLMs with streaming data [4.704144189806667]
We introduce a novel approach to online inference in high-dimensional generalized linear models.
Our method operates in a single-pass mode, significantly reducing both time and space complexity.
We demonstrate that our method, termed the Approximated Debiased Lasso (ADL), not only mitigates the need for the bounded individual probability condition but also significantly improves numerical performance.
arXiv Detail & Related papers (2024-05-28T15:36:48Z) - GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z) - Provable Offline Preference-Based Reinforcement Learning [95.00042541409901]
We investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback.
We consider the general reward setting where the reward can be defined over the whole trajectory.
We introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability.
arXiv Detail & Related papers (2023-05-24T07:11:26Z) - Offline Policy Optimization in RL with Variance Regularizaton [142.87345258222942]
We propose variance regularization for offline RL algorithms, using stationary distribution corrections.
We show that by using Fenchel duality, we can avoid double sampling issues for computing the gradient of the variance regularizer.
The proposed algorithm for offline variance regularization (OVAR) can be used to augment any existing offline policy optimization algorithms.
arXiv Detail & Related papers (2022-12-29T18:25:01Z) - Communication-Efficient Distributed Quantile Regression with Optimal
Statistical Guarantees [2.064612766965483]
We address the problem of how to achieve optimal inference in distributed quantile regression without stringent scaling conditions.
The difficulties are resolved through a double-smoothing approach that is applied to the local (at each data source) and global objective functions.
Despite the reliance on a delicate combination of local and global smoothing parameters, the quantile regression model is fully parametric.
arXiv Detail & Related papers (2021-10-25T17:09:59Z) - Combining Deep Learning and Optimization for Security-Constrained
Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling of the Automatic Primary Response (APR) of generators within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z) - Robust Locality-Aware Regression for Labeled Data Classification [5.432221650286726]
We propose a new discriminant feature extraction framework, namely Robust Locality-Aware Regression (RLAR).
In our model, we introduce a retargeted regression to perform the marginal representation learning adaptively instead of using the general average inter-class margin.
To alleviate the disturbance of outliers and prevent overfitting, we measure the regression term and locality-aware term together with the regularization term by the L2,1 norm.
arXiv Detail & Related papers (2020-06-15T11:36:59Z)