Privacy-Preserving Logistic Regression Training with A Faster Gradient Variant
- URL: http://arxiv.org/abs/2201.10838v6
- Date: Fri, 7 Jun 2024 03:19:17 GMT
- Title: Privacy-Preserving Logistic Regression Training with A Faster Gradient Variant
- Authors: John Chiang
- Abstract summary: We propose a faster gradient variant called $\texttt{quadratic gradient}$ for privacy-preserving logistic regression training.
Experiments show that the enhanced methods achieve state-of-the-art convergence speed.
There is a promising chance that $\texttt{quadratic gradient}$ could be used to enhance other first-order gradient methods for general numerical optimization problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logistic regression training over encrypted data has been an attractive idea for addressing security concerns for years. In this paper, we propose a faster gradient variant called $\texttt{quadratic gradient}$ for privacy-preserving logistic regression training. The core of $\texttt{quadratic gradient}$ can be seen as an extension of the simplified fixed Hessian. We enhance Nesterov's accelerated gradient (NAG) and the Adaptive Gradient Algorithm (Adagrad) respectively with $\texttt{quadratic gradient}$ and evaluate the enhanced algorithms on several datasets, including the gene dataset provided by the 2017 iDASH competition. Experiments show that the enhanced methods achieve state-of-the-art convergence speed compared to the raw first-order gradient methods. We then adopt the enhanced NAG method to implement homomorphic logistic regression training, obtaining a comparable result in only $3$ iterations. There is a promising chance that $\texttt{quadratic gradient}$ could be used to enhance other first-order gradient methods for general numerical optimization problems.
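To make the abstract's idea concrete, the following is a minimal plaintext sketch of NAG driven by the $\texttt{quadratic gradient}$. It assumes binary labels $y \in \{0,1\}$, a bias column already appended to $X$, and the standard logistic-loss bound $\frac{1}{4}X^{T}X$ as the simplified fixed Hessian; all function and variable names are illustrative, not the paper's code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nag_quadratic_gradient(X, y, iters=3, eps=1e-8):
    """Sketch: NAG enhanced with the quadratic gradient (illustrative names)."""
    n, d = X.shape
    # Simplified fixed Hessian bound for the logistic loss: H = (1/4) X^T X.
    H = 0.25 * X.T @ X
    # Diagonal preconditioner from row sums of |H|; eps guards against division by zero.
    B = 1.0 / (eps + np.abs(H).sum(axis=1))

    w = np.zeros(d)      # current iterate
    v = w.copy()         # NAG look-ahead point
    gamma_prev = 1.0     # standard Nesterov momentum sequence
    for _ in range(iters):
        g = X.T @ (y - sigmoid(X @ v))   # ascent gradient of the log-likelihood
        G = B * g                        # quadratic gradient: elementwise preconditioning
        w_next = v + G                   # unit step; the paper tunes a rate close to 1
        gamma = (1.0 + np.sqrt(1.0 + 4.0 * gamma_prev ** 2)) / 2.0
        v = w_next + ((gamma_prev - 1.0) / gamma) * (w_next - w)
        w, gamma_prev = w_next, gamma
    return w
```

Note that $B$ is computed once up front from the data, so each NAG iteration costs no more than a plain gradient step; this is presumably what lets the encrypted training converge in as few as $3$ iterations.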
Related papers
- Privacy-Preserving Logistic Regression Training on Large Datasets [0.0]
We propose an efficient algorithm for logistic regression training on large encrypted data using Homomorphic Encryption (HE).
We also implement the full-batch version of the method for datasets so large that they must be encrypted in the mini-batch manner.
arXiv Detail & Related papers (2024-06-19T05:19:20Z) - Convergence Analysis of Adaptive Gradient Methods under Refined Smoothness and Noise Assumptions [18.47705532817026]
Adaptive gradient methods are arguably the most successful optimization algorithms for neural network training.
We show that adaptive gradient methods can potentially shave a factor of $d$ from the convergence rate under favorable $\ell_\infty$-geometry.
arXiv Detail & Related papers (2024-06-07T02:55:57Z) - ELRA: Exponential learning rate adaption gradient descent optimization method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyperparameter-free) gradient-based adaptation method.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly.
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - Multinomial Logistic Regression Algorithms via Quadratic Gradient [0.0]
We propose an enhanced Adaptive Gradient Algorithm (Adagrad) that can accelerate the original Adagrad method.
We test the enhanced NAG method and the enhanced Adagrad method on several multiclass datasets.
arXiv Detail & Related papers (2022-08-14T11:00:27Z) - Gradient Correction beyond Gradient Descent [63.33439072360198]
Gradient correction is apparently the most crucial aspect of neural network training.
We introduce a framework ($\textbf{GCGD}$) to perform gradient correction.
Experiment results show that our gradient correction framework can effectively improve the gradient quality to reduce training epochs by $\sim 20\%$ and also improve the network performance.
arXiv Detail & Related papers (2022-03-16T01:42:25Z) - Restarted Nonconvex Accelerated Gradient Descent: No More Polylogarithmic Factor in the $O(\epsilon^{-7/4})$ Complexity [70.65867695317633]
We propose two simple accelerated gradient methods: restarted accelerated gradient descent (AGD) and the restarted heavy-ball (HB) method.
We establish that our methods achieve an $O(\epsilon^{-7/4})$ complexity in the number of gradient evaluations.
Our algorithms are simple in the sense that they only consist of Nesterov's classical AGD or Polyak's HB together with a restart mechanism.
arXiv Detail & Related papers (2022-01-27T10:04:04Z) - Adapting Stepsizes by Momentumized Gradients Improves Optimization and Generalization [89.66571637204012]
$\textsc{AdaMomentum}$ exhibits comparable performance to SGD on vision tasks, and achieves state-of-the-art results consistently on other tasks including language processing.
arXiv Detail & Related papers (2021-06-22T03:13:23Z) - Large Scale Private Learning via Low-rank Reparametrization [77.38947817228656]
We propose a reparametrization scheme to address the challenges of applying differentially private SGD on large neural networks.
We are the first to apply differential privacy to the BERT model, achieving an average accuracy of $83.9\%$ on four downstream tasks.
arXiv Detail & Related papers (2021-06-17T10:14:43Z) - Decreasing scaling transition from adaptive gradient descent to stochastic gradient descent [1.7874193862154875]
We propose DSTAda, a decreasing scaling transition from adaptive gradient descent to stochastic gradient descent.
Our experimental results show that DSTAda converges faster and achieves higher accuracy with better stability and robustness.
arXiv Detail & Related papers (2021-06-12T11:28:58Z) - Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad (OAdagrad) algorithm for non-concave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.