Homomorphically encrypted gradient descent algorithms for quadratic programming
- URL: http://arxiv.org/abs/2309.01559v1
- Date: Mon, 4 Sep 2023 12:25:14 GMT
- Title: Homomorphically encrypted gradient descent algorithms for quadratic programming
- Authors: André Bertolace, Konstantinos Gatsis, Kostas Margellos
- Abstract summary: We numerically analyze the applicability of gradient descent algorithms to solve quadratic programming in a homomorphic encryption setup.
The paper shows firsthand the feasibility of homomorphically encrypted gradient descent algorithms.
- Score: 3.185870720907055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we evaluate different fully homomorphic encryption schemes, propose an implementation, and numerically analyze the applicability of gradient descent algorithms to solve quadratic programming in a homomorphic encryption setup. The limit on the multiplication depth of homomorphic encryption circuits is a major challenge for iterative procedures such as gradient descent algorithms. Our analysis not only quantifies these limitations on prototype examples, thus serving as a benchmark for future investigations, but also highlights additional trade-offs, such as those pertaining to the choice between gradient descent and accelerated gradient descent methods, opening the road for the use of homomorphic encryption techniques in iterative procedures widely used in optimization-based control. In addition, we argue that, among the available homomorphic encryption schemes, the one adopted in this work, namely CKKS, is the only suitable scheme for implementing gradient descent algorithms. The choice of an appropriate step size is crucial to the convergence of the procedure. The paper shows firsthand the feasibility of homomorphically encrypted gradient descent algorithms.
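The abstract's central constraint, that the multiplicative depth of a CKKS circuit caps how many gradient iterations can be evaluated, is easy to make concrete. Below is a minimal plaintext sketch in NumPy: an encrypted implementation would replace the arrays with CKKS ciphertexts (e.g., via a library such as OpenFHE or Microsoft SEAL) and would have to fix the iteration count K when the encryption parameters are chosen, since data-dependent stopping tests are not available on ciphertexts. All names and constants here are illustrative, not the paper's implementation.

```python
import numpy as np

# Minimal plaintext sketch of fixed-iteration gradient descent for the QP
#   min_x 0.5 * x^T Q x + c^T x
# Under CKKS, each iteration consumes multiplicative depth, so the number
# of iterations K must be fixed when the encryption parameters are chosen.

def qp_gradient_descent(Q, c, K, step):
    """Run exactly K gradient steps; no data-dependent stopping test is
    possible on ciphertexts, since comparisons are not native to CKKS."""
    x = np.zeros_like(c)
    for _ in range(K):
        grad = Q @ x + c       # matrix-vector product: consumes depth
        x = x - step * grad    # step-size scaling: consumes depth as well
    return x

Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([-1.0, 1.0])
# Any step size below 2 / L (L = largest eigenvalue of Q) converges.
L = np.linalg.eigvalsh(Q).max()
x_star = qp_gradient_descent(Q, c, K=20, step=1.0 / L)
```

Note how the loop runs exactly K steps: each iteration spends multiplicative depth on the matrix-vector product and, roughly, on the step-size scaling (which still requires a rescale in CKKS). That per-iteration depth cost is the budget the paper quantifies.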
Related papers
- Rethinking PGD Attack: Is Sign Function Necessary? [131.6894310945647]
We present a theoretical analysis of how the sign-based update rule influences step-wise attack performance.
We propose a new raw gradient descent (RGD) algorithm that eliminates the use of the sign operation (a schematic comparison with the sign-based update is sketched below).
The effectiveness of the proposed RGD algorithm has been demonstrated extensively in experiments.
arXiv Detail & Related papers (2023-12-03T02:26:58Z)
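For the entry above, the contrast between the two update rules fits in a few lines. This is a schematic comparison only; the task loss, epsilon-ball projection, and step-size choices of the actual PGD and RGD attacks are omitted.

```python
import numpy as np

# Schematic contrast between a PGD-style sign update and a raw gradient
# (RGD-style) update. Real attacks would also project back onto an
# epsilon-ball and differentiate a task loss; both are omitted here.

def pgd_step(x, grad, alpha):
    return x + alpha * np.sign(grad)   # keeps only the gradient's sign

def rgd_step(x, grad, alpha):
    return x + alpha * grad            # keeps the gradient's magnitude

g = np.array([0.02, -3.0])
print(pgd_step(np.zeros(2), g, 0.1))   # [ 0.1 -0.1]: both coords move equally
print(rgd_step(np.zeros(2), g, 0.1))   # [ 0.002 -0.3]: magnitudes preserved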
- Random coordinate descent: a simple alternative for optimizing parameterized quantum circuits [4.112419132722306]
This paper introduces a random coordinate descent algorithm as a practical and easy-to-implement alternative to the full gradient descent algorithm (see the sketch after this entry).
Motivated by the behavior of measurement noise in the practical optimization of parameterized quantum circuits, this paper presents an optimization problem setting amenable to analysis.
arXiv Detail & Related papers (2023-10-31T18:55:45Z)
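A minimal sketch of the random coordinate descent idea from the entry above: each iteration updates one randomly chosen parameter. The quadratic objective is a stand-in for a parameterized-quantum-circuit loss; all names are illustrative.

```python
import numpy as np

# Minimal sketch of random coordinate descent: each iteration updates one
# randomly chosen parameter. For simplicity the full gradient is computed
# and indexed; a real implementation would evaluate only the i-th partial
# derivative (e.g., via a parameter-shift rule on a quantum circuit).

rng = np.random.default_rng(0)

def random_coordinate_descent(grad_fn, x0, step, iters):
    x = x0.copy()
    for _ in range(iters):
        i = rng.integers(len(x))        # pick one coordinate uniformly
        x[i] -= step * grad_fn(x)[i]    # update only that coordinate
    return x

A = np.diag([1.0, 4.0, 9.0])            # toy quadratic stand-in objective
x = random_coordinate_descent(lambda v: A @ v, np.ones(3), step=0.1, iters=300)
```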
- Adaptive Proximal Gradient Method for Convex Optimization [18.681222155879656]
We explore two fundamental first-order algorithms in convex optimization, namely gradient descent (GD) and the proximal gradient method (ProxGD).
Our focus is on making these algorithms entirely adaptive by leveraging local curvature information of smooth functions.
We propose adaptive versions of GD and ProxGD that are based on observed gradient differences and, thus, have no added computational costs.
arXiv Detail & Related papers (2023-08-04T11:37:08Z)
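The adaptive steps in the entry above are built from observed gradient differences. The sketch below follows the general shape of Malitsky and Mishchenko's adaptive gradient descent rule, combining a local curvature estimate with a controlled growth factor; the exact constants and the proximal variant in the paper may differ.

```python
import numpy as np

# Sketch of curvature-adaptive gradient descent: the step size is built
# purely from observed gradient and iterate differences, so no Lipschitz
# constant or line search is needed. Constants are illustrative.

def adaptive_gd(grad, x0, iters, step0=1e-6):
    x_prev, g_prev, step, theta = x0, grad(x0), step0, np.inf
    x = x_prev - step * g_prev
    for _ in range(iters):
        g = grad(x)
        # Local inverse-curvature estimate ||dx|| / (2 ||dg||), guarded
        # against division by zero once the iterates stall.
        dg = np.linalg.norm(g - g_prev)
        local = np.linalg.norm(x - x_prev) / (2 * dg) if dg > 0 else np.inf
        step_new = min(np.sqrt(1 + theta) * step, local)
        theta = step_new / step
        x_prev, g_prev, step = x, g, step_new
        x = x - step * g
    return x

A = np.diag([1.0, 10.0])
x = adaptive_gd(lambda v: A @ v, np.array([1.0, 1.0]), iters=100)
```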
- Stochastic Ratios Tracking Algorithm for Large Scale Machine Learning Problems [0.7614628596146599]
We propose a novel algorithm for adaptive step length selection in the classical SGD framework.
Under reasonable conditions, the algorithm produces step lengths in line with well-established theoretical requirements.
We show that the algorithm can generate step lengths comparable to the best step length obtained from manual tuning.
arXiv Detail & Related papers (2023-05-17T06:22:11Z)
- Dynamical softassign and adaptive parameter tuning for graph matching [0.7456521449098222]
We study a unified framework for graph matching problems called constrained gradient algorithms.
Our adaptive step size parameter guarantees convergence of the underlying algorithms.
We propose a novel graph matching algorithm: the softassign constrained gradient method.
arXiv Detail & Related papers (2022-08-17T11:25:03Z)
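A minimal sketch of a softassign operator, the building block named in the entry above: exponentiate a score matrix and alternate row and column normalizations (Sinkhorn iterations) toward a doubly stochastic soft assignment. The parameter names and fixed iteration count are illustrative; the paper's dynamical variant tunes them adaptively.

```python
import numpy as np

# Softassign sketch: exponentiate a score matrix (with a stabilizing shift)
# and alternately normalize rows and columns until the matrix is close to
# doubly stochastic. beta controls the sharpness of the assignment.

def softassign(scores, beta=5.0, sinkhorn_iters=30):
    M = np.exp(beta * (scores - scores.max()))
    for _ in range(sinkhorn_iters):
        M /= M.sum(axis=1, keepdims=True)   # row normalization
        M /= M.sum(axis=0, keepdims=True)   # column normalization
    return M

S = np.random.default_rng(1).random((4, 4))
P = softassign(S)   # rows and columns each sum to approximately 1
```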
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques (a plain conditional gradient step is sketched after this entry).
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
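For context on the entry above, here is a plain (deterministic) conditional gradient / Frank-Wolfe step over the probability simplex. The paper's method replaces the exact gradient with a one-sample SAG-style estimate and handles an additional non-smooth term; this sketch shows only the projection-free core.

```python
import numpy as np

# Frank-Wolfe over the probability simplex: the linear minimization oracle
# returns the best vertex, so iterates stay feasible without projections.

def frank_wolfe_simplex(grad, x0, iters):
    x = x0.copy()
    for k in range(iters):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0        # LMO over the simplex: pick a vertex
        gamma = 2.0 / (k + 2.0)      # classical step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x

A = np.diag([1.0, 2.0, 3.0])
x = frank_wolfe_simplex(lambda v: A @ v - np.ones(3), np.ones(3) / 3, 100)
```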
- Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order information.
We show that, with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost (a minimal coordinate-sampled estimator is sketched after this entry).
arXiv Detail & Related papers (2020-12-21T17:29:58Z)
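A minimal sketch of the zeroth-order idea in the entry above: estimate the gradient from function evaluations alone, sampling a subset of coordinates per step. The sampling here is uniform; the paper's contribution is an importance-sampled design, which this sketch does not reproduce.

```python
import numpy as np

# Zeroth-order gradient estimate: sample a few coordinates and approximate
# each sampled partial derivative with a central finite difference. Only
# function evaluations are used; no first-order information is needed.

rng = np.random.default_rng(0)

def zo_gradient(f, x, n_coords=2, mu=1e-4):
    g = np.zeros_like(x)
    idx = rng.choice(len(x), size=n_coords, replace=False)  # coord sampling
    for i in idx:
        e = np.zeros_like(x)
        e[i] = mu
        g[i] = (f(x + e) - f(x - e)) / (2 * mu)  # central difference
    # Rescale so the sparse estimate is approximately unbiased
    # (up to finite-difference error).
    return g * (len(x) / n_coords)

f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(4)
for _ in range(200):
    x -= 0.1 * zo_gradient(f, x)
```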
- Differentially Private Accelerated Optimization Algorithms [0.7874708385247353]
We present two classes of differentially private optimization algorithms.
The first algorithm is inspired by Polyak's heavy ball method.
The second class of algorithms are based on Nesterov's accelerated gradient method.
arXiv Detail & Related papers (2020-08-05T08:23:01Z)
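A sketch of the first class from the entry above: a Polyak heavy-ball update with Gaussian noise added to the gradient, the standard mechanism for differential privacy. Calibrating sigma to the gradient sensitivity and the privacy budget (and clipping gradients to bound that sensitivity) is omitted; all constants are illustrative. The second class would replace the momentum recursion with Nesterov's.

```python
import numpy as np

# Heavy-ball (Polyak momentum) update with Gaussian noise on the gradient.
# In a real DP algorithm, sigma is calibrated to the gradient sensitivity
# and the (epsilon, delta) budget; here it is an illustrative constant.

rng = np.random.default_rng(0)

def private_heavy_ball(grad, x0, step, momentum, sigma, iters):
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(iters):
        noisy_grad = grad(x) + sigma * rng.standard_normal(x.shape)
        x, x_prev = x - step * noisy_grad + momentum * (x - x_prev), x
    return x

x = private_heavy_ball(lambda v: 2 * v, np.ones(3),
                       step=0.1, momentum=0.9, sigma=0.01, iters=100)
```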
- Accelerated Message Passing for Entropy-Regularized MAP Inference [89.15658822319928]
Maximum a posteriori (MAP) inference in discrete-valued random fields is a fundamental problem in machine learning.
Due to the difficulty of this problem, linear programming (LP) relaxations are commonly used to derive specialized message passing algorithms.
We present randomized methods for accelerating these algorithms by leveraging techniques that underlie classical accelerated gradient methods.
arXiv Detail & Related papers (2020-07-01T18:43:32Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
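For the entry above, the basic sketch-and-solve pattern with a Gaussian projection looks as follows; a subsampled randomized Hadamard transform would replace the dense Gaussian matrix for faster sketching. Dimensions are illustrative, and the paper's optimal first-order methods refine this one-shot scheme iteratively.

```python
import numpy as np

# Randomized least squares: compress the tall system (A, b) with a random
# sketching matrix S and solve the much smaller sketched problem.

rng = np.random.default_rng(0)

n, d, m = 2000, 20, 200                         # tall system, sketch size m
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

S = rng.standard_normal((m, n)) / np.sqrt(m)    # Gaussian sketching matrix
x_sketched, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)  # full solve, for reference
```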
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)