A Practical Exercise in Adapting SIFT Using FHE Primitives
- URL: http://arxiv.org/abs/2412.09642v1
- Date: Mon, 09 Dec 2024 08:52:57 GMT
- Title: A Practical Exercise in Adapting SIFT Using FHE Primitives
- Authors: Ishwar B Balappanawar, Bhargav Srinivas Kommireddy,
- Abstract summary: An exercise in implementing the Scale-Invariant Feature Transform (SIFT) using CKKS Fully Homomorphic Encryption reveals some glaring limitations in the current FHE paradigm.
These limitations include the lack of a standard comparison operator and of the operations that depend on it.
- Score: 0.0
- Abstract: An exercise in implementing the Scale-Invariant Feature Transform (SIFT) using CKKS Fully Homomorphic Encryption quickly reveals some glaring limitations in the current FHE paradigm. These limitations include the lack of a standard comparison operator and of operations that depend on it (such as array max, histogram binning, etc.). We also observe that the existing solutions are either too low-level or lack proper abstractions for implementing algorithms like SIFT. In this work, we demonstrate: 1. methods of adapting regular code to the FHE setting; 2. alternate implementations of standard algorithms (such as array max and histogram binning) that reduce the multiplicative depth; 3. a novel method of using deferred computations to avoid performing expensive operations, such as comparisons, in the encrypted domain. Through this exercise, we hope this work serves as a practical guide on how one can adapt algorithms to FHE.
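The abstract's three points lend themselves to short illustrations. The first sketch below is not taken from the paper; it is a plaintext NumPy simulation of one standard workaround for the missing comparison operator: approximate sign(x) with a low-degree odd polynomial and compute an array max as a balanced pairwise tournament, so that the multiplicative depth grows logarithmically in the array length rather than linearly. The particular polynomial, the iteration count, and the assumption that inputs are pre-scaled into [-1, 1] are choices made for this sketch, not details from the paper.

```python
# Plaintext simulation of a comparison-free array max (a sketch, not the
# paper's code). approx_sign replaces the missing comparison operator with a
# low-degree odd polynomial; array_max combines elements pairwise so that the
# multiplicative depth grows with log2(len(values)) rather than len(values).
import numpy as np

def approx_sign(x, iterations=3):
    """Polynomial approximation of sign(x) on [-1, 1] using f(x) = 1.5x - 0.5x^3."""
    for _ in range(iterations):
        x = 1.5 * x - 0.5 * x ** 3          # two multiplicative levels per iteration
    return x

def approx_max(a, b, iterations=3):
    """max(a, b) = (a + b)/2 + |a - b|/2, with |d| approximated by d * approx_sign(d)."""
    d = a - b
    return 0.5 * (a + b) + 0.5 * d * approx_sign(d, iterations)

def array_max(values, iterations=3):
    """Tournament max: pair elements each round, so depth is logarithmic in the length."""
    vals = list(values)
    while len(vals) > 1:
        nxt = [approx_max(vals[i], vals[i + 1], iterations)
               for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2 == 1:
            nxt.append(vals[-1])            # odd leftover passes through unchanged
        vals = nxt
    return vals[0]

print(array_max(np.array([0.12, -0.75, 0.43, 0.91, -0.05])))  # close to 0.91
```

A second sketch, equally hypothetical, gestures at the other two ideas: histogram binning without hard comparisons (each bin membership becomes a smooth bump evaluated arithmetically) and a deferred comparison, in which the server returns the encrypted bin counts and the holder of the secret key performs the argmax only after decryption. Plain floats again stand in for ciphertexts.

```python
# Hypothetical sketch of comparison-free histogram binning plus a deferred
# comparison: the server computes smooth bin counts with arithmetic only, and
# the argmax over bins is deferred to the key holder after decryption.
# exp(-t^2) stands in for the low-degree polynomial bump one would actually
# evaluate under CKKS.
import numpy as np

def server_soft_histogram(values, edges):
    """Smooth bin counts: a bump per bin instead of hard comparisons against the edges."""
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    memberships = np.exp(-((values[:, None] - centers[None, :]) / width) ** 2)
    return memberships.sum(axis=0)          # would remain encrypted in a real run

def client_mode_bin(counts):
    """Deferred step: the comparison-heavy argmax runs only after decryption."""
    return int(np.argmax(counts))

edges = np.linspace(0.0, 1.0, 6)            # five equal-width bins on [0, 1]
counts = server_soft_histogram(np.array([0.1, 0.15, 0.2, 0.8]), edges)
print(client_mode_bin(counts))              # 0: most of the samples fall in the first bin
```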
Related papers
- A General Online Algorithm for Optimizing Complex Performance Metrics [5.726378955570775]
We introduce and analyze a general online algorithm that can be used in a straightforward way with a variety of complex performance metrics in binary, multi-class, and multi-label classification problems.
The algorithm's update and prediction rules are appealingly simple and computationally efficient without the need to store any past data.
arXiv Detail & Related papers (2024-06-20T21:24:47Z)
- Homomorphically encrypted gradient descent algorithms for quadratic programming [3.185870720907055]
We numerically analyze the applicability of gradient descent algorithms to solve quadratic programming in a homomorphic encryption setup.
The paper shows firsthand the feasibility of homomorphically encrypted gradient descent algorithms; a minimal sketch of this FHE-friendly pattern appears after this list.
arXiv Detail & Related papers (2023-09-04T12:25:14Z)
- Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
arXiv Detail & Related papers (2023-06-11T07:28:35Z)
- Convergence of ease-controlled Random Reshuffling gradient Algorithms under Lipschitz smoothness [0.0]
We consider the average of a very large number of smooth, possibly non-convex functions, and we use two widely used minibatch frameworks to tackle this problem.
We define ease-controlled modifications of IG/RR schemes, which require a light additional computational effort.
We compare our implementation against both a full batch gradient method (i.e., L-BFGS) and an implementation of IG/RR methods, proving that the algorithms require a similar computational effort.
arXiv Detail & Related papers (2022-12-04T15:26:36Z)
- Efficient algorithms for implementing incremental proximal-point methods [0.3263412255491401]
In machine learning, model training algorithms observe a small portion of the training set in each computational step.
Several streams of research attempt to exploit more information about the cost functions than just their gradients via the well-known proximal operators.
We devise a novel algorithmic framework, which exploits convex duality theory to achieve both algorithmic efficiency and software modularity in implementing proximal operators.
arXiv Detail & Related papers (2022-05-03T12:43:26Z)
- Breaking the Linear Iteration Cost Barrier for Some Well-known Conditional Gradient Methods Using MaxIP Data-structures [49.73889315176884]
Conditional gradient methods (CGM) are widely used in modern machine learning.
Most efforts focus on reducing the number of iterations as a means to reduce the overall running time.
We present the first algorithms whose cost per iteration is sublinear in the number of parameters, for many fundamental optimization algorithms.
arXiv Detail & Related papers (2021-11-30T05:40:14Z)
- Efficient ADMM-based Algorithms for Convolutional Sparse Coding [38.31173467674558]
This letter presents a solution to a convolutional least-squares fitting subproblem.
We propose a novel algorithm for convolutional sparse coding with a constraint on the approximation error.
We also use the same approach to develop an efficient convolutional dictionary learning method.
arXiv Detail & Related papers (2021-09-07T09:49:10Z)
- Provable Benefits of Actor-Critic Methods for Offline Reinforcement Learning [85.50033812217254]
Actor-critic methods are widely used in offline reinforcement learning practice, but are not so well-understood theoretically.
We propose a new offline actor-critic algorithm that naturally incorporates the pessimism principle.
arXiv Detail & Related papers (2021-08-19T17:27:29Z)
- Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We establish lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z)
- Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate solution after a bounded number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
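The related entry above on homomorphically encrypted gradient descent for quadratic programming pairs naturally with a small illustration. The sketch below is not taken from that paper; it shows, in plaintext NumPy, the shape a gradient descent loop must take before it can be ported to CKKS: a public, fixed iteration count, a fixed step size, and only additions and multiplications inside the loop.

```python
# Hedged plaintext sketch (NumPy, no encryption) of gradient descent for an
# unconstrained quadratic program, minimize 0.5 * x^T Q x + c^T x, written in
# an FHE-friendly shape: fixed public iteration count, fixed step size, and
# only additions and multiplications inside the loop.
import numpy as np

def qp_gradient_descent(Q, c, steps=200, eta=0.2):
    """Fixed-iteration gradient descent; no convergence test, since that needs a comparison."""
    x = np.zeros_like(c)
    for _ in range(steps):                  # loop bound is a public constant
        grad = Q @ x + c                    # gradient of 0.5 x^T Q x + c^T x
        x = x - eta * grad                  # additions and multiplications only
    return x

Q = np.array([[3.0, 0.5], [0.5, 2.0]])      # symmetric positive definite
c = np.array([-1.0, 1.0])
print(qp_gradient_descent(Q, c))            # close to the exact minimizer below
print(np.linalg.solve(Q, -c))               # -Q^{-1} c
```

Note the absence of a convergence test: without an encrypted comparison the loop bound must be fixed in advance, and in a real CKKS run that bound also determines the multiplicative depth the ciphertext parameters must support.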