Optimized Homomorphic Permutation From New Permutation Decomposition Techniques
- URL: http://arxiv.org/abs/2410.21840v4
- Date: Sat, 30 Nov 2024 00:40:46 GMT
- Title: Optimized Homomorphic Permutation From New Permutation Decomposition Techniques
- Authors: Xirong Ma, Junling Fang, Dung Hoang Duong, Yali Jiang, Chunpeng Ge, Yanbin Li,
- Abstract summary: Homomorphic permutation is fundamental to privacy-preserving computations based on batch-encoding homomorphic encryption. In this paper, we enhance the efficiency of homomorphic permutations through novel decomposition techniques.
- Score: 1.8911961520222993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Homomorphic permutation is fundamental to privacy-preserving computations based on batch-encoding homomorphic encryption. It underpins nearly all homomorphic matrix operation algorithms and predominantly influences their complexity. Permutation decomposition, as a potential approach to optimizing this critical component, remains underexplored. In this paper, we enhance the efficiency of homomorphic permutations through novel decomposition techniques, advancing homomorphic encryption-based privacy-preserving computations. We start by estimating the ideal effect of decompositions on permutations, then propose an algorithm that searches for depth-1 ideal decomposition solutions. This helps us ascertain the full-depth ideal decomposability of permutations used in specific secure matrix transposition and multiplication schemes, allowing them to achieve asymptotic improvement in speed and rotation key reduction. We further devise a new method for computing arbitrary homomorphic permutations, considering that permutations with weak structures are unlikely to be ideally factorized. Our design deviates from the conventional scope of decomposition, but it better approximates the ideal effect of decomposition we define than the state-of-the-art techniques, with a speed-up of up to $\times 2.27$ under minimal rotation key requirements.
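The cost model behind the abstract can be illustrated in plaintext: in batch-encoded homomorphic encryption, a permutation of the slots is computed as a sum of masked rotations, one homomorphic rotation (and one rotation key) per distinct shift, so decomposing a permutation into a product of structured factors can reduce the total rotation and key counts. The sketch below simulates this on plain Python lists; all helper names are ours for illustration, not from the paper, and a real implementation would operate on ciphertext slots (e.g. in CKKS/BGV).

```python
def rot(v, r):
    """Cyclic left rotation by r slots: rot(v, r)[i] = v[(i + r) % n]."""
    n = len(v)
    return [v[(i + r) % n] for i in range(n)]

def apply_permutation(v, p):
    """Compute w with w[i] = v[p[i]] as a sum of masked rotations.

    Each distinct shift (p[i] - i) mod n costs one homomorphic rotation,
    so the rotation count equals the number of distinct shifts in p.
    Returns (result, number_of_rotations_used).
    """
    n = len(v)
    shifts = {}
    for i in range(n):
        shifts.setdefault((p[i] - i) % n, []).append(i)
    out = [0] * n
    for r, idxs in shifts.items():
        vr = rot(v, r)      # one homomorphic rotation per distinct shift
        for i in idxs:      # diagonal mask: cheap plaintext multiplication
            out[i] = vr[i]
    return out, len(shifts)

def rotation_count(p):
    """Rotations the naive masked-rotation method needs for permutation p."""
    n = len(p)
    return len({(p[i] - i) % n for i in range(n)})
```

For instance, the row-major transpose of a 4x4 matrix packed into 16 slots, `[4*(i % 4) + i//4 for i in range(16)]`, has 7 distinct shifts, so the naive method spends 7 rotations on it; the paper's decomposition search instead looks for a factorization p = p1 ∘ p2 whose combined rotation count, and hence rotation-key requirement, is smaller than that of p computed directly.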
Related papers
- Parseval Convolution Operators and Neural Networks [16.78532039510369]
We first identify the Parseval convolution operators as the class of energy-preserving filterbanks.
We then present a constructive approach for the design/specification of such filterbanks via the chaining of elementary Parseval modules.
We demonstrate the usage of those tools with the design of a CNN-based algorithm for the iterative reconstruction of biomedical images.
arXiv Detail & Related papers (2024-08-19T13:31:16Z) - Strategies for optimizing double-bracket quantum algorithms [0.050257374758179374]
We present strategies to optimize the choice of the double-bracket evolutions.
We also present a selection of diagonal evolution parametrizations that can be directly compiled into CNOTs and single-qubit rotation gates.
arXiv Detail & Related papers (2024-08-14T10:07:54Z) - SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization [24.55623897747344]
Neural network pruning is a key technique towards engineering large yet scalable, interpretable, generalizable models.
We show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group sparse optimization.
We propose SequentialAttention++, which advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets.
arXiv Detail & Related papers (2024-02-27T21:42:18Z) - ADMM-MM Algorithm for General Tensor Decomposition [7.0326155922512275]
The proposed algorithm supports three basic loss functions ($\ell_1$-loss, $\ell_2$-loss, and KL divergence) and various low-rank tensor decomposition models (CP, Tucker, TT, and TR decompositions).
We show that wide-range applications can be solved by the proposed algorithm, and can be easily extended to any established tensor decomposition models in a plug-and-play manner.
arXiv Detail & Related papers (2023-12-19T00:17:34Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - A Novel and Optimal Spectral Method for Permutation Synchronization [8.517406772939292]
Permutation synchronization is an important problem in computer science that constitutes the key step of many computer vision tasks.
In this paper, we propose a novel and statistically optimal spectral algorithm.
arXiv Detail & Related papers (2023-03-21T17:43:26Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit the connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithm that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - A doubly stochastic matrices-based approach to optimal qubit routing [0.0]
Swap mapping is a quantum compiler optimization that, by inserting SWAP gates, maps a logical quantum circuit to an equivalent, physically implementable one.
In this work, we employ a structure called doubly stochastic matrix, which is defined as a convex combination of permutation matrices.
We show that the proposed algorithm, at the cost of additional time, can deliver significant depth reduction when compared to the state-of-the-art algorithm SABRE.
arXiv Detail & Related papers (2022-11-14T09:25:35Z) - Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z) - Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of the sensitivity-to-initialization characteristic of SNMF, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z) - Batch Bayesian Optimization on Permutations using Acquisition Weighted Kernels [86.11176756341114]
We introduce LAW, a new efficient batch acquisition method based on the determinantal point process.
We provide a regret analysis for our method to gain insight in its theoretical properties.
We evaluate the method on several standard problems involving permutations such as quadratic assignment.
arXiv Detail & Related papers (2021-02-26T10:15:57Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z) - Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information (including all information) and is not responsible for any consequences of its use.