New Permutation Decomposition Techniques for Efficient Homomorphic Permutation
- URL: http://arxiv.org/abs/2410.21840v7
- Date: Tue, 21 Oct 2025 00:06:30 GMT
- Title: New Permutation Decomposition Techniques for Efficient Homomorphic Permutation
- Authors: Xirong Ma, Junling Fang, Chunpeng Ge, Dung Hoang Duong, Yali Jiang, Yanbin Li, Willy Susilo, Lizhen Cui
- Abstract summary: Homomorphic permutation is fundamental to privacy-preserving computations based on batch-encoding homomorphic encryption. We propose novel decomposition techniques to optimize homomorphic permutations, advancing homomorphic encryption-based privacy-preserving computations. We design a network structure that deviates from the conventional scope of decomposition and outperforms the state-of-the-art technique with a speed-up of up to $1.69\times$ under a minimal rotation key requirement.
- Score: 33.30918602217202
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Homomorphic permutation is fundamental to privacy-preserving computations based on batch-encoding homomorphic encryption. It underpins nearly all homomorphic matrix operations and predominantly influences their complexity. Permutation decomposition as a potential approach to optimize this critical component remains underexplored. In this paper, we propose novel decomposition techniques to optimize homomorphic permutations, advancing homomorphic encryption-based privacy-preserving computations. We start by defining an ideal decomposition form for permutations and propose an algorithm searching for depth-1 ideal decompositions. Based on this, we prove the full-depth ideal decomposability of permutations used in specific homomorphic matrix transposition (HMT) and multiplication (HMM) algorithms, allowing them to achieve asymptotic improvement in speed and rotation key reduction. As a demonstration of applicability, substituting the HMM components in the best-known inference framework of encrypted neural networks with our enhanced version shows up to $3.9\times$ reduction in latency. We further devise a new method for computing arbitrary homomorphic permutations, specifically those with weak structures that cannot be ideally decomposed. We design a network structure that deviates from the conventional scope of decomposition and outperforms the state-of-the-art technique with a speed-up of up to $1.69\times$ under a minimal rotation key requirement.
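For intuition, a homomorphic permutation over a batch-encoded ciphertext is commonly realized in the rotate-and-mask form $\sigma(x) = \sum_k m_k \odot \mathrm{rot}(x, k)$, where each distinct shift $k$ requires one (expensive) ciphertext rotation; decomposition techniques such as those above aim to cut the number of rotations and the rotation keys they require. A minimal plaintext sketch of the rotate-and-mask form (illustrative only; the function names are hypothetical and this is not the paper's algorithm):

```python
def rotate(x, k):
    """Left cyclic rotation, mirroring the HE slot-rotation primitive."""
    k %= len(x)
    return x[k:] + x[:k]

def permute_by_diagonals(x, sigma):
    """Apply y[i] = x[sigma[i]] as a sum of masked rotations.
    Each distinct shift k = (sigma[i] - i) mod n costs one rotation,
    which is the quantity permutation decomposition tries to minimize."""
    n = len(x)
    # group output slots by the rotation amount they need
    masks = {}
    for i in range(n):
        k = (sigma[i] - i) % n
        m = masks.setdefault(k, [0] * n)
        m[i] = 1
    y = [0] * n
    for k, m in masks.items():
        r = rotate(x, k)
        y = [yi + mi * ri for yi, mi, ri in zip(y, m, r)]
    return y
```

In the homomorphic setting the rotations dominate the cost, so a permutation with many nonzero "diagonals" is slow; decomposing it into a product of sparser permutations (the paper's ideal decompositions) trades one expensive pass for several cheap ones.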
Related papers
- Learnable Permutation for Structured Sparsity on Transformer Models [17.777454274409912]
Structured sparsity has emerged as a popular model pruning technique. Weight permutation is a promising direction to further improve post-pruning performance. We propose a novel end-to-end learnable permutation framework.
arXiv Detail & Related papers (2026-01-30T13:44:00Z)
- Cartan-Khaneja-Glaser decomposition of $SU(2^n)$ via involutive automorphisms [0.46664938579243564]
We present a novel algorithm for performing the Cartan-Khaneja-Glaser decomposition of unitary matrices in $SU(2^n)$. We overcome key limitations of earlier approaches, such as reliance on ill-defined matrix logarithms and the convergence issues of truncated Baker-Campbell-Hausdorff (BCH) series.
arXiv Detail & Related papers (2025-09-05T19:46:50Z)
- Merge Kernel for Bayesian Optimization on Permutation Space [0.0]
We propose a novel framework for generating kernel functions on permutation space based on sorting algorithms. The proposed kernel consistently outperforms the state-of-the-art Mallows kernel across various permutation optimization benchmarks.
arXiv Detail & Related papers (2025-07-17T16:12:39Z)
- Parseval Convolution Operators and Neural Networks [16.78532039510369]
We first identify the Parseval convolution operators as the class of energy-preserving filterbanks.
We then present a constructive approach for the design/specification of such filterbanks via the chaining of elementary Parseval modules.
We demonstrate the usage of those tools with the design of a CNN-based algorithm for the iterative reconstruction of biomedical images.
arXiv Detail & Related papers (2024-08-19T13:31:16Z)
- Strategies for optimizing double-bracket quantum algorithms [0.050257374758179374]
We present strategies to optimize the choice of the double-bracket evolutions.
We also present a selection of diagonal evolution parametrizations that can be directly compiled into CNOTs and single-qubit rotation gates.
arXiv Detail & Related papers (2024-08-14T10:07:54Z)
- SequentialAttention++ for Block Sparsification: Differentiable Pruning Meets Combinatorial Optimization [24.55623897747344]
Neural network pruning is a key technique towards engineering large yet scalable, interpretable, generalizable models.
We show how many existing differentiable pruning techniques can be understood as nonconvex regularization for group sparse optimization.
We propose SequentialAttention++, which advances the state of the art in large-scale neural network block-wise pruning tasks on the ImageNet and Criteo datasets.
arXiv Detail & Related papers (2024-02-27T21:42:18Z)
- ADMM-MM Algorithm for General Tensor Decomposition [7.0326155922512275]
The proposed algorithm supports three basic loss functions ($\ell_2$-loss, $\ell_1$-loss, and KL divergence) and various low-rank tensor decomposition models (CP, Tucker, TT, and TR decompositions).
We show that wide-range applications can be solved by the proposed algorithm, and can be easily extended to any established tensor decomposition models in a plug-and-play manner.
arXiv Detail & Related papers (2023-12-19T00:17:34Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- A Novel and Optimal Spectral Method for Permutation Synchronization [8.517406772939292]
Permutation synchronization is an important problem in computer science that constitutes the key step of many computer vision tasks.
In this paper, we propose a novel and statistically optimal spectral algorithm.
arXiv Detail & Related papers (2023-03-21T17:43:26Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
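As background, the classical baseline for entropy-regularized optimal transport is Sinkhorn iteration, which alternately rescales the rows and columns of a Gibbs kernel until the transport plan matches both marginals. A minimal sketch (this is the standard Sinkhorn scheme, not the paper's extragradient method; the function name and parameters are illustrative):

```python
import math

def sinkhorn(C, a, b, reg=0.1, iters=2000):
    """Entropy-regularized OT: find plan P minimizing <P, C> - reg * H(P)
    subject to row marginals a and column marginals b."""
    n, m = len(a), len(b)
    # Gibbs kernel of the cost matrix
    K = [[math.exp(-C[i][j] / reg) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        # alternate scaling: match columns to b, then rows to a
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
    P = [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
    cost = sum(P[i][j] * C[i][j] for i in range(n) for j in range(m))
    return P, cost
```

Each full sweep normalizes one set of marginals exactly; the other converges linearly, which is what faster first-order schemes like the paper's improve on in the small-$\varepsilon$ regime.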
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- A doubly stochastic matrices-based approach to optimal qubit routing [0.0]
Swap mapping is a quantum compiler optimization that, by inserting SWAP gates, maps a logical quantum circuit to an equivalent, physically implementable one.
In this work, we employ a structure called a doubly stochastic matrix, which is defined as a convex combination of permutation matrices.
We show that the proposed algorithm, at the cost of additional time, can deliver significant depth reduction when compared to the state of the art algorithm SABRE.
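The structure this relies on is the Birkhoff-von Neumann theorem: every doubly stochastic matrix is a convex combination of permutation matrices, and the combination can be recovered greedily. A small brute-force sketch for intuition (illustrative only; `birkhoff_decompose` is a hypothetical helper, not the paper's routing algorithm):

```python
from itertools import permutations

def birkhoff_decompose(M, tol=1e-9):
    """Greedily write a doubly stochastic matrix as a convex combination
    of permutation matrices (Birkhoff-von Neumann); brute force, small n."""
    n = len(M)
    R = [row[:] for row in M]  # residual, consumed in place
    terms = []                 # list of (coefficient, permutation tuple)
    while True:
        # pick the permutation whose smallest residual entry is largest
        best = max(permutations(range(n)),
                   key=lambda p: min(R[i][p[i]] for i in range(n)))
        c = min(R[i][best[i]] for i in range(n))
        if c <= tol:
            break
        terms.append((c, best))
        for i in range(n):
            R[i][best[i]] -= c  # zeroes at least one entry per step
    return terms
```

Each step zeroes at least one residual entry, so at most $n^2$ permutations appear; production code would use a bipartite matching instead of enumerating all $n!$ permutations.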
arXiv Detail & Related papers (2022-11-14T09:25:35Z)
- Log-based Sparse Nonnegative Matrix Factorization for Data Representation [55.72494900138061]
Nonnegative matrix factorization (NMF) has been widely studied in recent years due to its effectiveness in representing nonnegative data with parts-based representations.
We propose a new NMF method with log-norm imposed on the factor matrices to enhance the sparseness.
A novel column-wise sparse norm, named the $\ell_{2,\log}$ (pseudo) norm, is proposed to enhance the robustness of the proposed method.
arXiv Detail & Related papers (2022-04-22T11:38:10Z)
- Sparse Quadratic Optimisation over the Stiefel Manifold with Application to Permutation Synchronisation [71.27989298860481]
We address the non-convex optimisation problem of finding a matrix on the Stiefel manifold that maximises a quadratic objective function.
We propose a simple yet effective sparsity-promoting algorithm for finding the dominant eigenspace matrix.
arXiv Detail & Related papers (2021-09-30T19:17:35Z)
- Self-supervised Symmetric Nonnegative Matrix Factorization [82.59905231819685]
Symmetric nonnegative matrix factorization (SNMF) has been demonstrated to be a powerful method for data clustering.
Inspired by ensemble clustering, which aims to seek better clustering results, we propose self-supervised SNMF (S$^3$NMF).
We take advantage of the sensitivity-to-initialization characteristic of SNMF, without relying on any additional information.
arXiv Detail & Related papers (2021-03-02T12:47:40Z)
- Batch Bayesian Optimization on Permutations using Acquisition Weighted Kernels [86.11176756341114]
We introduce LAW, a new efficient batch acquisition method based on the determinantal point process.
We provide a regret analysis for our method to gain insight in its theoretical properties.
We evaluate the method on several standard problems involving permutations such as quadratic assignment.
arXiv Detail & Related papers (2021-02-26T10:15:57Z)
- Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
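For reference, the subsampled randomized Hadamard transform (SRHT) sketches a length-$n$ vector ($n$ a power of two) in three steps: random sign flips, a fast Hadamard mix, and a random row subsample with a $\sqrt{n/m}$ rescaling so squared norms are preserved in expectation. A minimal pure-Python sketch (the names `fwht` and `srht_sketch` are illustrative, not from the paper):

```python
import random

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized, O(n log n))."""
    x = x[:]
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def srht_sketch(x, m, seed=0):
    """SRHT of x: sign flip (D), Hadamard mix (H), subsample m rows (P).
    Scaling 1/sqrt(m) makes E||Sx||^2 = ||x||^2 for the ±1 Hadamard."""
    rng = random.Random(seed)
    n = len(x)
    signs = [rng.choice((-1, 1)) for _ in range(n)]
    mixed = fwht([s * v for s, v in zip(signs, x)])
    rows = rng.sample(range(n), m)
    scale = 1.0 / m ** 0.5
    return [scale * mixed[i] for i in rows]
```

The sign flip spreads the vector's energy across all Hadamard coordinates, which is why a small uniform subsample already preserves geometry with high probability.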
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or information and is not responsible for any consequences of their use.