Volley Revolver: A Novel Matrix-Encoding Method for Privacy-Preserving Deep Learning (Inference++)
- URL: http://arxiv.org/abs/2512.18646v1
- Date: Sun, 21 Dec 2025 08:40:31 GMT
- Title: Volley Revolver: A Novel Matrix-Encoding Method for Privacy-Preserving Deep Learning (Inference++)
- Authors: John Chiang
- Abstract summary: Homomorphic encryption has emerged as a promising approach for enabling secure machine learning in untrusted environments. In this paper, we propose an improved encoding and computation framework that removes the requirement that a single encrypted ciphertext must fully contain one input image. Our method reformulates the data layout and homomorphic operations to partition high-resolution inputs across multiple ciphertexts.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy-preserving inference of convolutional neural networks (CNNs) using homomorphic encryption has emerged as a promising approach for enabling secure machine learning in untrusted environments. In our previous work, we introduced a matrix-encoding strategy that allows convolution and matrix multiplication to be efficiently evaluated over encrypted data, enabling practical CNN inference without revealing either the input data or the model parameters. The core idea behind this strategy is to construct a three-dimensional representation within ciphertexts that preserves the intrinsic spatial structure of both input image data and model weights, rather than flattening them into conventional two-dimensional encodings. However, this approach can operate efficiently only when the number of available plaintext slots within a ciphertext is sufficient to accommodate an entire input image, which becomes a critical bottleneck when processing high-resolution images. In this paper, we address this fundamental limitation by proposing an improved encoding and computation framework that removes the requirement that a single encrypted ciphertext must fully contain one input image. Our method reformulates the data layout and homomorphic operations to partition high-resolution inputs across multiple ciphertexts while preserving the algebraic structure required for efficient convolution and matrix multiplication. As a result, our approach enables privacy-preserving CNN inference to scale naturally beyond the slot-capacity constraints of prior methods, making homomorphic evaluation of CNNs practical for higher-resolution and more complex datasets.
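The slot-capacity constraint and the partitioned layout described in the abstract can be illustrated with a small plaintext simulation. The `SLOTS` constant, function names, and zero-padding scheme below are illustrative assumptions, not the paper's actual API; a real "ciphertext" here is just a fixed-length Python list.

```python
# Plaintext simulation of the slot-capacity constraint: a CKKS-style
# ciphertext holds a fixed number of slots, so an image that does not fit
# must be partitioned across several ciphertexts instead of being rejected.

SLOTS = 8  # assumed slot capacity of one simulated ciphertext

def encode_image(image_rows, slots=SLOTS):
    """Flatten an image row-major and split it across as many
    fixed-capacity 'ciphertexts' as needed, zero-padding the last one."""
    flat = [px for row in image_rows for px in row]
    chunks = []
    for i in range(0, len(flat), slots):
        chunk = flat[i:i + slots]
        chunk += [0] * (slots - len(chunk))  # pad the final ciphertext
        chunks.append(chunk)
    return chunks

def decode_image(chunks, height, width):
    """Inverse of encode_image: reassemble the row-major image."""
    flat = [px for chunk in chunks for px in chunk][: height * width]
    return [flat[r * width:(r + 1) * width] for r in range(height)]

# A 4x4 "image" (16 pixels) exceeds the 8-slot capacity, so it is
# partitioned into two simulated ciphertexts.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
cts = encode_image(img)
print(len(cts))                        # 2 ciphertexts
print(decode_image(cts, 4, 4) == img)  # round trip is lossless
```

In an actual pipeline each chunk would be encrypted separately, and the rotation and multiplication pattern of convolution would be reformulated to respect chunk boundaries, which is the substance of the paper's contribution.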
Related papers
- Efficient Privacy-Preserving Sparse Matrix-Vector Multiplication Using Homomorphic Encryption [22.506475163181253]
Homomorphic encryption (HE) has emerged as a leading approach for addressing this challenge. This paper presents the first framework that efficiently integrates HE with sparse matrix-vector multiplication (SpMV). In particular, we introduce a novel compressed matrix format, named Compressed Sparse Sorted Column (CSSC), which is specifically designed to optimize encrypted sparse matrix computations.
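The CSSC layout itself is that paper's contribution and is not detailed in this summary. As background, the baseline sparse matrix-vector product over the standard CSR layout, which such encrypted formats reorganize, looks like this in plaintext:

```python
# Background sketch: plaintext SpMV over the standard CSR layout.
# This is the baseline computation that encrypted sparse formats such as
# CSSC are designed to accelerate; it is not the paper's own format.

def csr_spmv(values, col_idx, row_ptr, x):
    """Compute y = A @ x where A is given in CSR form."""
    y = []
    for r in range(len(row_ptr) - 1):
        acc = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            acc += values[k] * x[col_idx[k]]
        y.append(acc)
    return y

# A = [[2, 0, 1],
#      [0, 3, 0]]
values, col_idx, row_ptr = [2.0, 1.0, 3.0], [0, 2, 1], [0, 2, 3]
print(csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0]
```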
arXiv Detail & Related papers (2026-03-05T02:29:50Z) - Knowledge-Informed Neural Network for Complex-Valued SAR Image Recognition [51.03674130115878]
We introduce the Knowledge-Informed Neural Network (KINN), a lightweight framework built upon a novel "compression-aggregation-compression" architecture. KINN establishes a state-of-the-art in parameter-efficient recognition, offering exceptional generalization in data-scarce and out-of-distribution scenarios.
arXiv Detail & Related papers (2025-10-23T07:12:26Z) - Efficient Decoding Methods for Language Models on Encrypted Data [32.58944595512403]
Homomorphic encryption (HE) enables computation on encrypted data for secure inference. Neural text generation requires decoding methods like argmax and sampling, which are non-polynomial and thus computationally expensive under encryption. We introduce cutmax, an HE-friendly argmax algorithm that reduces cipher operations compared to prior methods, enabling practical greedy decoding under encryption.
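The cutmax algorithm itself is not specified in this summary. A common HE-friendly pattern for the same problem, shown here as a plaintext sketch and not as that paper's method, replaces the non-polynomial sign function with an iterated odd polynomial and builds max and a soft one-hot argmax from it. Inputs are assumed scaled into [-1, 1], as is typical for such approximations.

```python
# Plaintext sketch of a polynomial argmax in the style used by HE schemes:
# only additions and multiplications appear, so every step is homomorphic-
# evaluation friendly. Not the cutmax algorithm; a generic illustration.

def approx_sign(x, iters=15):
    """Iterating f(t) = (3t - t^3)/2 drives t in (-1, 1) toward sign(t)."""
    for _ in range(iters):
        x = (3 * x - x ** 3) / 2
    return x

def approx_max(a, b):
    # max(a, b) = (a + b)/2 + |a - b|/2, with |d| = d * sign(d);
    # halving d keeps the argument inside the polynomial's convergence range.
    d = a - b
    return (a + b) / 2 + d * approx_sign(d / 2) / 2

def approx_argmax(xs):
    """Soft one-hot vector: entry i is ~1 iff x_i beats every other entry."""
    out = []
    for i, xi in enumerate(xs):
        ind = 1.0
        for j, xj in enumerate(xs):
            if i != j:
                # (1 + sign(xi - xj)) / 2 is ~1 when xi > xj, ~0 otherwise
                ind *= (1 + approx_sign((xi - xj) / 2)) / 2
        out.append(ind)
    return out

print(approx_max(0.3, -0.2))            # close to 0.3
print(approx_argmax([0.1, 0.9, -0.4]))  # roughly [0, 1, 0]
```

The point of such constructions is that every operation is a polynomial, so the whole decoding step can be evaluated directly on ciphertexts; the trade-off is multiplicative depth, which algorithms like cutmax aim to reduce.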
arXiv Detail & Related papers (2025-09-10T08:23:14Z) - Compressive Meta-Learning [49.300635370079874]
Compressive learning is a framework that enables efficient processing by using random, non-linear features. We propose a framework that meta-learns both the encoding and decoding stages of compressive learning methods. We explore multiple applications, including neural network-based compressive PCA, compressive ridge regression, compressive k-means, and autoencoders.
arXiv Detail & Related papers (2025-08-14T22:08:06Z) - CryptoUNets: Applying Convolutional Networks to Encrypted Data for Biomedical Image Segmentation [0.0]
We demonstrate the feasibility of a privacy-preserving U-Net deep learning inference framework, namely, homomorphic encryption-based U-Net inference. To our knowledge, this is the first work to implement U-Net inference entirely based on homomorphic encryption.
arXiv Detail & Related papers (2025-04-30T11:37:22Z) - Cryptanalysis via Machine Learning Based Information Theoretic Metrics [58.96805474751668]
We propose two novel applications of machine learning (ML) algorithms to perform cryptanalysis on any cryptosystem. These algorithms can be readily applied in an audit setting to evaluate the robustness of a cryptosystem. We show that our classification model correctly identifies the encryption schemes that are not IND-CPA secure, such as DES, RSA, and AES ECB, with high accuracy.
arXiv Detail & Related papers (2025-01-25T04:53:36Z) - Encrypted Dynamic Control exploiting Limited Number of Multiplications and a Method using RLWE-based Cryptosystem [0.3749861135832073]
We present a method to encrypt dynamic controllers that can be implemented through most homomorphic encryption schemes.
As a result, the encrypted controller involves only a limited number of homomorphic multiplications on every encrypted data.
We propose a customization of the method for Ring Learning With Errors (RLWE)-based cryptosystems, where a vector of messages can be encrypted into a single ciphertext.
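The vector packing mentioned above can be mimicked in plaintext: in RLWE-based SIMD schemes, one ciphertext holds many message slots, and homomorphic addition and multiplication act slot-wise. The class below is a plaintext stand-in with no actual encryption; the name and interface are illustrative, not any library's API.

```python
# Plaintext stand-in for RLWE-style SIMD packing: a "ciphertext" is simply
# the packed message vector, and add/mul act slot-wise, mirroring how a
# single homomorphic operation covers all slots at once.

class PackedCiphertext:
    def __init__(self, slots):
        self.slots = list(slots)

    def __add__(self, other):
        return PackedCiphertext(a + b for a, b in zip(self.slots, other.slots))

    def __mul__(self, other):
        return PackedCiphertext(a * b for a, b in zip(self.slots, other.slots))

ct_x = PackedCiphertext([1, 2, 3, 4])
ct_w = PackedCiphertext([5, 6, 7, 8])
ct_y = ct_x * ct_w + ct_x  # one slot-wise mul + add covers all four slots
print(ct_y.slots)          # [6, 14, 24, 36]
```

This amortization, where one homomorphic multiplication updates every slot, is what makes packing a vector of controller states into a single RLWE ciphertext attractive.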
arXiv Detail & Related papers (2023-07-07T08:24:48Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is on the demand of cloud service users.
We introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - A Pixel-based Encryption Method for Privacy-Preserving Deep Learning Models [5.749044590090683]
We propose an efficient pixel-based perceptual encryption method.
The method provides a necessary level of security while preserving the intrinsic properties of the original image.
It can thereby enable deep learning (DL) applications in the encryption domain.
arXiv Detail & Related papers (2022-03-31T03:42:11Z) - Unsupervised Deep Cross-modality Spectral Hashing [65.3842441716661]
The framework is a two-step hashing approach which decouples the optimization into binary optimization and hashing function learning.
We propose a novel spectral embedding-based algorithm to simultaneously learn single-modality and binary cross-modality representations.
We leverage the powerful CNN for images and propose a CNN-based deep architecture to learn text modality.
arXiv Detail & Related papers (2020-08-01T09:20:11Z) - MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z) - Cryptotree: fast and accurate predictions on encrypted structured data [0.0]
Homomorphic Encryption (HE) is acknowledged for its ability to allow computation on encrypted data, where both the input and output are encrypted.
We propose Cryptotree, a framework that enables the use of Random Forests (RF), a very powerful learning procedure compared to linear regression.
arXiv Detail & Related papers (2020-06-15T11:48:01Z) - Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
VAEs tend to ignore latent variables with a strong auto-regressive decoder.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.