Cryptographic Compression
- URL: http://arxiv.org/abs/2501.16184v1
- Date: Mon, 27 Jan 2025 16:32:08 GMT
- Title: Cryptographic Compression
- Authors: Joshua Cooper, Grant Fickes
- Abstract summary: We introduce a protocol called ENCORE which simultaneously compresses and encrypts data in a one-pass process.
We show that these can be done simultaneously, at least for ``typical'' data with a stable distribution, approximated reasonably well by the output of a Markov model.
The strategy is to transform the data into a dyadic distribution whose Huffman encoding is close to uniform, and then store the transformations made to said data in a compressed secondary stream.
- Score: 0.8057006406834466
- License:
- Abstract: We introduce a protocol called ENCORE which simultaneously compresses and encrypts data in a one-pass process that can be implemented efficiently and possesses a number of desirable features as a streaming encoder/decoder. Motivated by the observation that both lossless compression and encryption consist of performing an invertible transformation whose output is close to a uniform distribution over bit streams, we show that these can be done simultaneously, at least for ``typical'' data with a stable distribution, i.e., approximated reasonably well by the output of a Markov model. The strategy is to transform the data into a dyadic distribution whose Huffman encoding is close to uniform, and then store the transformations made to said data in a compressed secondary stream interwoven into the first with a user-defined encryption protocol. The result is an encoding which we show exhibits a modified version of Yao's ``next-bit test'' while requiring many fewer bits of entropy than standard encryption. Numerous open questions remain, particularly regarding results that we suspect can be strengthened considerably.
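The central observation admits a small demonstration. The following is a hypothetical Python sketch, not the ENCORE implementation (all names are illustrative): for a dyadic distribution, one in which every probability is a power of 1/2, Huffman coding assigns each symbol a codeword of length exactly -log2(p), and the bitstream obtained by encoding i.i.d. samples is itself uniform over bits.
```python
import heapq, itertools, random

def huffman_code(probs):
    """Return {symbol: bitstring} for a {symbol: probability} mapping."""
    counter = itertools.count()  # unique tie-breaker so dicts are never compared
    heap = [(p, next(counter), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # merge the two least probable subtrees
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(counter), merged))
    return heap[0][2]

dyadic = {"A": 1/2, "B": 1/4, "C": 1/8, "D": 1/8}  # every probability is 2**-k
code = huffman_code(dyadic)
print(code)  # codeword lengths are exactly -log2(p): 1, 2, 3, 3

random.seed(0)
sample = random.choices(list(dyadic), weights=dyadic.values(), k=100_000)
bits = "".join(code[s] for s in sample)
print(sum(b == "1" for b in bits) / len(bits))  # close to 0.5: near-uniform output
```
Real data is of course not dyadic; hence ENCORE's secondary stream, which records the transformations needed to make it so.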
Related papers
- Threshold Selection for Iterative Decoding of $(v,w)$-regular Binary Codes [84.0257274213152]
Iterative bit flipping decoders are an efficient choice for sparse $(v,w)$-regular codes.
We propose concrete criteria for threshold determination, backed by a closed-form model.
arXiv Detail & Related papers (2025-01-23T17:38:22Z)
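To make the role of the threshold concrete, here is a toy Python sketch of hard-decision bit flipping; it is not the paper's decoder, and the (2,3)-regular parity-check matrix (the edge-vertex incidence matrix of K4) is chosen purely for illustration.
```python
import numpy as np

def bit_flip_decode(H, y, threshold, max_iters=50):
    """Flip every bit that participates in at least `threshold` failing checks."""
    x = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2   # 1 marks an unsatisfied parity check
        if not syndrome.any():
            return x, True       # valid codeword reached
        unsat = H.T @ syndrome   # per-bit count of failing checks
        flips = unsat >= threshold
        if not flips.any():
            return x, False      # decoder stalls: threshold set too high
        x[flips] ^= 1
    return x, False

# (2,3)-regular toy code: bits are the edges of K4, checks are its 4 vertices.
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]], dtype=np.uint8)
received = np.zeros(6, dtype=np.uint8)
received[0] ^= 1  # inject a single bit error into the all-zeros codeword
print(bit_flip_decode(H, received, threshold=2))  # recovers all-zeros, True
```
With the threshold equal to the bit degree v = 2, a single error is the unique bit whose checks all fail; choosing thresholds in a principled way for general (v, w) is what the paper addresses.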
- Learned Compression of Encoding Distributions [1.4732811715354455]
The entropy bottleneck is a common component used in many learned compression models.
We propose a method that adapts the encoding distribution to match the latent data distribution for a specific input.
Our method achieves a Bjontegaard-Delta (BD)-rate gain of -7.10% on the Kodak test dataset.
arXiv Detail & Related papers (2024-06-18T21:05:51Z)
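The saving from adapting the encoding distribution can be seen numerically. This sketch uses illustrative values, not the paper's model, together with the standard identity that coding data distributed as p under an encoding distribution q costs the cross-entropy H(p, q) = H(p) + KL(p || q) bits per symbol.
```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])  # latent distribution for one specific input
q = np.array([1/3, 1/3, 1/3])  # fixed encoding distribution shared by all inputs

rate_fixed = -(p * np.log2(q)).sum()    # cross-entropy H(p, q), about 1.58 bits
rate_adapted = -(p * np.log2(p)).sum()  # entropy H(p), about 1.16 bits
print(rate_fixed - rate_adapted)        # the overhead is exactly KL(p || q)
```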
- Lossy Compression with Gaussian Diffusion [28.930398810600504]
We describe a novel lossy compression approach called DiffC which is based on unconditional diffusion generative models.
We implement a proof of concept and find that it works surprisingly well despite the lack of an encoder transform.
We show that a flow-based reconstruction achieves a 3 dB gain over ancestral sampling at high bitrates.
arXiv Detail & Related papers (2022-06-17T16:46:31Z)
- End-to-end optimized image compression with competition of prior distributions [29.585370305561582]
We propose a compression scheme that uses a single convolutional autoencoder and multiple learned prior distributions.
Our method offers rate-distortion performance comparable to that obtained with a predicted parametrized prior.
arXiv Detail & Related papers (2021-11-17T15:04:01Z)
- End-to-End Image Compression with Probabilistic Decoding [31.38636002751645]
We propose a learned image compression framework to support probabilistic decoding.
The proposed framework relies on an invertible neural network-based transform to convert pixels into coefficients.
arXiv Detail & Related papers (2021-09-30T04:07:09Z)
- Dense Coding with Locality Restriction for Decoder: Quantum Encoders vs. Super-Quantum Encoders [67.12391801199688]
We investigate dense coding by imposing various locality restrictions to our decoder.
In this task, the sender Alice and the receiver Bob share an entangled state.
arXiv Detail & Related papers (2021-09-26T07:29:54Z)
- Text Compression-aided Transformer Encoding [77.16960983003271]
We propose explicit and implicit text compression approaches to enhance the Transformer encoding.
Backbone information, meaning the gist of the input text, is not otherwise specifically focused on.
Our evaluation on benchmark datasets shows that the proposed explicit and implicit text compression approaches improve results in comparison to strong baselines.
arXiv Detail & Related papers (2021-02-11T11:28:39Z)
- Unfolding Neural Networks for Compressive Multichannel Blind Deconvolution [71.29848468762789]
We propose a learned-structured unfolding neural network for the problem of compressive sparse multichannel blind-deconvolution.
In this problem, each channel's measurements are given as the convolution of a common source signal with a sparse filter.
We demonstrate that our method is superior to classical structured compressive sparse multichannel blind-deconvolution methods in terms of accuracy and speed of sparse filter recovery.
arXiv Detail & Related papers (2020-10-22T02:34:33Z)
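For orientation, the measurement model this entry describes can be sketched as follows; it is a hypothetical Python illustration of the problem setup only, not the proposed unfolded network, and all dimensions are arbitrary.
```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m, num_ch = 64, 3, 20, 4  # signal length, filter sparsity, measurements, channels
s = rng.standard_normal(n)      # common (unknown) source signal

measurements = []
for _ in range(num_ch):
    h = np.zeros(n)
    h[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # sparse filter
    x = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h)))  # circular convolution
    A = rng.standard_normal((m, n))  # compressive measurement matrix, m < n
    measurements.append(A @ x)       # each channel observes only m measurements
```
Recovery then amounts to estimating s and the sparse filters h from the compressed measurements alone, which the paper tackles with an unfolded network.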
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- On Sparsifying Encoder Outputs in Sequence-to-Sequence Models [90.58793284654692]
We take Transformer as the testbed and introduce a layer of gates in-between the encoder and the decoder.
The gates are regularized using the expected value of the sparsity-inducing $L_0$ penalty.
We investigate the effects of this sparsification on two machine translation and two summarization tasks.
arXiv Detail & Related papers (2020-04-24T16:57:52Z)
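The expected-$L_0$ penalty is commonly made differentiable via hard-concrete gates (Louizos et al., 2018); the sketch below assumes that construction, which may differ from this paper's exact formulation.
```python
import torch

class HardConcreteGate(torch.nn.Module):
    """Stochastic gates in [0, 1] with a differentiable expected-L0 penalty."""
    def __init__(self, n, beta=2/3, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.log_alpha = torch.nn.Parameter(torch.zeros(n))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self):
        # Reparameterized sample: stretched, hard-clipped binary concrete.
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def expected_l0(self):
        # Sum over gates of P(gate != 0): the differentiable L0 surrogate.
        shift = self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
        return torch.sigmoid(self.log_alpha - shift).sum()

gates = HardConcreteGate(n=512)              # e.g., one gate per encoder state
z = gates()                                  # multiply encoder outputs by z
sparsity_loss = 1e-2 * gates.expected_l0()   # added to the task training loss
```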
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.