Transformers -- Messages in Disguise
- URL: http://arxiv.org/abs/2411.10287v1
- Date: Fri, 15 Nov 2024 15:42:29 GMT
- Title: Transformers -- Messages in Disguise
- Authors: Joshua H. Tyler, Mohamed K. M. Fadul, Donald R. Reising,
- Abstract summary: NN-based cryptography is being investigated due to its ability to learn and implement random cryptographic schemes.
RANDOM comprises three new NN layers: (i) a projection layer, (ii) an inverse projection layer, and (iii) a dot-product layer.
This results in an ANC network that (i) is computationally efficient, (ii) ensures the encrypted message is unique to the encryption key, and (iii) does not induce any communication overhead.
- Score: 3.74142789780782
- License:
- Abstract: Modern cryptography, such as Rivest Shamir Adleman (RSA) and Secure Hash Algorithm (SHA), has been designed by humans based on our understanding of cryptographic methods. Neural Network (NN) based cryptography is being investigated due to its ability to learn and implement random cryptographic schemes that may be harder to decipher than human-designed algorithms. NN based cryptography may create a new cryptographic scheme that is NN specific and that changes every time the NN is (re)trained. This is attractive since it would require an adversary to restart its process(es) to learn or break the cryptographic scheme every time the NN is (re)trained. Current challenges facing NN-based encryption include additional communication overhead due to encoding to correct bit errors, quantizing the continuous-valued output of the NN, and enabling One-Time-Pad encryption. With this in mind, the Random Adversarial Data Obfuscation Model (RANDOM) Adversarial Neural Cryptography (ANC) network is introduced. RANDOM is comprised of three new NN layers: the (i) projection layer, (ii) inverse projection layer, and (iii) dot-product layer. This results in an ANC network that (i) is computationally efficient, (ii) ensures the encrypted message is unique to the encryption key, and (iii) does not induce any communication overhead. RANDOM only requires around 100 KB to store and can provide up to 2.5 megabytes per second of end-to-end encrypted communication.
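The abstract names three new layer types but does not spell out their internals here. The sketch below is a minimal, hypothetical illustration only: it assumes the projection layer applies a key-seeded random orthogonal matrix, the inverse projection layer applies its transpose, and the dot-product layer mixes the projected message with a +/-1 key vector. The layer names come from the abstract; every implementation detail (QR-based orthogonal projection, element-wise key mixing, dimensions) is an assumption, not the published RANDOM architecture.

```python
# Minimal, hypothetical sketch of RANDOM-style layers (all implementation details
# below are assumptions for illustration, not the published architecture).
import torch
import torch.nn as nn


class ProjectionLayer(nn.Module):
    """Projects a plaintext vector with a key-seeded random orthogonal matrix."""

    def __init__(self, dim: int, key_seed: int):
        super().__init__()
        g = torch.Generator().manual_seed(key_seed)
        # QR decomposition of a random Gaussian matrix yields an orthogonal matrix.
        q, _ = torch.linalg.qr(torch.randn(dim, dim, generator=g))
        self.register_buffer("Q", q)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.Q


class InverseProjectionLayer(nn.Module):
    """Undoes the projection using the transpose of the same orthogonal matrix."""

    def __init__(self, projection: ProjectionLayer):
        super().__init__()
        self.register_buffer("Qt", projection.Q.t().contiguous())

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return y @ self.Qt


class DotProductLayer(nn.Module):
    """Mixes the projected message with a per-message key (element-wise here)."""

    def forward(self, y: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
        return y * key


if __name__ == "__main__":
    proj = ProjectionLayer(dim=8, key_seed=42)
    inv = InverseProjectionLayer(proj)
    mix = DotProductLayer()

    msg = torch.randn(1, 8)                 # toy continuous-valued "message"
    key = torch.randn(1, 8).sign()          # toy +/-1 one-time key
    ciphertext = mix(proj(msg), key)        # encrypt: project, then key-mix
    recovered = inv(ciphertext * key)       # decrypt: undo mixing, invert projection
    print(torch.allclose(recovered, msg, atol=1e-5))  # True
```

In this toy version, decryption reduces to an element-wise product and a transpose because the projection is orthogonal; whether RANDOM realizes its overhead-free property this way is not stated in the abstract.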
Related papers
- Bi-CryptoNets: Leveraging Different-Level Privacy for Encrypted Inference [4.754973569457509]
We decompose the input data into sensitive and insensitive segments according to their importance and privacy.
The sensitive segment contains important and private information, such as human faces, and is protected with strong homomorphic encryption.
The insensitive segment contains background content and is protected by adding perturbations.
arXiv Detail & Related papers (2024-02-02T10:35:05Z) - Generating Interpretable Networks using Hypernetworks [16.876961991785507]
We explore the possibility of using hypernetworks to generate interpretable networks whose underlying algorithms are not yet known.
For the task of computing L1 norms, hypernetworks find three algorithms: (a) the double-sided algorithm, (b) the convexity algorithm, and (c) the pudding algorithm.
We show that a trained hypernetwork can correctly construct models for input dimensions not seen in training, demonstrating systematic generalization.
arXiv Detail & Related papers (2023-12-05T18:55:32Z) - Revocable Cryptography from Learning with Errors [61.470151825577034]
We build on the no-cloning principle of quantum mechanics and design cryptographic schemes with key-revocation capabilities.
We consider schemes where secret keys are represented as quantum states with the guarantee that, once the secret key is successfully revoked from a user, they no longer have the ability to perform the same functionality as before.
arXiv Detail & Related papers (2023-02-28T18:58:11Z) - Deep Neural Networks for Encrypted Inference with TFHE [0.0]
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data without decryption.
TFHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information.
We show how to construct Deep Neural Networks (DNNs) that are compatible with the constraints of TFHE, an FHE scheme that allows arbitrary depth computation circuits.
arXiv Detail & Related papers (2023-02-13T09:53:31Z) - Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z) - Enhancing Networking Cipher Algorithms with Natural Language [0.0]
Natural language processing is considered the weakest link in a networking encryption model.
This paper summarizes how languages can be integrated into symmetric encryption as a way to assist in the encryption of vulnerable streams.
arXiv Detail & Related papers (2022-06-22T09:05:52Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme that uses {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks (a hedged sketch of one such decomposition appears after this list).
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - NeuraCrypt: Hiding Private Health Data via Random Neural Networks for Public Training [64.54200987493573]
We propose NeuraCrypt, a private encoding scheme based on random deep neural networks.
NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data-owner (a minimal sketch of this idea appears after this list).
We show that NeuraCrypt achieves accuracy competitive with non-private baselines on a variety of x-ray tasks.
arXiv Detail & Related papers (2021-06-04T13:42:21Z) - FFConv: Fast Factorized Neural Network Inference on Encrypted Data [9.868787266501036]
We propose a low-rank factorization method called FFConv to unify convolution and ciphertext packing.
Compared to prior art LoLa and Falcon, our method reduces the inference latency by up to 87% and 12%, respectively.
arXiv Detail & Related papers (2021-02-06T03:10:13Z) - Faster Secure Data Mining via Distributed Homomorphic Encryption [108.77460689459247]
Homomorphic Encryption (HE) is receiving increasing attention for its ability to perform computations directly on encrypted data.
We propose a novel, general, distributed HE-based data mining framework as a step towards solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing over various data mining algorithms and benchmark data-sets.
arXiv Detail & Related papers (2020-06-17T18:14:30Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
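As referenced in the Quantized Neural Networks entry above, the sketch below illustrates decomposing a weight matrix into scaled {-1, +1} branches. It uses a generic greedy residual binarization; the paper's actual encoding decomposition and acceleration scheme may differ.

```python
# Hedged illustration: greedy residual binarization of a weight matrix into
# scaled {-1, +1} branches. This is a generic multi-branch technique, not
# necessarily the exact encoding decomposition proposed in the paper.
import torch


def binary_decompose(w: torch.Tensor, num_branches: int = 3):
    """Approximate w as sum_k alpha_k * B_k with every B_k in {-1, +1}."""
    branches, residual = [], w.clone()
    for _ in range(num_branches):
        b = torch.where(residual >= 0,
                        torch.ones_like(residual),
                        -torch.ones_like(residual))  # {-1, +1} branch
        alpha = residual.abs().mean()                # per-branch scale
        branches.append((alpha, b))
        residual = residual - alpha * b              # refine on the residual
    return branches


w = torch.randn(4, 4)
approx = sum(alpha * b for alpha, b in binary_decompose(w))
print((w - approx).abs().mean())  # error shrinks as num_branches grows
```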
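As referenced in the NeuraCrypt entry above, the sketch below illustrates the general idea of encoding data with a randomly constructed, frozen network known only to the data owner. The layer sizes, nonlinearity, and seed-as-secret convention are assumptions for illustration, not the published NeuraCrypt architecture.

```python
# Hedged sketch of a NeuraCrypt-style private encoder: a randomly initialized,
# frozen network whose weights (or seed) act as the data-owner's secret.
# Layer sizes and the ReLU nonlinearity are assumptions, not the published design.
import torch
import torch.nn as nn


def make_private_encoder(in_dim: int, hidden: int, out_dim: int, seed: int) -> nn.Module:
    torch.manual_seed(seed)                  # the seed stands in for the owner's secret
    encoder = nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )
    for p in encoder.parameters():           # never trained; the mapping stays fixed
        p.requires_grad_(False)
    return encoder


encoder = make_private_encoder(in_dim=16, hidden=64, out_dim=32, seed=1234)
raw_record = torch.randn(1, 16)              # stand-in for a raw patient record
encoded = encoder(raw_record)                # only encoded features leave the owner
print(encoded.shape)                         # torch.Size([1, 32])
```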
This list is automatically generated from the titles and abstracts of the papers on this site.