Can Transformers Break Encryption Schemes via In-Context Learning?
- URL: http://arxiv.org/abs/2508.10235v1
- Date: Wed, 13 Aug 2025 23:09:32 GMT
- Title: Can Transformers Break Encryption Schemes via In-Context Learning?
- Authors: Jathin Korrapati, Patrick Mendoza, Aditya Tomar, Abein Abraham
- Abstract summary: In-context learning (ICL) has emerged as a powerful capability of transformer-based language models. We propose a novel application of ICL to the domain of cryptographic function learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In-context learning (ICL) has emerged as a powerful capability of transformer-based language models, enabling them to perform tasks by conditioning on a small number of examples presented at inference time, without any parameter updates. Prior work has shown that transformers can generalize over simple function classes like linear functions, decision trees, and even neural networks purely from context, focusing on numerical or symbolic reasoning over well-structured underlying functions. Instead, we propose a novel application of ICL to the domain of cryptographic function learning, specifically focusing on ciphers such as mono-alphabetic substitution and Vigenère ciphers, two classes of private-key encryption schemes. These ciphers involve a fixed but hidden bijective mapping between plain text and cipher text characters. Given a small set of (cipher text, plain text) pairs, the goal is for the model to infer the underlying substitution and decode a new cipher text word. This setting poses a structured inference challenge, which is well-suited for evaluating the inductive biases and generalization capabilities of transformers under the ICL paradigm. Code is available at https://github.com/adistomar/CS182-project.
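To make the setting concrete, here is a minimal sketch (our own illustration, not the paper's code; all function names are assumptions) of the two cipher families and of a few-shot prompt built from (ciphertext, plaintext) pairs. It assumes lowercase alphabetic text only.

```python
import random
import string

def make_substitution_key(seed=0):
    """Random bijection over lowercase letters (mono-alphabetic substitution)."""
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    return dict(zip(letters, shuffled))

def substitute(text, key):
    """Apply the fixed character mapping; non-mapped characters pass through."""
    return "".join(key.get(c, c) for c in text)

def vigenere(text, keyword, decrypt=False):
    """Vigenère cipher: shift each letter by the matching keyword letter."""
    sign = -1 if decrypt else 1
    out = []
    for i, c in enumerate(text):
        shift = ord(keyword[i % len(keyword)]) - ord("a")
        out.append(chr((ord(c) - ord("a") + sign * shift) % 26 + ord("a")))
    return "".join(out)

# Few-shot ICL episode: demonstration pairs plus a query ciphertext,
# from which the model must infer the hidden mapping in context.
key = make_substitution_key()
demos = [(substitute(w, key), w) for w in ["hello", "world"]]
query = substitute("held", key)
```

A model that has inferred the bijection from `demos` can decode `query` without any weight updates, which is exactly the structured-inference challenge the abstract describes.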
Related papers
- ALICE: An Interpretable Neural Architecture for Generalization in Substitution Ciphers [0.3403377445166164]
We present cryptogram solving as an ideal testbed for studying neural network reasoning and generalization. We develop ALICE, a simple encoder-only Transformer that sets a new state of the art for both accuracy and speed on this decryption problem. Surprisingly, ALICE generalizes to unseen ciphers after training on only $\sim 1500$ unique ciphers.
arXiv Detail & Related papers (2025-09-08T23:33:53Z) - PaTH Attention: Position Encoding via Accumulating Householder Transformations [56.32365080761523]
PaTH is a flexible, data-dependent position encoding scheme based on accumulated products of Householder transformations. We derive an efficient parallel training algorithm by exploiting a compact representation of products of Householder matrices.
arXiv Detail & Related papers (2025-05-22T08:36:09Z) - Compile-Time Fully Homomorphic Encryption of Vectors: Eliminating Online Encryption via Algebraic Basis Synthesis [1.3824176915623292]
Ciphertexts are constructed from precomputed encrypted basis vectors combined with a runtime-scaled encryption of zero. We formalize the method as a randomized $\mathbb{Z}_t$-module morphism and prove that it satisfies IND-CPA security under standard assumptions. Unlike prior designs that require a pool of random encryptions of zero, our construction achieves equivalent security using a single zero ciphertext multiplied by a fresh scalar at runtime.
arXiv Detail & Related papers (2025-05-19T00:05:18Z) - ICL CIPHERS: Quantifying "Learning'' in In-Context Learning via Substitution Ciphers [20.65223270978325]
We introduce ICL CIPHERS, a class of task reformulations based on substitution ciphers borrowed from classic cryptography. In this approach, a subset of tokens in the in-context inputs is substituted with other (irrelevant) tokens, rendering English sentences less comprehensible to the human eye. We show that LLMs are better at solving ICL CIPHERS with BIJECTIVE mappings than the NON-BIJECTIVE (irreversible) baseline.
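The bijective token substitution underlying ICL CIPHERS can be sketched as follows (a hedged illustration of the idea only; the function names and vocabulary are our own assumptions, not the paper's implementation). Because the mapping is a bijection over the vocabulary, the transformation is reversible, which is what distinguishes it from the non-bijective baseline.

```python
import random

def bijective_cipher(tokens, vocab, seed=0):
    """Replace each token via a fixed bijection over the vocabulary (reversible)."""
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    mapping = dict(zip(vocab, shuffled))  # bijection: vocab -> vocab
    return [mapping[t] for t in tokens], mapping

def invert(tokens, mapping):
    """Recover the original tokens, possible only because the map is bijective."""
    inverse = {v: k for k, v in mapping.items()}
    return [inverse[t] for t in tokens]
```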
arXiv Detail & Related papers (2025-04-28T00:05:29Z) - Cryptanalysis on Lightweight Verifiable Homomorphic Encryption [7.059472280274008]
Verifiable Homomorphic Encryption (VHE) is a cryptographic technique that integrates Homomorphic Encryption (HE) with Verifiable Computation (VC). It serves as a crucial technology for ensuring both privacy and integrity in outsourced computation. This paper presents efficient attacks that exploit the homomorphic properties of encryption schemes.
arXiv Detail & Related papers (2025-02-18T08:13:10Z) - Algorithmic Capabilities of Random Transformers [49.73113518329544]
We investigate what functions can be learned by randomly initialized transformers in which only the embedding layers are optimized.
We find that these random transformers can perform a wide range of meaningful algorithmic tasks.
Our results indicate that some algorithmic capabilities are present in transformers even before these models are trained.
arXiv Detail & Related papers (2024-10-06T06:04:23Z) - GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher [85.18213923151717]
Experimental results show that certain ciphers bypass the safety alignment of GPT-4 almost 100% of the time in several safety domains.
We propose a novel SelfCipher that uses only role play and several demonstrations in natural language to evoke this capability.
arXiv Detail & Related papers (2023-08-12T04:05:57Z) - CipherSniffer: Classifying Cipher Types [0.0]
We frame the decryption task as a classification problem.
We first create a dataset of transpositions, substitutions, text reversals, word reversals, sentence shifts, and unencrypted text.
arXiv Detail & Related papers (2023-06-13T20:18:24Z) - Memorization for Good: Encryption with Autoregressive Language Models [8.645826579841692]
We propose the first symmetric encryption algorithm with autoregressive language models (SELM).
We show that autoregressive LMs can encode arbitrary data into a compact real-valued vector (i.e., encryption) and then losslessly decode the vector to the original message (i.e. decryption) via random subspace optimization and greedy decoding.
arXiv Detail & Related papers (2023-05-15T05:42:34Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - Sentence Bottleneck Autoencoders from Transformer Language Models [53.350633961266375]
We build a sentence-level autoencoder from a pretrained, frozen transformer language model.
We adapt the masked language modeling objective as a generative, denoising one, while only training a sentence bottleneck and a single-layer modified transformer decoder.
We demonstrate that the sentence representations discovered by our model achieve better quality than previous methods that extract representations from pretrained transformers on text similarity tasks, style transfer, and single-sentence classification tasks in the GLUE benchmark, while using fewer parameters than large pretrained models.
arXiv Detail & Related papers (2021-08-31T19:39:55Z) - Cross-Thought for Sentence Encoder Pre-training [89.32270059777025]
Cross-Thought is a novel approach to pre-training sequence encoders.
We train a Transformer-based sequence encoder over a large set of short sequences.
Experiments on question answering and textual entailment tasks demonstrate that our pre-trained encoder can outperform state-of-the-art encoders.
arXiv Detail & Related papers (2020-10-07T21:02:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.