An LLM Framework For Cryptography Over Chat Channels
- URL: http://arxiv.org/abs/2504.08871v1
- Date: Fri, 11 Apr 2025 11:34:14 GMT
- Title: An LLM Framework For Cryptography Over Chat Channels
- Authors: Danilo Gligoroski, Mayank Raikwar, Sonu Kumar Jha
- Abstract summary: Governments all over the world are proposing legislation to detect, backdoor, or even ban encrypted communication. We propose a novel cryptographic embedding framework that enables covert Public Key or Symmetric Key encrypted communication over public chat channels with human-like produced texts.
- Score: 0.13108652488669734
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent advancements in Large Language Models (LLMs) have transformed communication, yet their role in secure messaging remains underexplored, especially in surveillance-heavy environments. At the same time, governments around the world are proposing legislation to detect, backdoor, or even ban encrypted communication. This emphasizes the need for alternative ways to communicate securely and covertly over open channels. We propose a novel cryptographic embedding framework that enables covert Public Key or Symmetric Key encrypted communication over public chat channels with human-like produced texts. Some unique properties of our framework are: 1. It is LLM agnostic, i.e., it allows participants to use different local LLM models independently; 2. It is pre- or post-quantum agnostic; 3. It ensures indistinguishability from human-like chat-produced texts. Thus, it offers a viable alternative where traditional encryption is detectable and restricted.
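The general embedding idea behind such frameworks can be illustrated with a toy sketch. This is not the paper's actual construction: here, ciphertext bits (produced by any public-key or symmetric cipher) are hidden in the choice among a model's top-ranked next tokens, and the receiver recovers them by re-ranking candidates with the same model. A hard-coded table stands in for a real LLM's top-2 candidates; all names and the vocabulary are hypothetical.

```python
def candidates(context):
    """Toy stand-in for an LLM's top-2 next-token candidates."""
    table = {
        "<s>":     ["the", "a"],
        "the":     ["weather", "meeting"],
        "a":       ["walk", "coffee"],
        "weather": ["is", "was"],
        "meeting": ["is", "was"],
        "walk":    ["helps", "sounds"],
        "coffee":  ["helps", "sounds"],
    }
    return table[context]

def embed(bits):
    """Encode each ciphertext bit as a choice between the top-2 tokens."""
    context, words = "<s>", []
    for b in bits:
        word = candidates(context)[b]
        words.append(word)
        context = word
    return " ".join(words)

def extract(text):
    """Recover the bits by re-ranking candidates with the same model."""
    context, bits = "<s>", []
    for word in text.split():
        bits.append(candidates(context).index(word))
        context = word
    return bits

bits = [1, 0, 1]            # in practice: bits of an encrypted payload
cover = embed(bits)          # "a walk sounds" -- reads as ordinary chat
assert extract(cover) == bits
```

Because both parties only need a model that ranks candidates the same way, the sketch also hints at why the real framework can be LLM agnostic: any agreed-upon local model serves as the shared codebook.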
Related papers
- Robust Steganography from Large Language Models [1.5749416770494704]
We study the problem of robust steganography.
We design and implement our steganographic schemes that embed arbitrary secret messages into natural language text.
arXiv Detail & Related papers (2025-04-11T21:06:36Z)
- Secure Semantic Communication With Homomorphic Encryption [52.5344514499035]
This paper explores the feasibility of applying homomorphic encryption to SemCom.
We propose a task-oriented SemCom scheme secured through homomorphic encryption.
arXiv Detail & Related papers (2025-01-17T13:26:14Z)
- $$\mathbf{L^2\cdot M = C^2}$$ Large Language Models are Covert Channels [11.002271137347295]
Large Language Models (LLMs) have gained significant popularity recently.
LLMs are susceptible to various attacks but can also improve the security of diverse systems.
How well do open source LLMs behave as covertext to, e.g., facilitate censorship-resistant communication?
arXiv Detail & Related papers (2024-05-24T15:47:35Z)
- Silent Guardian: Protecting Text from Malicious Exploitation by Large Language Models [63.91178922306669]
We introduce Silent Guardian, a text protection mechanism against large language models (LLMs).
By carefully modifying the text to be protected into a truncation protection example (TPE), the mechanism induces LLMs to first sample the end token, thus directly terminating the interaction.
We show that SG can effectively protect the target text under various configurations and achieve almost 100% protection success rate in some cases.
arXiv Detail & Related papers (2023-12-15T10:30:36Z)
- Federated Learning is Better with Non-Homomorphic Encryption [1.4110007887109783]
Federated Learning (FL) offers a paradigm that empowers distributed AI model training without collecting raw data.
One of the popular methodologies is employing Homomorphic Encryption (HE).
We propose an innovative framework that synergizes permutation-based compressors with Classical Cryptography.
arXiv Detail & Related papers (2023-12-04T17:37:41Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
arXiv Detail & Related papers (2023-10-10T03:06:38Z)
- GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher [85.18213923151717]
Experimental results show certain ciphers succeed almost 100% of the time to bypass the safety alignment of GPT-4 in several safety domains.
We propose a novel SelfCipher that uses only role play and several demonstrations in natural language to evoke this capability.
arXiv Detail & Related papers (2023-08-12T04:05:57Z)
- Leveraging Generative Models for Covert Messaging: Challenges and Tradeoffs for "Dead-Drop" Deployments [10.423657458233713]
Generative models of natural language text encode message-carrying bits into a sequence of samples from the model, ultimately yielding a plausible natural-language covertext.
We make these challenges concrete, by considering the natural application of such a pipeline: namely, "dead-drop" covert messaging over large, public internet platforms.
We implement a system around this model-based format-transforming encryption pipeline, and give an empirical analysis of its performance and (heuristic) security.
arXiv Detail & Related papers (2021-10-13T20:05:26Z)
- Quasi-Equivalence Discovery for Zero-Shot Emergent Communication [63.175848843466845]
We present a novel problem setting and the Quasi-Equivalence Discovery algorithm that allows for zero-shot coordination (ZSC).
We show that these two factors lead to unique optimal ZSC policies in referential games.
QED can iteratively discover the symmetries in this setting and converges to the optimal ZSC policy.
arXiv Detail & Related papers (2021-03-14T23:42:37Z)
- Differential Privacy and Natural Language Processing to Generate Contextually Similar Decoy Messages in Honey Encryption Scheme [0.0]
Honey Encryption is an approach to encrypt the messages using low min-entropy keys, such as weak passwords, OTPs, PINs, credit card numbers.
The ciphertext produced, when decrypted with any incorrect key, yields plausible-looking but bogus plaintext called a "honey message".
A gibberish, random assortment of words is not enough to fool an attacker; the decoys must be acceptable and convincing whether or not the attacker knows some information about the genuine source.
arXiv Detail & Related papers (2020-10-29T23:02:32Z)
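The core mechanism of honey encryption can be illustrated with a deliberately simplified toy sketch (not a secure construction): a distribution-transforming encoder maps every message in a small space to an index, and a modular mask plays the role of the key, so decryption under any key lands on some plausible message rather than random bytes. The MESSAGES list and the helper names here are purely illustrative.

```python
# Toy honey-encryption sketch: every key decrypts to *some* plausible
# message from a fixed message space, so a wrong password guess yields
# a convincing decoy ("honey message") instead of gibberish.

MESSAGES = ["transfer approved", "transfer denied",
            "meeting at noon", "meeting cancelled"]  # hypothetical space

def encrypt(msg, key):
    seed = MESSAGES.index(msg)           # distribution-transforming encode
    return (seed + key) % len(MESSAGES)  # mask the seed with the key

def decrypt(ct, key):
    # Any key maps the ciphertext back into the message space.
    return MESSAGES[(ct - key) % len(MESSAGES)]

ct = encrypt("meeting at noon", key=3)
assert decrypt(ct, 3) == "meeting at noon"   # correct key: real message
assert decrypt(ct, 1) in MESSAGES            # wrong key: plausible decoy
```

The paper's contribution targets the hard part this sketch glosses over: making the decoy messages contextually similar to the genuine plaintext, which is where differential privacy and NLP come in.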
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.