FreStega: A Plug-and-Play Method for Boosting Imperceptibility and Capacity in Generative Linguistic Steganography for Real-World Scenarios
- URL: http://arxiv.org/abs/2412.19652v2
- Date: Mon, 30 Dec 2024 07:49:13 GMT
- Title: FreStega: A Plug-and-Play Method for Boosting Imperceptibility and Capacity in Generative Linguistic Steganography for Real-World Scenarios
- Authors: Kaiyi Pang,
- Abstract summary: Linguistic steganography embeds secret information in seemingly innocent texts, safeguarding privacy in surveillance environments.
We propose FreStega, a plug-and-play method to reconstruct the distribution of language models used for generative linguistic steganography.
- Abstract: Linguistic steganography embeds secret information in seemingly innocent texts, safeguarding privacy in surveillance environments. Generative linguistic steganography leverages the probability distribution of language models (LMs) and applies steganographic algorithms to generate stego tokens; it has gained attention with recent Large Language Model (LLM) advances. To enhance security, researchers have developed distribution-preserving stego algorithms that minimize the gap between stego sampling and LM sampling. However, the reliance on language model distributions, coupled with their deviation from real-world cover texts, leaves stego text insufficiently imperceptible to steganalysis detectors in real-world scenarios. Moreover, LLM distributions tend to be more deterministic, resulting in reduced entropy and, consequently, lower embedding capacity. In this paper, we propose FreStega, a plug-and-play method to reconstruct the distribution of language models used for generative linguistic steganography. FreStega dynamically adjusts token probabilities from the language model at each step of stegotext auto-regressive generation, leveraging both the sequential and spatial dimensions. In the sequential dimension, the temperature is dynamically adjusted based on instantaneous entropy, enhancing the diversity of stego texts and boosting embedding capacity. In the spatial dimension, the distribution is aligned with guidance from a target-domain corpus, closely mimicking real cover text in the target domain. By reshaping the distribution, FreStega enhances the imperceptibility of stego text in practical scenarios and improves steganographic capacity by 15.41%, all without compromising the quality of the generated text. FreStega serves as a plug-and-play remedy that enhances the imperceptibility and embedding capacity of existing distribution-preserving steganography methods in real-world scenarios.
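The abstract describes a two-part reshaping of the next-token distribution: an entropy-driven temperature in the sequential dimension and alignment toward a target-domain distribution in the spatial dimension. Below is a minimal illustrative sketch of that idea in PyTorch; the function name frestega_adjust and the parameters tau_base, gamma, and alpha are hypothetical stand-ins for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def frestega_adjust(logits, target_domain_probs, tau_base=1.0, gamma=0.5, alpha=0.3):
    """Illustrative reshaping of one generation step's next-token distribution.

    logits:              [vocab] next-token logits from the language model
    target_domain_probs: [vocab] token distribution estimated from a target-domain corpus
    tau_base, gamma:     base temperature and sensitivity of the entropy-driven adjustment
    alpha:               strength of alignment toward the target-domain distribution
    (parameter names and values are assumptions, not taken from the paper)
    """
    probs = F.softmax(logits, dim=-1)

    # Sequential dimension: raise the temperature when instantaneous entropy is low,
    # flattening the distribution to recover embedding capacity.
    entropy = -(probs * torch.log(probs + 1e-12)).sum()
    max_entropy = torch.log(torch.tensor(float(logits.numel())))
    tau = tau_base * (1.0 + gamma * (1.0 - entropy / max_entropy))
    probs = F.softmax(logits / tau, dim=-1)

    # Spatial dimension: interpolate toward the target-domain token distribution
    # so stego text statistics move closer to real cover text in that domain.
    probs = (1.0 - alpha) * probs + alpha * target_domain_probs
    return probs / probs.sum()
```

A distribution-preserving embedding algorithm (e.g., arithmetic-coding-based steganography) would then select the stego token from the returned probabilities instead of the raw LM distribution, which is what makes such a reshaping step plug-and-play.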
Related papers
- Shifting-Merging: Secure, High-Capacity and Efficient Steganography via Large Language Models [25.52890764952079]
Steganography offers a way to securely hide messages within innocent-looking texts.
Large Language Models (LLMs) provide high-quality and explicit distributions.
ShiMer pseudorandomly shifts the probability interval of the LLM's distribution to obtain a private distribution.
arXiv Detail & Related papers (2025-01-01T09:51:15Z)
- Semantic Steganography: A Framework for Robust and High-Capacity Information Hiding using Large Language Models [25.52890764952079]
Generative linguistic steganography has become a prevalent technique for hiding information within model-generated texts.
We propose a semantic steganography framework based on Large Language Models (LLMs).
This framework offers robustness and reliability for transmission in complex channels, as well as resistance to text rendering and word blocking.
arXiv Detail & Related papers (2024-12-15T04:04:23Z)
- Detecting Machine-Generated Long-Form Content with Latent-Space Variables [54.07946647012579]
Existing zero-shot detectors primarily focus on token-level distributions, which are vulnerable to real-world domain shifts.
We propose a more robust method that incorporates abstract elements, such as event transitions, as key deciding factors to detect machine versus human texts.
arXiv Detail & Related papers (2024-10-04T18:42:09Z)
- Towards Next-Generation Steganalysis: LLMs Unleash the Power of Detecting Steganography [18.7168443402118]
Linguistic steganography provides a convenient way to hide messages, particularly with the emergence of AI generation technology.
Existing methods are limited to finding distribution differences between steganographic and normal texts at the level of symbolic statistics.
This paper proposes employing the human-like text-processing abilities of large language models (LLMs) to capture these differences from the perspective of human perception.
arXiv Detail & Related papers (2024-05-15T04:52:09Z)
- Language Model Decoding as Direct Metrics Optimization [87.68281625776282]
Current decoding methods struggle to generate texts that align with human texts across different aspects.
In this work, we frame decoding from a language model as an optimization problem with the goal of strictly matching the expected performance with human texts.
We prove that this induced distribution is guaranteed to improve the perplexity on human texts, which suggests a better approximation to the underlying distribution of human texts.
arXiv Detail & Related papers (2023-10-02T09:35:27Z)
- A Cheaper and Better Diffusion Language Model with Soft-Masked Noise [62.719656543880596]
Masked-Diffuse LM is a novel diffusion model for language modeling, inspired by linguistic features in languages.
Specifically, we design a linguistic-informed forward process which adds corruptions to the text through strategically soft-masking to better noise the textual data.
We demonstrate that our Masked-Diffuse LM can achieve better generation quality than the state-of-the-art diffusion models with better efficiency.
arXiv Detail & Related papers (2023-04-10T17:58:42Z)
- Provably Secure Generative Linguistic Steganography [29.919406917681282]
We present ADG, a novel provably secure generative linguistic steganographic method.
ADG embeds secret information by Adaptive Dynamic Grouping of tokens according to their probability given by an off-the-shelf language model.
arXiv Detail & Related papers (2021-06-03T17:27:10Z)
- GTAE: Graph-Transformer based Auto-Encoders for Linguistic-Constrained Text Style Transfer [119.70961704127157]
Non-parallel text style transfer has attracted increasing research interests in recent years.
Current approaches still lack the ability to preserve the content and even logic of original sentences.
We propose Graph-Transformer based Auto-Encoders (GTAE), which model a sentence as a linguistic graph and perform feature extraction and style transfer at the graph level.
arXiv Detail & Related papers (2021-02-01T11:08:45Z)
- Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model [58.27176041092891]
Recent research indicates that pretraining cross-lingual language models on large-scale unlabeled texts yields significant performance improvements.
We propose a novel unsupervised feature decomposition method that can automatically extract domain-specific features from the entangled pretrained cross-lingual representations.
Our proposed model leverages mutual information estimation to decompose the representations computed by a cross-lingual model into domain-invariant and domain-specific parts.
arXiv Detail & Related papers (2020-11-23T16:00:42Z)
- Improve Variational Autoencoder for Text Generation with Discrete Latent Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
VAEs with a strong auto-regressive decoder tend to ignore the latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)