Provable Secure Steganography Based on Adaptive Dynamic Sampling
- URL: http://arxiv.org/abs/2504.12579v1
- Date: Thu, 17 Apr 2025 01:52:09 GMT
- Title: Provable Secure Steganography Based on Adaptive Dynamic Sampling
- Authors: Kaiyi Pang,
- Abstract summary: Provably Secure Steganography (PSS) is the state of the art for making stego carriers indistinguishable from normal ones. Current PSS methods often require explicit access to the distribution of the generative model for both sender and receiver. We propose a provably secure steganography scheme that does not require access to explicit model distributions for either the sender or the receiver.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The security of private communication is increasingly at risk due to widespread surveillance. Steganography, a technique for embedding secret messages within innocuous carriers, enables covert communication over monitored channels. Provably Secure Steganography (PSS) is the state of the art for making stego carriers indistinguishable from normal ones by ensuring computational indistinguishability between stego and cover distributions. However, current PSS methods often require explicit access to the distribution of the generative model for both sender and receiver, limiting their practicality in black-box scenarios. In this paper, we propose a provably secure steganography scheme that does not require access to explicit model distributions for either the sender or the receiver. Our method incorporates a dynamic sampling strategy, enabling generative models to embed secret messages within multiple sampling choices without disrupting the normal generation process of the model. Extensive evaluations on three real-world datasets and three LLMs demonstrate that our black-box method is comparable to existing white-box steganography methods in efficiency and capacity, while eliminating the quality degradation that steganography introduces in model-generated outputs.
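The core idea of embedding bits in *which* of several candidate samples is emitted, using only black-box access to a model's sampler, can be sketched as follows. This is an illustrative reconstruction, not the paper's exact scheme: the toy weighted sampler, the shared seed `key`, and the distinct-candidate redraw are all assumptions for the demo (the redraw slightly perturbs the output distribution, which the paper's adaptive strategy is designed to avoid).

```python
import random

def sample_candidates(rng, vocab, weights, k):
    # Draw k candidate tokens from the (black-box) model's sampler.
    # Redraw until all k candidates are distinct so the choice is decodable
    # (a simplification; this perturbs the marginal distribution slightly).
    while True:
        cands = [rng.choices(vocab, weights)[0] for _ in range(k)]
        if len(set(cands)) == k:
            return cands

def embed_step(rng, vocab, weights, bits):
    # The secret bits select which of the 2**len(bits) candidates to emit.
    k = 2 ** len(bits)
    cands = sample_candidates(rng, vocab, weights, k)
    idx = int("".join(map(str, bits)), 2)
    return cands[idx]

def extract_step(rng, vocab, weights, token, nbits):
    # The receiver replays the same keyed randomness, regenerates the same
    # candidate list, and recovers the index (hence the bits) of the token.
    k = 2 ** nbits
    cands = sample_candidates(rng, vocab, weights, k)
    idx = cands.index(token)
    return [int(b) for b in format(idx, f"0{nbits}b")]

vocab = list("abcdefgh")
weights = [5, 3, 3, 2, 2, 2, 1, 1]  # hidden model distribution (assumption)
key = 42                            # shared secret seeding the PRF

sender = random.Random(key)
stego = [embed_step(sender, vocab, weights, [1, 0]),
         embed_step(sender, vocab, weights, [0, 1])]

receiver = random.Random(key)
out = (extract_step(receiver, vocab, weights, stego[0], 2)
       + extract_step(receiver, vocab, weights, stego[1], 2))
print(out)  # -> [1, 0, 0, 1]
```

Decoding succeeds because sender and receiver consume the shared randomness in lockstep, so both see identical candidate lists at every step.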
Related papers
- PSyDUCK: Training-Free Steganography for Latent Diffusion [22.17835886086284]
PSyDUCK is a training-free, model-agnostic steganography framework specifically designed for latent diffusion models. Our method dynamically adapts embedding strength to balance accuracy and detectability, significantly improving upon existing pixel-space approaches.
arXiv Detail & Related papers (2025-01-31T14:39:12Z)
- Multichannel Steganography: A Provably Secure Hybrid Steganographic Model for Secure Communication [0.0]
This study introduces a novel steganographic model that synthesizes Steganography by Cover Modification (CMO) and Steganography by Cover Synthesis (CSY). Building upon this model, a refined Steganographic Communication Protocol is proposed, enhancing resilience against sophisticated threats. This study explores the practicality and adaptability of the model in both constrained environments like SMS banking and resource-rich settings such as blockchain transactions.
arXiv Detail & Related papers (2025-01-08T13:58:07Z)
- Shifting-Merging: Secure, High-Capacity and Efficient Steganography via Large Language Models [25.52890764952079]
Steganography offers a way to securely hide messages within innocent-looking texts. Large Language Models (LLMs) provide high-quality, explicit token distributions. ShiMer pseudorandomly shifts the probability interval of the LLM's distribution to obtain a private distribution.
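The interval-shifting idea can be sketched as follows. This is a simplified illustration, not ShiMer's exact construction: `prf_shift`, the toy distribution, and encoding the message as a point in [0, 1) are all assumptions for the demo, and the real scheme's bit extraction and capacity accounting are more involved.

```python
import hashlib

def prf_shift(key, step):
    # Keyed pseudorandom offset r in [0, 1), shared by sender and receiver.
    h = hashlib.sha256(f"{key}:{step}".encode()).digest()
    return int.from_bytes(h[:8], "big") / 2**64

def intervals(probs):
    # Cumulative intervals [lo, hi) of the model distribution on [0, 1).
    out, c = [], 0.0
    for p in probs:
        out.append((c, c + p))
        c += p
    return out

def embed(probs, key, step, msg_point):
    # Shift the message point by r and emit the token whose interval covers
    # it; without the key, the emitted token looks like an ordinary sample.
    u = (msg_point + prf_shift(key, step)) % 1.0
    for tok, (lo, hi) in enumerate(intervals(probs)):
        if lo <= u < hi:
            return tok

def extract(probs, key, step, tok):
    # The receiver undoes the shift: the message point must lie in this
    # (possibly wrapped-around) interval, from which bits can be read off.
    lo, hi = intervals(probs)[tok]
    r = prf_shift(key, step)
    return ((lo - r) % 1.0, (hi - r) % 1.0)

probs = [0.5, 0.3, 0.2]  # toy next-token distribution (assumption)
tok = embed(probs, "secret-key", 0, 0.8125)   # 0.8125 = binary .1101
lo, hi = extract(probs, "secret-key", 0, tok)
```

The shift is a bijection on the circle [0, 1), so the marginal distribution of emitted tokens is unchanged; only the key-holder can map the token back to the message interval.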
arXiv Detail & Related papers (2025-01-01T09:51:15Z)
- Provably Robust and Secure Steganography in Asymmetric Resource Scenario [30.12327233257552]
Current provably secure steganography approaches require a pair of encoder and decoder to hide and extract private messages.
This paper proposes a novel provably robust and secure steganography framework for the asymmetric resource setting.
arXiv Detail & Related papers (2024-07-18T13:32:00Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor for the input/output space of diffusion models, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection(VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision [76.32860119056964]
We propose a novel class of denoising diffusion probabilistic models that learn to sample from distributions of signals that are never directly observed.
We demonstrate the effectiveness of our method on three challenging computer vision tasks.
arXiv Detail & Related papers (2023-06-20T17:53:00Z)
- SafeDiffuser: Safe Planning with Diffusion Probabilistic Models [97.80042457099718]
Diffusion model-based approaches have shown promise in data-driven planning, but there are no safety guarantees.
We propose a new method, called SafeDiffuser, to ensure diffusion probabilistic models satisfy specifications.
We test our method on a series of safe planning tasks, including maze path generation, legged robot locomotion, and 3D space manipulation.
arXiv Detail & Related papers (2023-05-31T19:38:12Z)
- Perfectly Secure Steganography Using Minimum Entropy Coupling [60.154855689780796]
We show that a steganography procedure is perfectly secure under Cachin (1998)'s information-theoretic model of steganography if and only if it is induced by a coupling.
We also show that, among perfectly secure procedures, a procedure maximizes information throughput if and only if it is induced by a minimum entropy coupling.
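The coupling idea can be illustrated with a small greedy heuristic, a common approximation for minimum entropy coupling (it is not guaranteed to be optimal, and the paper's exact construction is more involved; `greedy_mec` and the toy distributions below are assumptions for the demo):

```python
def greedy_mec(p, q, tol=1e-12):
    # Greedy coupling heuristic: repeatedly assign the largest remaining
    # mass of p to the largest remaining mass of q. Assumes p and q are
    # probability vectors that both sum to 1.
    p, q = list(p), list(q)
    coupling = {}
    while max(p) > tol:
        i = max(range(len(p)), key=p.__getitem__)
        j = max(range(len(q)), key=q.__getitem__)
        m = min(p[i], q[j])
        coupling[(i, j)] = coupling.get((i, j), 0.0) + m
        p[i] -= m
        q[j] -= m
    return coupling

# Couple a 2-symbol message distribution with a 2-token cover distribution:
# sampling from the joint lets the sender emit covertext whose marginal is
# exactly the cover distribution while staying correlated with the message.
msg_dist = [0.6, 0.4]
cover_dist = [0.5, 0.5]
joint = greedy_mec(msg_dist, cover_dist)
```

Here the greedy pairing concentrates mass on few joint outcomes (low coupling entropy), which is exactly what maximizes information throughput per the result above.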
arXiv Detail & Related papers (2022-10-24T17:40:07Z)
- Perturbing Across the Feature Hierarchy to Improve Standard and Strict Blackbox Attack Transferability [100.91186458516941]
We consider the blackbox transfer-based targeted adversarial attack threat model in the realm of deep neural network (DNN) image classifiers.
We design a flexible attack framework that allows for multi-layer perturbations and demonstrates state-of-the-art targeted transfer performance.
We analyze why the proposed methods outperform existing attack strategies and show an extension of the method in the case when limited queries to the blackbox model are allowed.
arXiv Detail & Related papers (2020-04-29T16:00:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.