DDIM-Driven Coverless Steganography Scheme with Real Key
- URL: http://arxiv.org/abs/2411.06486v2
- Date: Tue, 19 Nov 2024 04:31:31 GMT
- Title: DDIM-Driven Coverless Steganography Scheme with Real Key
- Authors: Mingyu Yu, Haonan Miao, Zhengping Jin, Sujuan Qin
- Abstract summary: Typical steganography embeds secret information into images by exploiting their redundancy.
In this work, we leverage the Denoising Diffusion Implicit Model (DDIM) to generate high-quality stego-images.
Our method offers low-image-correlation real-key protection by incorporating chaotic encryption.
- Score: 0.8892527836401771
- License:
- Abstract: Typical steganography embeds secret information into images by exploiting their redundancy. Since the visual imperceptibility of secret information is a key factor in scheme evaluation, conventional methods aim to balance this requirement with embedding capacity. Consequently, integrating emerging image generation models and secret transmission has been extensively explored to achieve a higher embedding capacity. Previous works mostly focus on generating stego-images with Generative Adversarial Networks (GANs) and usually rely on pseudo-keys, namely conditions or parameters involved in the generation process, which are related to secret images. However, studies on diffusion-based coverless steganography remain insufficient. In this work, we leverage the Denoising Diffusion Implicit Model (DDIM) to generate high-quality stego-images without introducing pseudo-keys, instead employing real keys to enhance security. Furthermore, our method offers low-image-correlation real-key protection by incorporating chaotic encryption. Another core innovation is that our method requires only one-time negotiation for multiple communications, unlike prior methods that necessitate negotiation for each interaction.
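As a rough illustration of two ingredients named in the abstract, the sketch below derives a chaotic keystream from a shared real-valued key via the logistic map and runs a deterministic DDIM reverse process (eta = 0) from key-seeded noise. The noise predictor, noise schedule, and key value are toy placeholders, not the authors' scheme.

```python
import numpy as np

def logistic_keystream(key: float, n: int, r: float = 3.99) -> np.ndarray:
    """Chaotic keystream in (0, 1) generated from a real-valued key by the logistic map."""
    x, out = key, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def encrypt_bits(bits: np.ndarray, key: float) -> np.ndarray:
    """XOR the secret bitstream with a thresholded chaotic keystream (self-inverse)."""
    stream = (logistic_keystream(key, bits.size) > 0.5).astype(np.uint8)
    return bits ^ stream

def toy_eps_model(x_t: np.ndarray, t: int) -> np.ndarray:
    # Placeholder for a trained noise-prediction network (assumption, no real weights).
    return 0.1 * x_t

def ddim_sample(x_T: np.ndarray, alpha_bars: np.ndarray) -> np.ndarray:
    """Deterministic DDIM reverse process (eta = 0): the same x_T always yields the same output."""
    x = x_T
    for t in range(len(alpha_bars) - 1, 0, -1):
        a_t, a_prev = alpha_bars[t], alpha_bars[t - 1]
        eps = toy_eps_model(x, t)
        x0 = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)      # predicted clean sample
        x = np.sqrt(a_prev) * x0 + np.sqrt(1.0 - a_prev) * eps  # DDIM update, no stochastic term
    return x

# Sender and receiver share one real-valued key; both derive the same initial noise from it,
# so the receiver can reproduce the generation trajectory without any cover image.
key = 0.3141592653
rng = np.random.default_rng(int(key * 1e9))
x_T = rng.standard_normal((8, 8))
alpha_bars = np.linspace(0.999, 0.01, 50)   # toy schedule: alpha_bar decreases with t
stego_like = ddim_sample(x_T, alpha_bars)

secret = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
cipher = encrypt_bits(secret, key)
assert np.array_equal(encrypt_bits(cipher, key), secret)   # decryption recovers the bits
```

Because DDIM with eta = 0 is deterministic, a receiver holding the same key can regenerate the identical sampling trajectory, which is the property key-driven coverless schemes of this kind rely on.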
Related papers
- Fusion is all you need: Face Fusion for Customized Identity-Preserving Image Synthesis [7.099258248662009]
Text-to-image (T2I) models have significantly advanced the development of artificial intelligence.
However, existing T2I-based methods often struggle to accurately reproduce the appearance of individuals from a reference image.
We leverage the pre-trained UNet from Stable Diffusion to incorporate the target face image directly into the generation process.
arXiv Detail & Related papers (2024-09-27T19:31:04Z)
- MFCLIP: Multi-modal Fine-grained CLIP for Generalizable Diffusion Face Forgery Detection [64.29452783056253]
The rapid development of photo-realistic face generation methods has raised significant concerns in society and academia.
Although existing approaches mainly capture face forgery patterns using image modality, other modalities like fine-grained noises and texts are not fully explored.
We propose a novel multi-modal fine-grained CLIP (MFCLIP) model, which mines comprehensive and fine-grained forgery traces across image-noise modalities.
arXiv Detail & Related papers (2024-09-15T13:08:59Z)
- Cover-separable Fixed Neural Network Steganography via Deep Generative Models [37.08937194546323]
We propose a Cover-separable Fixed Neural Network Steganography, namely Cs-FNNS.
In Cs-FNNS, we propose a Steganographic Perturbation Search (SPS) algorithm to directly encode the secret data into an imperceptible perturbation.
We demonstrate the superior performance of the proposed method in terms of visual quality and undetectability.
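For intuition, here is a minimal sketch of the general fixed-neural-network steganography idea that Cs-FNNS refines: a decoder with frozen, seed-determined random weights is shared as the key, and the sender searches for a small perturbation of the cover so that the decoder outputs the secret bits. This is a generic gradient-based search, not the paper's SPS algorithm, and all sizes and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

torch.manual_seed(42)                      # shared seed acts as the decoding key
decoder = nn.Sequential(                   # fixed random network, never trained
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
for p in decoder.parameters():
    p.requires_grad_(False)

cover = torch.rand(1, 3, 32, 32)           # stand-in cover image
secret = torch.randint(0, 2, (1, 1, 32, 32)).float()

delta = torch.zeros_like(cover, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
for step in range(500):                    # perturbation search by gradient descent
    opt.zero_grad()
    logits = decoder(cover + delta)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, secret) \
           + 0.1 * delta.abs().mean()      # keep the perturbation small / imperceptible
    loss.backward()
    opt.step()

stego = (cover + delta).clamp(0, 1)
recovered = (decoder(stego) > 0).float()
print("bit accuracy:", (recovered == secret).float().mean().item())
```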
arXiv Detail & Related papers (2024-07-16T05:47:06Z)
- DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models [38.17146643777956]
Coverless image steganography (CIS) enhances imperceptibility by not using any cover image.
Recent works have utilized text prompts as keys in CIS through diffusion models.
We propose DiffStega, an innovative training-free diffusion-based CIS strategy for universal application.
arXiv Detail & Related papers (2024-07-15T06:15:49Z)
- Latent Diffusion Models for Attribute-Preserving Image Anonymization [4.080920304681247]
This paper presents the first approach to image anonymization based on Latent Diffusion Models (LDMs).
We propose two LDMs for this purpose: CAFLaGE-Base exploits a combination of pre-trained ControlNets, and a new controlling mechanism designed to increase the distance between the real and anonymized images.
arXiv Detail & Related papers (2024-03-21T19:09:21Z)
- Privacy-Preserving Diffusion Model Using Homomorphic Encryption [5.282062491549009]
We introduce a privacy-preserving stable diffusion framework leveraging homomorphic encryption, called HE-Diffusion.
We propose a novel min-distortion method that enables efficient partial image encryption.
We successfully implement HE-based privacy-preserving stable diffusion inference.
arXiv Detail & Related papers (2024-03-09T04:56:57Z)
- Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Perfectly Secure Steganography Using Minimum Entropy Coupling [60.154855689780796]
We show that a steganography procedure is perfectly secure under Cachin (1998)'s information-theoretic model of steganography if and only if it is induced by a coupling.
We also show that, among perfectly secure procedures, a procedure maximizes information throughput if and only if it is induced by a minimum entropy coupling.
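As a concrete illustration of the coupling notion used above, the sketch below builds a low-entropy coupling of two small marginals with the well-known greedy heuristic (repeatedly pair the largest remaining probability masses); it is an approximation for intuition, not the paper's procedure, and the example distributions are made up.

```python
import numpy as np

def greedy_coupling(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Greedy low-entropy coupling: a joint table whose rows sum to p and columns sum to q."""
    p, q = p.astype(float).copy(), q.astype(float).copy()
    joint = np.zeros((p.size, q.size))
    while p.sum() > 1e-12 and q.sum() > 1e-12:
        i, j = int(np.argmax(p)), int(np.argmax(q))
        m = min(p[i], q[j])          # move as much mass as possible onto a single cell
        joint[i, j] += m
        p[i] -= m
        q[j] -= m
    return joint

def entropy(x: np.ndarray) -> float:
    x = x[x > 0]
    return float(-(x * np.log2(x)).sum())

p = np.array([0.5, 0.3, 0.2])        # e.g. a toy distribution over secret messages
q = np.array([0.6, 0.4])             # e.g. a toy covertext distribution
joint = greedy_coupling(p, q)
print(joint)                         # rows sum to p, columns sum to q
print("H(joint) =", entropy(joint.ravel()), " H(p) + H(q) =", entropy(p) + entropy(q))
```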
arXiv Detail & Related papers (2022-10-24T17:40:07Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
- On Feature Normalization and Data Augmentation [55.115583969831]
Moment Exchange is an implicit data augmentation method that encourages recognition models to also utilize the moment information of learned features.
We replace the moments of the learned features of one training image by those of another, and also interpolate the target labels.
As our approach is fast, operates entirely in feature space, and mixes different signals than prior methods, one can effectively combine it with existing augmentation approaches.
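A minimal sketch of that moment-exchange step, assuming per-channel feature mean and standard deviation as the exchanged moments and a simple interpolated cross-entropy loss (the authors' exact normalization and mixing rule may differ):

```python
import torch
import torch.nn.functional as F

def moment_exchange(h: torch.Tensor, perm: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Swap per-channel feature moments of each sample with those of a permuted partner."""
    mu = h.mean(dim=(2, 3), keepdim=True)
    sigma = h.std(dim=(2, 3), keepdim=True) + eps
    return (h - mu) / sigma * sigma[perm] + mu[perm]   # inject the partner's moments

# Usage inside a training step, on intermediate B x C x H x W features of the network.
feats = torch.randn(8, 64, 16, 16)
labels = torch.randint(0, 10, (8,))
perm = torch.randperm(feats.size(0))
lam = 0.9                                              # interpolation weight for the labels

mixed = moment_exchange(feats, perm)
head = torch.nn.Linear(64, 10)                         # stand-in for the rest of the model
logits = head(mixed.mean(dim=(2, 3)))                  # global average pool, then classify
loss = lam * F.cross_entropy(logits, labels) + (1 - lam) * F.cross_entropy(logits, labels[perm])
```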
arXiv Detail & Related papers (2020-02-25T18:59:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.