CRoSS: Diffusion Model Makes Controllable, Robust and Secure Image
Steganography
- URL: http://arxiv.org/abs/2305.16936v1
- Date: Fri, 26 May 2023 13:52:57 GMT
- Title: CRoSS: Diffusion Model Makes Controllable, Robust and Secure Image
Steganography
- Authors: Jiwen Yu, Xuanyu Zhang, Youmin Xu, Jian Zhang
- Abstract summary: We propose a novel image steganography framework, named Controllable, Robust and Secure Image Steganography (CRoSS).
CRoSS has significant advantages in controllability, robustness, and security compared to cover-based image steganography methods.
- Score: 15.705627450233504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current image steganography techniques mainly focus on cover-based
methods, which commonly risk leaking the secret image and exhibit poor
robustness against degraded container images. Inspired by recent developments
in diffusion models, we discovered that two properties of diffusion models,
namely the ability to translate between two images without training and
robustness to noisy data, can be used to improve security and natural
robustness in image steganography tasks. For the choice of diffusion model, we
selected Stable Diffusion, a type of conditional diffusion model, and fully
utilized the latest tools from open-source communities, such as LoRAs and
ControlNets, to improve the controllability and diversity of container images.
In summary, we propose a novel image steganography framework, named
Controllable, Robust and Secure Image Steganography (CRoSS), which has
significant advantages in controllability, robustness, and security compared to
cover-based image steganography methods. These benefits are obtained without
additional training. To our knowledge, this is the first work to introduce
diffusion models to the field of image steganography. In the experimental
section, we conducted detailed experiments to demonstrate the advantages of our
proposed CRoSS framework in controllability, robustness, and security.
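The training-free security of CRoSS rests on image translation being a deterministic, keyed, (approximately) invertible operation: the secret image is inverted under a private-key condition and re-generated under a public-key condition to form a natural-looking container, and the receiver runs the two steps in reverse. The toy sketch below illustrates only that keyed-invertibility idea; a SHA-256 XOR keystream stands in for the diffusion model, the names `translate`, `hide`, and `reveal` are hypothetical, and unlike this toy (where recovery is exact on raw bytes), the actual framework operates on Stable Diffusion latents with approximate inversion.

```python
import hashlib

def keystream(key: str, n: int) -> bytes:
    """Derive n pseudo-random bytes from a text key via SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(f"{key}:{counter}".encode()).digest()
        counter += 1
    return out[:n]

def translate(data: bytes, key: str) -> bytes:
    """Keyed 'translation': XOR with the key's stream. Self-inverse, standing in
    for the deterministic invert/denoise steps of a diffusion sampler."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def hide(secret: bytes, private_key: str, public_key: str) -> bytes:
    # "Invert" the secret under the private condition, then "re-generate"
    # under the public condition to obtain the container.
    return translate(translate(secret, private_key), public_key)

def reveal(container: bytes, public_key: str, private_key: str) -> bytes:
    # Undo the public translation, then the private one, recovering the secret.
    return translate(translate(container, public_key), private_key)

secret = b"example secret pixels"
container = hide(secret, "a photo of a cat", "a photo of a dog")
assert reveal(container, "a photo of a dog", "a photo of a cat") == secret
assert container != secret  # the container does not expose the secret bytes
```

Without the private key, the container carries no visible trace of the secret; a receiver holding both keys recovers it exactly in this toy, mirroring how CRoSS uses prompt pairs as keys rather than embedding data into a fixed cover image.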
Related papers
- SteerDiff: Steering towards Safe Text-to-Image Diffusion Models [5.781285400461636]
Text-to-image (T2I) diffusion models can be misused to produce inappropriate content.
We introduce SteerDiff, a lightweight adaptor module designed to act as an intermediary between user input and the diffusion model.
We conduct extensive experiments across various concept unlearning tasks to evaluate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-03T17:34:55Z)
- StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z)
- JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits [76.25962336540226]
JIGMARK is a first-of-its-kind watermarking technique that enhances robustness through contrastive learning.
Our evaluation reveals that JIGMARK significantly surpasses existing watermarking solutions in resilience to diffusion-model edits.
arXiv Detail & Related papers (2024-06-06T03:31:41Z)
- Diffusion-Based Hierarchical Image Steganography [60.69791384893602]
Hierarchical Image Steganography is a novel method that enhances the security and capacity of embedding multiple images into a single container.
It exploits the robustness of the Diffusion Model alongside the reversibility of the Flow Model.
The innovative structure can autonomously generate a container image, thereby securely and efficiently concealing multiple images and text.
arXiv Detail & Related papers (2024-05-19T11:29:52Z)
- Text-to-Image Diffusion Models are Great Sketch-Photo Matchmakers [120.49126407479717]
This paper explores text-to-image diffusion models for Zero-Shot Sketch-based Image Retrieval (ZS-SBIR).
We highlight a pivotal discovery: the capacity of text-to-image diffusion models to seamlessly bridge the gap between sketches and photos.
arXiv Detail & Related papers (2024-03-12T00:02:03Z)
- DiffDis: Empowering Generative Diffusion Model with Cross-Modal Discrimination Capability [75.9781362556431]
We propose DiffDis to unify the cross-modal generative and discriminative pretraining into one single framework under the diffusion process.
We show that DiffDis outperforms single-task models on both the image generation and the image-text discriminative tasks.
arXiv Detail & Related papers (2023-08-18T05:03:48Z)
- Exposing the Fake: Effective Diffusion-Generated Images Detection [14.646957596560076]
This paper proposes a novel detection method called Stepwise Error for Diffusion-generated Image Detection (SeDID)
SeDID exploits the unique attributes of diffusion models, namely deterministic reverse and deterministic denoising errors.
Our work makes a pivotal contribution to distinguishing diffusion model-generated images, marking a significant step in the domain of artificial intelligence security.
arXiv Detail & Related papers (2023-07-12T16:16:37Z)
- Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation [25.55296442023984]
We propose a method, Unlearnable Diffusion Perturbation, to safeguard images from unauthorized exploitation.
This achievement holds significant importance in real-world scenarios, as it contributes to the protection of privacy and copyright against AI-generated content.
arXiv Detail & Related papers (2023-06-02T20:19:19Z)
- SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides (including all listed papers) and is not responsible for any consequences of its use.