PSyDUCK: Training-Free Steganography for Latent Diffusion
- URL: http://arxiv.org/abs/2501.19172v2
- Date: Sat, 08 Mar 2025 19:32:30 GMT
- Title: PSyDUCK: Training-Free Steganography for Latent Diffusion
- Authors: Aqib Mahfuz, Georgia Channing, Mark van der Wilk, Philip Torr, Fabio Pizzati, Christian Schroeder de Witt
- Abstract summary: PSyDUCK is a training-free, model-agnostic steganography framework specifically designed for latent diffusion models. Our method dynamically adapts embedding strength to balance accuracy and detectability, significantly improving upon existing pixel-space approaches.
- Score: 22.17835886086284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in generative AI have opened promising avenues for steganography, which can securely protect sensitive information for individuals operating in hostile environments, such as journalists, activists, and whistleblowers. However, existing methods for generative steganography have significant limitations, particularly in scalability and their dependence on retraining diffusion models. We introduce PSyDUCK, a training-free, model-agnostic steganography framework specifically designed for latent diffusion models. PSyDUCK leverages controlled divergence and local mixing within the latent denoising process, enabling high-capacity, secure message embedding without compromising visual fidelity. Our method dynamically adapts embedding strength to balance accuracy and detectability, significantly improving upon existing pixel-space approaches. Crucially, PSyDUCK extends generative steganography to latent-space video diffusion models, surpassing previous methods in both encoding capacity and robustness. Extensive experiments demonstrate PSyDUCK's superiority over state-of-the-art techniques, achieving higher transmission accuracy and lower detectability rates across diverse image and video datasets. By overcoming the key challenges associated with latent diffusion model architectures, PSyDUCK sets a new standard for generative steganography, paving the way for scalable, real-world steganographic applications.
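The abstract describes embedding via controlled divergence and local mixing in the latent denoising process, with embedding strength adapted dynamically. As a rough, hedged illustration of that general idea (not the authors' actual procedure), the sketch below shows how a shared seed lets a receiver replay a reference latent trajectory and read message bits from small, localized divergences; the toy denoiser, the block layout, and the strength rule are placeholder assumptions.

```python
# Minimal, runnable sketch of latent-space steganography via a shared seed
# and localized divergence. Illustrative only: the denoiser, block layout,
# and adaptive-strength rule are assumptions, not the PSyDUCK algorithm.
import numpy as np

SHAPE = (8, 8)   # toy latent map
STEPS = 20       # toy number of denoising steps
SEED = 42        # seed shared by sender and receiver

def toy_denoise(z, steps=STEPS):
    """Stand-in for a deterministic latent denoising loop (assumption)."""
    for _ in range(steps):
        z = 0.95 * z + 0.05 * np.tanh(z)
    return z

def reference_latent():
    """Both parties regenerate the same cover latent from the shared seed."""
    z0 = np.random.default_rng(SEED).standard_normal(SHAPE)
    return toy_denoise(z0)

def embed(bits):
    """Sender: diverge one 2x2 block per bit, scaled by local activity."""
    z = reference_latent().copy()
    for i, b in enumerate(bits):
        r, c = divmod(i, 4)
        block = np.s_[2 * r:2 * r + 2, 2 * c:2 * c + 2]
        strength = 0.1 * (1.0 + np.abs(z[block]).mean())  # assumed adaptive rule
        z[block] += strength if b else -strength
    return z

def extract(z_stego, n_bits):
    """Receiver: replay the reference trajectory and compare block means."""
    z_ref = reference_latent()
    bits = []
    for i in range(n_bits):
        r, c = divmod(i, 4)
        block = np.s_[2 * r:2 * r + 2, 2 * c:2 * c + 2]
        bits.append(int(z_stego[block].mean() > z_ref[block].mean()))
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert extract(embed(message), len(message)) == message
```

In this toy setup the receiver needs only the shared seed, mirroring the training-free premise; a real system would operate on the latents of an actual diffusion model and keep the divergence small enough to survive decoding while remaining undetectable.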
Related papers
- InstaRevive: One-Step Image Enhancement via Dynamic Score Matching [66.97989469865828]
InstaRevive is an image enhancement framework that employs score-based diffusion distillation to harness potent generative capability.
Our framework delivers high-quality and visually appealing results across a diverse array of challenging tasks and datasets.
arXiv Detail & Related papers (2025-04-22T01:19:53Z) - One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step.
To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration.
Our method achieves strong performance on both full and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z) - InpDiffusion: Image Inpainting Localization via Conditional Diffusion Models [10.213390634031049]
Current IIL methods face two main challenges: a tendency towards overconfidence and difficulty in detecting subtle tampering boundaries.
We propose a new paradigm that treats IIL as a conditional mask generation task utilizing diffusion models.
Our method, InpDiffusion, utilizes the denoising process enhanced by the integration of image semantic conditions to progressively refine predictions.
arXiv Detail & Related papers (2025-01-06T07:32:12Z) - A Novel Approach to Image Steganography Using Generative Adversarial Networks [0.0]
We propose a novel approach to image steganography that leverages the power of generative adversarial networks (GANs).
By employing a carefully designed GAN architecture, our method ensures the creation of stego-images that are visually indistinguishable from their original counterparts.
Our results demonstrate significant improvements in metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and robustness against detection.
arXiv Detail & Related papers (2024-11-27T14:34:41Z) - Diffusion-Based Hierarchical Image Steganography [60.69791384893602]
Hierarchical Image Steganography is a novel method that enhances the security and capacity of embedding multiple images into a single container.
It exploits the robustness of the Diffusion Model alongside the reversibility of the Flow Model.
The innovative structure can autonomously generate a container image, thereby securely and efficiently concealing multiple images and text.
arXiv Detail & Related papers (2024-05-19T11:29:52Z) - Digging into contrastive learning for robust depth estimation with diffusion models [55.62276027922499]
We propose a novel robust depth estimation method called D4RD.
It features a custom contrastive learning mode tailored for diffusion models to mitigate performance degradation in complex environments.
In experiments, D4RD surpasses existing state-of-the-art solutions on synthetic corruption datasets and real-world weather conditions.
arXiv Detail & Related papers (2024-04-15T14:29:47Z) - Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z) - SODA: Bottleneck Diffusion Models for Representation Learning [75.7331354734152]
We introduce SODA, a self-supervised diffusion model, designed for representation learning.
The model incorporates an image encoder, which distills a source view into a compact representation, that guides the generation of related novel views.
We show that by imposing a tight bottleneck between the encoder and a denoising decoder, we can turn diffusion models into strong representation learners.
arXiv Detail & Related papers (2023-11-29T18:53:34Z) - Global Structure-Aware Diffusion Process for Low-Light Image Enhancement [64.69154776202694]
This paper studies a diffusion-based framework to address the low-light image enhancement problem.
We advocate for the regularization of its inherent ODE-trajectory.
Experimental evaluations reveal that the proposed framework attains distinguished performance in low-light enhancement.
arXiv Detail & Related papers (2023-10-26T17:01:52Z) - Unlearnable Examples for Diffusion Models: Protect Data from Unauthorized Exploitation [25.55296442023984]
We propose a method, Unlearnable Diffusion Perturbation, to safeguard images from unauthorized exploitation.
This achievement holds significant importance in real-world scenarios, as it contributes to the protection of privacy and copyright against AI-generated content.
arXiv Detail & Related papers (2023-06-02T20:19:19Z) - Low-Light Image Enhancement with Wavelet-based Diffusion Models [50.632343822790006]
Diffusion models have achieved promising results in image restoration tasks, yet they suffer from time-consuming inference, excessive computational resource consumption, and unstable restoration.
We propose a robust and efficient Diffusion-based Low-Light image enhancement approach, dubbed DiffLL.
arXiv Detail & Related papers (2023-06-01T03:08:28Z) - CRoSS: Diffusion Model Makes Controllable, Robust and Secure Image Steganography [15.705627450233504]
We propose a novel image steganography framework, named Controllable, Robust and Secure Image Steganography (CRoSS).
CRoSS has significant advantages in controllability, robustness, and security compared to cover-based image steganography methods.
arXiv Detail & Related papers (2023-05-26T13:52:57Z) - Diffusion Models as Masked Autoencoders [52.442717717898056]
We revisit generatively pre-training visual representations in light of recent interest in denoising diffusion models.
While directly pre-training with diffusion models does not produce strong representations, we condition diffusion models on masked input and formulate them as masked autoencoders (DiffMAE).
We perform a comprehensive study on the pros and cons of design choices and build connections between diffusion models and masked autoencoders.
arXiv Detail & Related papers (2023-04-06T17:59:56Z) - Towards Robust Image-in-Audio Deep Steganography [14.1081872409308]
This paper extends and enhances an existing image-in-audio deep steganography method by focusing on improving its robustness.
The proposed enhancements include modifications to the loss function, utilization of the Short-Time Fourier Transform (STFT), introduction of redundancy in the encoding process for error correction, and buffering of additional information in the pixel subconvolution operation.
arXiv Detail & Related papers (2023-03-09T03:16:04Z) - Data Forensics in Diffusion Models: A Systematic Analysis of Membership
Privacy [62.16582309504159]
We develop a systematic analysis of membership inference attacks on diffusion models and propose novel attack methods tailored to each attack scenario.
Our approach exploits easily obtainable quantities and is highly effective, achieving near-perfect attack performance (>0.9 AUCROC) in realistic scenarios.
arXiv Detail & Related papers (2023-02-15T17:37:49Z) - Latent Diffusion for Language Generation [26.620353485679892]
Recent attempts to adapt diffusion to language have presented diffusion as an alternative to existing language models.
We demonstrate that encoder-decoder language models can be utilized to efficiently learn high-quality language autoencoders.
We validate the effectiveness of our approach for unconditional, class-conditional, and sequence-to-sequence language generation.
arXiv Detail & Related papers (2022-12-19T13:57:06Z) - A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.