Generative Steganography Diffusion
- URL: http://arxiv.org/abs/2305.03472v2
- Date: Wed, 6 Sep 2023 16:14:44 GMT
- Title: Generative Steganography Diffusion
- Authors: Ping Wei, Qing Zhou, Zichi Wang, Zhenxing Qian, Xinpeng Zhang, Sheng
Li
- Abstract summary: Generative steganography (GS) is an emerging technique that generates stego images directly from secret data.
Existing GS methods cannot completely recover the hidden secret data due to the lack of network invertibility.
We propose a novel scheme called "Generative Steganography Diffusion" (GSD) by devising an invertible diffusion model named "StegoDiffusion".
- Score: 42.60159212701425
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative steganography (GS) is an emerging technique that generates stego
images directly from secret data. Various GS methods based on GANs or Flow have
been developed recently. However, existing GAN-based GS methods cannot
completely recover the hidden secret data due to the lack of network
invertibility, while Flow-based methods produce poor image quality due to the
stringent reversibility restriction in each module. To address this issue, we
propose a novel GS scheme called "Generative Steganography Diffusion" (GSD) by
devising an invertible diffusion model named "StegoDiffusion". It not only
generates realistic stego images but also allows for 100% recovery of the
hidden secret data. The proposed StegoDiffusion model leverages a non-Markov
chain with a fast sampling technique to achieve efficient stego image
generation. By constructing an ordinary differential equation (ODE) based on
the transition probability of the generation process in StegoDiffusion, secret
data and stego images can be converted into each other through an approximate ODE
solver, the Euler iteration formula, enabling the use of irreversible but more
expressive network structures to achieve model invertibility. Our proposed
GSD has the advantages of both reversibility and high performance,
significantly outperforming existing GS methods in all metrics.
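The abstract ships no code, but the mechanism it describes is close to DDIM-style probability-flow sampling. The sketch below is a minimal, hypothetical PyTorch illustration of that idea: one shared Euler-style update is run forward to turn a bit-derived "noise" tensor into a stego image, and backward to recover it. The noise predictor `eps_model(x, t)`, the schedule `alpha_bar`, the step counts, and the naive bit-to-noise mapping are all assumptions, not the paper's released implementation.

```python
import torch

@torch.no_grad()
def ddim_step(x, t_from, t_to, eps_model, alpha_bar):
    """One deterministic Euler-style update of the sampling ODE from t_from to t_to."""
    a_from, a_to = alpha_bar[t_from], alpha_bar[t_to]
    eps = eps_model(x, t_from)                            # predicted noise at t_from
    x0 = (x - (1 - a_from).sqrt() * eps) / a_from.sqrt()  # implied clean image
    return a_to.sqrt() * x0 + (1 - a_to).sqrt() * eps     # move to timestep t_to

@torch.no_grad()
def bits_to_stego(bits, eps_model, alpha_bar, steps=50):
    """Embedding: treat the secret bits as the terminal 'noise' and integrate the ODE to an image."""
    x = bits.float() * 2.0 - 1.0                          # naive bit -> noise mapping (assumption);
                                                          # assumes bits already have the image shape
    ts = torch.linspace(len(alpha_bar) - 1, 0, steps + 1).long()
    for t_from, t_to in zip(ts[:-1], ts[1:]):
        x = ddim_step(x, t_from, t_to, eps_model, alpha_bar)
    return x                                              # stego image

@torch.no_grad()
def stego_to_bits(stego, eps_model, alpha_bar, steps=50):
    """Extraction: run the same updates with the timestep order reversed to recover the 'noise'."""
    x = stego
    ts = torch.linspace(0, len(alpha_bar) - 1, steps + 1).long()
    for t_from, t_to in zip(ts[:-1], ts[1:]):
        x = ddim_step(x, t_from, t_to, eps_model, alpha_bar)
    return (x > 0).long()                                 # undo the naive mapping
```

Because embedding and extraction apply the same deterministic update with the timestep order reversed, the embedded tensor is recovered up to discretization and rounding error, which is the kind of invertibility the abstract attributes to StegoDiffusion.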
Related papers
- Time Step Generating: A Universal Synthesized Deepfake Image Detector [0.4488895231267077]
We propose a universal synthetic image detector, Time Step Generating (TSG).
TSG does not rely on pre-trained models' reconstructing ability, specific datasets, or sampling algorithms.
We test the proposed TSG on the large-scale GenImage benchmark and it achieves significant improvements in both accuracy and generalizability.
arXiv Detail & Related papers (2024-11-17T09:39:50Z)
- Plug-and-Hide: Provable and Adjustable Diffusion Generative Steganography [40.357567971092564]
Generative Steganography (GS) is a technique that utilizes generative models to conceal messages without relying on cover images.
GS algorithms leverage the powerful generative capabilities of Diffusion Models (DMs) to create high-fidelity stego images.
In this paper, we rethink the trade-off among image quality, steganographic security, and message extraction accuracy within Diffusion Generative Steganography (DGS) settings.
arXiv Detail & Related papers (2024-09-07T18:06:47Z)
- Few-Shot Image Generation by Conditional Relaxing Diffusion Inversion [37.18537753482751]
Conditional Diffusion Relaxing Inversion (CRDI) is designed to enhance distribution diversity in synthetic image generation.
CRDI does not rely on fine-tuning based on only a few samples.
It focuses on reconstructing each target image instance and expanding diversity through few-shot learning.
arXiv Detail & Related papers (2024-07-09T21:58:26Z)
- Glauber Generative Model: Discrete Diffusion Models via Binary Classification [21.816933208895843]
We introduce the Glauber Generative Model (GGM), a new class of discrete diffusion models.
GGM deploys a Markov chain to denoise a sequence of noisy tokens to a sample from a joint distribution of discrete tokens.
We show that it outperforms existing discrete diffusion models in language generation and image generation.
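As background only, the snippet below shows what a generic Glauber (heat-bath) update over binary tokens looks like; it is not the GGM algorithm from the paper, and `cond_prob(x, i)`, standing in for a learned conditional such as a binary classifier, is an assumption.

```python
import torch

@torch.no_grad()
def glauber_sweep(x, cond_prob, n_steps=1000):
    """Resample one randomly chosen position at a time from its conditional distribution."""
    seq_len = x.shape[-1]
    for _ in range(n_steps):
        i = torch.randint(seq_len, (1,)).item()   # pick a position uniformly at random
        p1 = cond_prob(x, i)                      # tensor of P(x_i = 1 | all other tokens)
        x[..., i] = torch.bernoulli(p1).long()    # heat-bath / Glauber update of that position
    return x
```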
arXiv Detail & Related papers (2024-05-27T10:42:13Z)
- Generalized Consistency Trajectory Models for Image Manipulation [59.576781858809355]
Diffusion models (DMs) excel in unconditional generation, as well as on applications such as image editing and restoration.
This work aims to unlock the full potential of consistency trajectory models (CTMs) by proposing generalized CTMs (GCTMs).
We discuss the design space of GCTMs and demonstrate their efficacy in various image manipulation tasks such as image-to-image translation, restoration, and editing.
arXiv Detail & Related papers (2024-03-19T07:24:54Z)
- In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder and domain-regularized optimization to keep the inverted code in the native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
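A minimal sketch of the "domain-guided encoder plus domain-regularized optimization" recipe follows, assuming a pre-trained generator `G`, an encoder `E`, and illustrative loss weights; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def invert(image, G, E, steps=200, lr=0.01, lam=1.0):
    """Optimize a latent code, starting from the encoder's guess and regularized to stay in-domain."""
    z = E(image).detach().clone().requires_grad_(True)   # domain-guided initialization
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        recon = G(z)
        loss_rec = F.mse_loss(recon, image)               # reconstruction term
        loss_reg = F.mse_loss(E(recon), z)                # domain regularizer: code stays consistent with the encoder
        loss = loss_rec + lam * loss_reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```

The `E(recon)` term is what keeps the optimized code consistent with codes the encoder itself would produce, i.e. inside the generator's native latent domain.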
arXiv Detail & Related papers (2023-09-25T08:42:06Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Generative Steganographic Flow [39.64952038237487]
Generative steganography (GS) is a new data hiding approach, featuring direct generation of stego media from secret data.
Existing GS methods are generally criticized for their poor performances.
We propose a novel flow-based GS approach, Generative Steganographic Flow (GSF).
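The general flow-based GS recipe can be sketched as follows, assuming an invertible `flow` object with exact `forward`/`inverse` passes (e.g. a Glow-style model) and a deliberately crude sign-based bit-to-latent mapping; GSF's actual mapping and architecture are not reproduced here.

```python
import torch

def bits_to_latent(bits, scale=2.0):
    """Map each bit to one latent entry whose sign carries the bit (a crude stand-in mapping)."""
    u = torch.rand(bits.shape) * scale               # magnitude in (0, scale)
    return torch.where(bits.bool(), u, -u)

def embed(bits, flow):
    z = bits_to_latent(bits)
    return flow.forward(z)                           # invertible flow: latent -> stego image

def extract(stego, flow):
    z = flow.inverse(stego)                          # exact inverse pass of the flow
    return (z > 0).long()                            # recover the bits from the sign
```

Because the flow is bijective, `extract(embed(bits, flow), flow)` returns the original bits as long as the latent signs survive numerical error, which is why flow-based GS can promise exact recovery at the cost of the reversibility constraints mentioned in the GSD abstract.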
arXiv Detail & Related papers (2023-05-10T02:02:20Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
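A minimal sketch of such a variance regularizer on an autoencoder's low-dimensional codes is shown below, with `encoder`, `decoder`, `target_var`, and the weight `lam` as illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def variance_regularized_ae_loss(batch, encoder, decoder, target_var=1.0, lam=0.1):
    """Autoencoder loss with a penalty that keeps the per-dimension code variance near a target."""
    z = encoder(batch)                                # low-dimensional codes, shape (B, d)
    recon = decoder(z)
    loss_rec = F.mse_loss(recon, batch)               # reconstruction term
    var = z.var(dim=0, unbiased=False)                # per-dimension variance across the batch
    loss_var = ((var - target_var) ** 2).mean()       # statistical (variance) regularizer
    return loss_rec + lam * loss_var
```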
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion [90.65667807498086]
This paper presents a zero-shot direct model inversion framework that recovers the input to the trained model given only the internal representation.
We empirically show that modern classification models on ImageNet can, surprisingly, be inverted, allowing an approximate recovery of the original 224x224px images from a representation after more than 20 layers.
arXiv Detail & Related papers (2021-07-13T18:01:43Z)
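The paper describes a direct, zero-shot inversion framework; as a point of reference only, the sketch below shows the simplest gradient-based baseline for the same task of recovering an input from an internal representation. `feature_fn` (the network truncated at some layer) and all hyperparameters are assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F

def invert_representation(target_feat, feature_fn, shape=(1, 3, 224, 224),
                          steps=500, lr=0.1):
    """Optimize a random input until its internal representation matches the target."""
    x = torch.randn(shape, requires_grad=True)         # start from noise
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(feature_fn(x), target_feat)  # match the given internal representation
        opt.zero_grad()
        loss.backward()
        opt.step()
    return x.detach()                                  # approximate reconstruction of the input
```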
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.