Hiding Images in Deep Probabilistic Models
- URL: http://arxiv.org/abs/2210.02257v1
- Date: Wed, 5 Oct 2022 13:33:25 GMT
- Title: Hiding Images in Deep Probabilistic Models
- Authors: Haoyu Chen, Linqi Song, Zhenxing Qian, Xinpeng Zhang, Kede Ma
- Abstract summary: We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
- Score: 58.23127414572098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data hiding with deep neural networks (DNNs) has experienced impressive
successes in recent years. A prevailing scheme is to train an autoencoder,
consisting of an encoding network to embed (or transform) secret messages in
(or into) a carrier, and a decoding network to extract the hidden messages.
This scheme may suffer from several limitations regarding practicability,
security, and embedding capacity. In this work, we describe a different
computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images,
and hide a secret image in one particular location of the learned distribution.
As an instantiation, we adopt a SinGAN, a pyramid of generative adversarial
networks (GANs), to learn the patch distribution of one cover image. We hide
the secret image by fitting a deterministic mapping from a fixed set of noise
maps (generated by an embedding key) to the secret image during patch
distribution learning. The stego SinGAN, which behaves like the original SinGAN, is
publicly communicated; only the receiver with the embedding key is able to
extract the secret image. We demonstrate the feasibility of our SinGAN approach
in terms of extraction accuracy and model security. Moreover, we show the
flexibility of the proposed method in terms of hiding multiple images for
different receivers and obfuscating the secret image.
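To make the hiding mechanism concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: a toy single-scale generator stands in for the SinGAN pyramid, an MSE term stands in for the adversarial patch-distribution loss, and the helper `key_noise` (a hypothetical name) derives the fixed noise maps from the embedding key. Training adds a hiding loss that forces the key-seeded noise to map to the secret image, so only a receiver who knows the key can regenerate that noise and extract the secret from the published model.

```python
# Hedged sketch (not the paper's code): hide a secret image in a generative model by
# tying key-seeded noise maps to the secret image during training.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for one SinGAN scale: maps a noise map to an image."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

def key_noise(key: int, shape):
    """Deterministic noise map derived from the embedding key (assumed construction)."""
    g = torch.Generator().manual_seed(key)
    return torch.randn(*shape, generator=g)

# Assumed placeholder tensors: one cover image and one secret image in [-1, 1].
cover = torch.rand(1, 3, 64, 64) * 2 - 1
secret = torch.rand(1, 3, 64, 64) * 2 - 1

gen = ToyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
z_secret = key_noise(key=12345, shape=cover.shape)  # fixed, key-derived noise maps

for step in range(200):
    opt.zero_grad()
    # Surrogate for the patch-distribution objective: random noise should map to
    # something close to the cover (the paper uses an adversarial loss here).
    z_random = torch.randn_like(cover)
    cover_loss = nn.functional.mse_loss(gen(z_random), cover)
    # Hiding loss: the fixed, key-seeded noise must map to the secret image.
    hide_loss = nn.functional.mse_loss(gen(z_secret), secret)
    (cover_loss + hide_loss).backward()
    opt.step()

# Extraction: only a key holder can regenerate z_secret and recover an
# approximation of the secret image from the publicly shared model.
recovered = gen(key_noise(key=12345, shape=cover.shape))
print("recovery MSE:", nn.functional.mse_loss(recovered, secret).item())
```

The design point this illustrates is that the published model looks and behaves like an ordinary generative model; the secret is addressable only through the key-determined location in its input space.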
Related papers
- StegaINR4MIH: steganography by implicit neural representation for multi-image hiding [6.29495604869364]
Multi-image hiding, which embeds multiple secret images into a cover image, has gradually become a research hotspot in the field of image steganography.
We propose StegaINR4MIH, a novel implicit neural representation steganography framework that enables the hiding of multiple images within a single implicit representation function.
arXiv Detail & Related papers (2024-10-14T03:09:41Z)
- Cover-separable Fixed Neural Network Steganography via Deep Generative Models [37.08937194546323]
We propose Cover-separable Fixed Neural Network Steganography (Cs-FNNS).
In Cs-FNNS, we propose a Steganographic Perturbation Search (SPS) algorithm to directly encode the secret data into an imperceptible perturbation.
We demonstrate the superior performance of the proposed method in terms of visual quality and undetectability.
arXiv Detail & Related papers (2024-07-16T05:47:06Z)
- Double-Flow-based Steganography without Embedding for Image-to-Image Hiding [14.024920153517174]
Steganography without embedding (SWE) hides a secret message without directly embedding it into a cover.
SWE has the unique advantage of being immune to typical steganalysis methods and can better protect the secret message from being exposed.
Existing SWE methods are generally criticized for their poor payload capacity and low fidelity of recovered secret messages.
arXiv Detail & Related papers (2023-11-25T13:44:37Z)
- Securing Fixed Neural Network Steganography [37.08937194546323]
Image steganography is the art of concealing secret information in images in a way that is imperceptible to unauthorized parties.
Recent advances show that it is possible to use a fixed neural network (FNN) for secret embedding and extraction.
We propose a key-based FNNS scheme to improve the security of FNNS.
arXiv Detail & Related papers (2023-09-18T12:07:37Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict arises for software engineers between developing better AI systems and keeping a distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Extracting Semantic Knowledge from GANs with Unsupervised Learning [65.32631025780631]
Generative Adversarial Networks (GANs) encode semantics in feature maps in a linearly separable form.
We propose a novel clustering algorithm, named KLiSH, which leverages the linear separability to cluster GAN's features.
KLiSH succeeds in extracting fine-grained semantics of GANs trained on datasets of various objects.
arXiv Detail & Related papers (2022-11-30T03:18:16Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantic-aware in order to synthesize plausible images.
We show that our method is also applicable to text-to-image generation by leveraging image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Generative Steganography Network [37.182458848616754]
We propose an advanced generative steganography network (GSN) that can generate realistic stego images without using cover images.
A module named secret block is delicately designed to conceal secret data in the feature maps during image generation.
arXiv Detail & Related papers (2022-07-28T03:34:37Z) - Syfer: Neural Obfuscation for Private Data Release [58.490998583666276]
We develop Syfer, a neural obfuscation method to protect against re-identification attacks.
Syfer composes trained layers with random neural networks to encode the original data.
It maintains the ability to predict diagnoses from the encoded data.
arXiv Detail & Related papers (2022-01-28T20:32:04Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)