RoSteALS: Robust Steganography using Autoencoder Latent Space
- URL: http://arxiv.org/abs/2304.03400v1
- Date: Thu, 6 Apr 2023 22:14:26 GMT
- Title: RoSteALS: Robust Steganography using Autoencoder Latent Space
- Authors: Tu Bui, Shruti Agarwal, Ning Yu and John Collomosse
- Abstract summary: RoSteALS is a practical steganography technique leveraging frozen pretrained autoencoders to free the payload embedding from learning the distribution of cover images.
RoSteALS has a light-weight secret encoder of just 300k parameters, is easy to train, and achieves perfect secret recovery with comparable image quality on three benchmarks.
- Score: 19.16770504267037
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Data hiding such as steganography and invisible watermarking has important
applications in copyright protection, privacy-preserving communication and
content provenance. Existing works often fall short in preserving image
quality or robustness against perturbations, or are too complex to train. We
propose RoSteALS, a practical steganography technique leveraging frozen
pretrained autoencoders to free the payload embedding from learning the
distribution of cover images. RoSteALS has a light-weight secret encoder of
just 300k parameters, is easy to train, has perfect secret recovery performance
and comparable image quality on three benchmarks. Additionally, RoSteALS can be
adapted for novel cover-less steganography applications in which the cover
image can be sampled from noise or conditioned on text prompts via a denoising
diffusion process. Our model and code are available at
\url{https://github.com/TuBui/RoSteALS}.
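The core idea, a frozen pretrained autoencoder whose latent is offset by a lightweight secret encoder, can be sketched with a toy linear autoencoder. Everything below (the linear encoder/decoder pair, the fixed orthonormal secret directions, the embedding strength) is an illustrative stand-in, not the paper's actual networks, which use a real pretrained autoencoder and a learned secret encoder/decoder:

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM, LATENT_DIM, N_BITS = 64, 16, 8
ALPHA = 8.0  # embedding strength (illustrative choice)

# Stand-in for a frozen pretrained autoencoder: a random linear "encoder"
# and its pseudoinverse as the "decoder". RoSteALS keeps a real
# pretrained autoencoder frozen; this toy pair just mimics that role.
A = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.1
A_pinv = np.linalg.pinv(A)

def encode(x):
    return A @ x

def decode(z):
    return A_pinv @ z

# Toy "secret encoder": maps payload bits to a latent offset along fixed
# random orthonormal directions (RoSteALS learns this mapping instead).
F, _ = np.linalg.qr(rng.standard_normal((LATENT_DIM, N_BITS)))

def embed(cover, bits):
    signs = 2.0 * np.asarray(bits) - 1.0
    z_stego = encode(cover) + ALPHA * (F @ signs)  # offset the cover latent
    return decode(z_stego)                         # stego "image"

def extract(stego):
    # Project the stego latent onto the secret directions and threshold.
    coeffs = F.T @ encode(stego)
    return (coeffs > 0).astype(int)

cover = rng.standard_normal(IMG_DIM)
payload = rng.integers(0, 2, N_BITS)
stego = embed(cover, payload)
recovered = extract(stego)
```

Because the payload lives entirely in the latent offset, the embedding never needs to model the cover-image distribution; the frozen autoencoder handles reconstruction, which is the separation the abstract describes.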
Related papers
- TextDestroyer: A Training- and Annotation-Free Diffusion Method for Destroying Anomal Text from Images [84.08181780666698]
TextDestroyer is the first training- and annotation-free method for scene text destruction.
Our method scrambles text areas in the latent start code using a Gaussian distribution before reconstruction.
The advantages of TextDestroyer include: (1) it eliminates labor-intensive data annotation and resource-intensive training; (2) it achieves more thorough text destruction, preventing recognizable traces; and (3) it demonstrates better generalization capabilities, performing well on both real-world scenes and generated images.
arXiv Detail & Related papers (2024-11-01T04:41:00Z)
- Provably Robust and Secure Steganography in Asymmetric Resource Scenario [30.12327233257552]
Current provably secure steganography approaches require a pair of encoder and decoder to hide and extract private messages.
This paper proposes a novel provably robust and secure steganography framework for the asymmetric resource setting.
arXiv Detail & Related papers (2024-07-18T13:32:00Z)
- DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models [38.17146643777956]
Coverless image steganography (CIS) enhances imperceptibility by not using any cover image.
Recent works have utilized text prompts as keys in CIS through diffusion models.
We propose DiffStega, an innovative training-free diffusion-based CIS strategy for universal application.
arXiv Detail & Related papers (2024-07-15T06:15:49Z)
- Robust Message Embedding via Attention Flow-Based Steganography [34.35209322360329]
Image steganography can hide information in a host image and obtain a stego image that is perceptually indistinguishable from the original one.
We propose a novel message embedding framework, called Robust Message Steganography (RMSteg), which is capable of hiding a message in a host image via QR code.
arXiv Detail & Related papers (2024-05-26T03:16:40Z)
- Recoverable Privacy-Preserving Image Classification through Noise-like Adversarial Examples [26.026171363346975]
Cloud-based image-related services such as classification have become crucial.
In this study, we propose a novel privacy-preserving image classification scheme.
Encrypted images can be decrypted back into their original form with high fidelity (recoverable) using a secret key.
arXiv Detail & Related papers (2023-10-19T13:01:58Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting the injected content into the protected dataset.
Specifically, we modify the protected images by adding unique contents on these images using stealthy image warping functions.
By analyzing whether the model has memorized the injected content, we can detect models that have illegally used the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
arXiv Detail & Related papers (2023-05-31T17:00:31Z)
- Discriminative Class Tokens for Text-to-Image Diffusion Models [107.98436819341592]
We propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text.
Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images.
We evaluate our method extensively, showing that the generated images are (i) more accurate and of higher quality than those of standard diffusion models, (ii) usable to augment training data in a low-resource setting, and (iii) revealing of information about the data used to train the guiding classifier.
arXiv Detail & Related papers (2023-03-30T05:25:20Z)
- Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness [29.645237618793963]
Low-frequency Image Deep Steganography (LIDS) allows frequency distribution manipulation in the embedding process.
LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.
arXiv Detail & Related papers (2023-03-23T23:41:01Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.