RoSteALS: Robust Steganography using Autoencoder Latent Space
- URL: http://arxiv.org/abs/2304.03400v1
- Date: Thu, 6 Apr 2023 22:14:26 GMT
- Title: RoSteALS: Robust Steganography using Autoencoder Latent Space
- Authors: Tu Bui, Shruti Agarwal, Ning Yu and John Collomosse
- Abstract summary: RoSteALS is a practical steganography technique leveraging frozen pretrained autoencoders to free the payload embedding from learning the distribution of cover images.
RoSteALS has a light-weight secret encoder of just 300k parameters, is easy to train, has perfect secret recovery performance and comparable image quality on three benchmarks.
- Score: 19.16770504267037
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Data hiding such as steganography and invisible watermarking has important
applications in copyright protection, privacy-preserved communication and
content provenance. Existing works often fall short in preserving image
quality, lack robustness against perturbations, or are too complex to train. We
propose RoSteALS, a practical steganography technique leveraging frozen
pretrained autoencoders to free the payload embedding from learning the
distribution of cover images. RoSteALS has a light-weight secret encoder of
just 300k parameters, is easy to train, has perfect secret recovery performance
and comparable image quality on three benchmarks. Additionally, RoSteALS can be
adapted for novel cover-less steganography applications in which the cover
image can be sampled from noise or conditioned on text prompts via a denoising
diffusion process. Our model and code are available at
\url{https://github.com/TuBui/RoSteALS}.
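The architecture the abstract describes, a frozen pretrained autoencoder plus a lightweight secret encoder that perturbs the latent, can be sketched with toy linear maps. Everything below (the shapes, the linear "autoencoder", the correlation-based extractor) is an illustrative assumption of this summary, not the paper's actual model or code; in particular, RoSteALS trains a separate extractor network, while the sketch reads the bits back non-blindly for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
IMG_DIM, LATENT_DIM, PAYLOAD_BITS = 64, 16, 8

# Stand-ins for the *frozen* pretrained autoencoder: toy linear maps, not a
# real VAE. Only their input/output roles matter for this sketch.
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)
W_dec = np.linalg.pinv(W_enc)  # stand-in decoder (pseudo-inverse)

def ae_encode(x):  # frozen: image -> latent
    return W_enc @ x

def ae_decode(z):  # frozen: latent -> image
    return W_dec @ z

# The secret encoder is the only trainable part in RoSteALS; here it is a
# fixed linear map with orthonormal columns so extraction is exact.
Q, _ = np.linalg.qr(rng.standard_normal((LATENT_DIM, PAYLOAD_BITS)))
W_sec = 0.01 * Q  # small scale keeps the latent perturbation subtle

def embed(cover, bits):
    z = ae_encode(cover)
    z_stego = z + W_sec @ (2.0 * bits - 1.0)  # payload as a latent offset
    return ae_decode(z_stego)

def extract(stego, cover):
    # Toy extractor: correlate the latent residual with the secret weights.
    offset = ae_encode(stego) - ae_encode(cover)
    return (W_sec.T @ offset > 0).astype(float)

cover = rng.standard_normal(IMG_DIM)
bits = rng.integers(0, 2, PAYLOAD_BITS).astype(float)
stego = embed(cover, bits)
recovered = extract(stego, cover)
print(f"bit accuracy: {(recovered == bits).mean():.2f}")
```

Because the payload is injected purely in latent space, the secret encoder never needs to model the cover-image distribution, which is the property the abstract highlights.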
Related papers
- DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models [38.17146643777956]
Coverless image steganography (CIS) enhances imperceptibility by not using any cover image.
Recent works have utilized text prompts as keys in CIS through diffusion models.
We propose DiffStega, an innovative training-free diffusion-based CIS strategy for universal application.
arXiv Detail & Related papers (2024-07-15T06:15:49Z)
- PPRSteg: Printing and Photography Robust QR Code Steganography via Attention Flow-Based Model [35.831644960576035]
QR Code steganography aims to embed a non-natural image into a natural image and the restored QR Code is required to be recognizable.
We propose a novel framework, called Printing and Photography Robust Steganography (PPRSteg), which is competent to hide QR Code in a host image with unperceivable changes.
arXiv Detail & Related papers (2024-05-26T03:16:40Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Recoverable Privacy-Preserving Image Classification through Noise-like Adversarial Examples [26.026171363346975]
Cloud-based image related services such as classification have become crucial.
In this study, we propose a novel privacy-preserving image classification scheme.
Encrypted images can be decrypted back into their original form with high fidelity (recoverable) using a secret key.
arXiv Detail & Related papers (2023-10-19T13:01:58Z)
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting the injected content into the protected dataset.
Specifically, we modify the protected images by adding unique contents to them using stealthy image warping functions.
By analyzing whether the model has memorized the injected content, we can detect models that have illegally utilized the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
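The core Tree-Ring mechanism, planting a ring-shaped key in the Fourier spectrum of the diffusion model's initial noise, can be illustrated with a heavily simplified toy. The radii, key value, and array sizes below are made up for illustration, and no diffusion model is involved; the sketch only shows the plant-and-detect step on a noise array.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
noise = rng.standard_normal((N, N))  # stand-in for the initial diffusion noise

# Plant a ring-shaped key in the (centered) Fourier spectrum of the noise.
spec = np.fft.fftshift(np.fft.fft2(noise))
yy, xx = np.indices((N, N)) - N // 2
radius = np.hypot(yy, xx)
ring = (radius > 8) & (radius < 10)  # made-up ring radii
KEY = 10.0                           # made-up constant key value
spec[ring] = KEY
wm_noise = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))

def ring_score(x):
    """Mean deviation from the key on the ring; near zero iff watermarked."""
    s = np.fft.fftshift(np.fft.fft2(x))
    return float(np.mean(np.abs(s[ring] - KEY)))
```

In the actual method, as described in the paper, the watermarked noise is run through the diffusion sampler, and detection first inverts a suspect image back to its initial noise (via DDIM inversion) before checking the ring, which is what makes the fingerprint survive image-space transformations.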
arXiv Detail & Related papers (2023-05-31T17:00:31Z)
- CRoSS: Diffusion Model Makes Controllable, Robust and Secure Image Steganography [15.705627450233504]
We propose a novel image steganography framework, named Controllable, Robust and Secure Image Steganography (CRoSS).
CRoSS has significant advantages in controllability, robustness, and security compared to cover-based image steganography methods.
arXiv Detail & Related papers (2023-05-26T13:52:57Z)
- Discriminative Class Tokens for Text-to-Image Diffusion Models [107.98436819341592]
We propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text.
Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images.
We evaluate our method extensively, showing that the generated images: (i) are more accurate and of higher quality than those of standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier.
arXiv Detail & Related papers (2023-03-30T05:25:20Z)
- Low-frequency Image Deep Steganography: Manipulate the Frequency Distribution to Hide Secrets with Tenacious Robustness [29.645237618793963]
Low-frequency Image Deep Steganography (LIDS) allows frequency distribution manipulation in the embedding process.
LIDS achieves improved robustness against attacks that distort the high-frequency components of container images.
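The low-frequency idea can be illustrated with a minimal single-bit sketch. This is pure NumPy, not the LIDS model: the carrier, embedding strength, and attack below are arbitrary choices made for illustration. It hides one bit in a single low-frequency cosine carrier and shows that the bit survives a blur that suppresses high frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
cover = rng.random((N, N))

# One low-frequency 2D cosine carrier (DCT-style basis, frequency index 1).
u = np.cos(np.pi * (np.arange(N) + 0.5) / N)
carrier = np.outer(u, u)
carrier /= np.linalg.norm(carrier)

def embed(img, bit, strength=0.05):
    """Hide one bit by adding or subtracting the low-frequency carrier."""
    return img + strength * (1.0 if bit else -1.0) * carrier

def extract(img, ref):
    """Read the bit back by correlating the residual with the carrier."""
    return float(np.sum((img - ref) * carrier)) > 0

stego = embed(cover, 1)

# Attack that mostly distorts high frequencies: a 3x3 mean blur.
pad = np.pad(stego, 1, mode="edge")
attacked = sum(pad[i:i + N, j:j + N] / 9.0 for i in range(3) for j in range(3))
```

The blur barely attenuates the frequency-1 carrier, so `extract(attacked, cover)` still recovers the bit, whereas a carrier placed in the highest frequencies would be wiped out, which is the robustness trade-off LIDS exploits.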
arXiv Detail & Related papers (2023-03-23T23:41:01Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.