PPRSteg: Printing and Photography Robust QR Code Steganography via Attention Flow-Based Model
- URL: http://arxiv.org/abs/2405.16414v1
- Date: Sun, 26 May 2024 03:16:40 GMT
- Title: PPRSteg: Printing and Photography Robust QR Code Steganography via Attention Flow-Based Model
- Authors: Huayuan Ye, Shenzhuo Zhang, Shiqi Jiang, Jing Liao, Shuhang Gu, Changbo Wang, Chenhui Li
- Abstract summary: QR Code steganography aims to embed a non-natural image into a natural image, and the restored QR Code must remain recognizable.
We propose a novel framework, called Printing and Photography Robust Steganography (PPRSteg), that can hide a QR Code in a host image with imperceptible changes.
- Score: 35.831644960576035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image steganography can hide information in a host image and obtain a stego image that is perceptually indistinguishable from the original one. This technique has tremendous potential in scenarios like copyright protection, information retrospection, etc. Some previous studies have proposed to enhance the robustness of the methods against image disturbances to increase their applicability. However, they generally cannot achieve a satisfactory balance between steganography quality and robustness. In this paper, we focus on the issue of QR Code steganography that is robust to real-world printing and photography. Different from common image steganography, QR Code steganography aims to embed a non-natural image into a natural image, and the restored QR Code is required to be recognizable, which increases the difficulty of concealing and revealing the data. Inspired by recent developments in transformer-based vision models, we discover that the tokenized representation of images is naturally suitable for steganography. In this paper, we propose a novel QR Code embedding framework, called Printing and Photography Robust Steganography (PPRSteg), which can hide a QR Code in a host image with imperceptible changes and can restore it even if the stego image is printed out and photographed. We outline a transition process to reduce the artifacts that QR Codes introduce into stego images. We also propose a steganography model based on normalizing flow, which incorporates the attention mechanism to enhance its performance. To the best of our knowledge, this is the first work that integrates the advantages of transformer models into normalizing flow. We conduct comprehensive and detailed experiments to demonstrate the effectiveness of our method, and the results show that PPRSteg has great potential in robust, secure and high-quality QR Code steganography.
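The key property of the normalizing-flow design described in the abstract is invertibility: the same transform that conceals the secret can be run backwards to reveal it. The toy sketch below (not the authors' model; `mix` is an arbitrary illustrative coefficient, and real schemes operate on image tensors with learned coupling functions rather than scalar mixing) shows an additive coupling step, the basic building block of normalizing flows, hiding a secret signal in a host signal and recovering both exactly by inverting the same arithmetic:

```python
# Toy additive coupling, the building block of normalizing flows.
# Forward pass: produce a stego signal and an auxiliary latent z.
# Inverse pass: recover host and secret exactly from (stego, z).

def conceal(host, secret, mix=0.5):
    """Forward coupling: stego absorbs a scaled copy of the secret,
    then the secret branch is shifted by a function of the stego."""
    stego = [h + mix * s for h, s in zip(host, secret)]
    z = [s - mix * t for s, t in zip(secret, stego)]
    return stego, z

def reveal(stego, z, mix=0.5):
    """Inverse coupling: undo the two shifts in reverse order."""
    secret = [zi + mix * t for zi, t in zip(z, stego)]
    host = [t - mix * s for t, s in zip(stego, secret)]
    return host, secret

host = [10.0, 20.0, 30.0]
secret = [4.0, 8.0, 12.0]
stego, z = conceal(host, secret)
host2, secret2 = reveal(stego, z)
assert host2 == host and secret2 == secret  # exact round trip
```

In a practical flow-based steganography system only the stego image is transmitted, so the latent branch `z` must be approximated (e.g., sampled) at reveal time; making the revealed QR Code survive that approximation, plus printing and photography distortions, is what the learned model and training objective address.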
Related papers
- Provably Secure Robust Image Steganography via Cross-Modal Error Correction [23.087977275900396]
We propose a high-quality, provably secure, and robust image steganography method based on state-of-the-art autoregressive (AR) image generation models.
We employ a cross-modal error-correction framework that generates stego text from stego images to aid in restoring lossy images.
arXiv Detail & Related papers (2024-12-15T16:10:10Z)
- DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models [38.17146643777956]
Coverless image steganography (CIS) enhances imperceptibility by not using any cover image.
Recent works have utilized text prompts as keys in CIS through diffusion models.
We propose DiffStega, an innovative training-free diffusion-based CIS strategy for universal application.
arXiv Detail & Related papers (2024-07-15T06:15:49Z)
- Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z)
- T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text to image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic feature and the watermark signal to be compatible in pixels.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z)
- Text-to-image Editing by Image Information Removal [19.464349486031566]
We propose a text-to-image editing model with an Image Information Removal module (IIR) that selectively erases color-related and texture-related information from the original image.
Our experiments on CUB, Outdoor Scenes, and COCO show that our edited images are preferred 35% more often than those of prior work.
arXiv Detail & Related papers (2023-05-27T14:48:05Z)
- Image Steganography based on Style Transfer [12.756859984638961]
We propose an image steganography network based on style transfer.
We embed secret information while transforming the content image style.
In latent space, the secret information is integrated into the latent representation of the cover image to generate the stego images.
arXiv Detail & Related papers (2022-03-09T02:58:29Z)
- News Image Steganography: A Novel Architecture Facilitates the Fake News Identification [52.83247667841588]
A large portion of fake news quotes untampered images from other sources with ulterior motives.
This paper proposes an architecture named News Image Steganography to reveal the inconsistency through image steganography based on GAN.
arXiv Detail & Related papers (2021-01-03T11:12:23Z)
- PerceptionGAN: Real-world Image Construction from Provided Text through Perceptual Understanding [11.985768957782641]
We propose a method to generate higher-quality images by incorporating perceptual understanding into the discriminator module.
We show that the perceptual information included in the initial image is improved while modeling image distribution at multiple stages.
More importantly, the proposed method can be integrated into the pipeline of other state-of-the-art text-based-image-generation models.
arXiv Detail & Related papers (2020-07-02T09:23:08Z)
- Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
- Learning Transformation-Aware Embeddings for Image Forensics [15.484408315588569]
Image Provenance Analysis aims at discovering relationships among different manipulated image versions that share content.
One of the main sub-problems for provenance analysis that has not yet been addressed directly is the edit ordering of images that share full content or are near-duplicates.
This paper introduces a novel deep learning-based approach to provide a plausible ordering to images that have been generated from a single image through transformations.
arXiv Detail & Related papers (2020-01-13T22:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.