An End-to-end Method for Producing Scanning-robust Stylized QR Codes
- URL: http://arxiv.org/abs/2011.07815v1
- Date: Mon, 16 Nov 2020 09:38:27 GMT
- Authors: Hao Su, Jianwei Niu, Xuefeng Liu, Qingfeng Li, Ji Wan, Mingliang Xu,
Tao Ren
- Abstract summary: We propose a novel end-to-end method, named ArtCoder, to generate stylized QR codes.
The experimental results show that our stylized QR codes have high-quality in both the visual effect and the scanning-robustness.
- Score: 45.35370585928748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quick Response (QR) code is one of the most widely used
two-dimensional codes worldwide. Traditional QR codes appear as random
collections of black-and-white modules that lack visual semantics and
aesthetic elements, which has inspired recent work to beautify the
appearance of QR codes. However, these works adopt fixed generation
algorithms and can therefore only generate QR codes with a pre-defined
style. In this paper, combining the Neural Style Transfer technique, we
propose a novel end-to-end method, named ArtCoder, to generate stylized
QR codes that are personalized, diverse, attractive, and scanning-robust.
To guarantee that the generated stylized QR codes remain scanning-robust,
we propose a Sampling-Simulation layer, a module-based code loss, and a
competition mechanism. The experimental results show that our stylized QR
codes achieve high quality in both visual effect and scanning robustness,
and that they support real-world applications.
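The abstract names a module-based code loss as one of the scanning-robustness components. A QR scanner decides each module's bit by sampling near the module's center, so a natural loss penalizes modules whose center region falls on the wrong side of the black/white threshold. The sketch below is an illustrative approximation of that idea, not the authors' exact formulation; the function name, the 0.5 threshold, and the `center_frac` parameter are assumptions for illustration:

```python
import numpy as np

def module_code_loss(stylized, target_modules, module_size=16, center_frac=0.5):
    """Illustrative sketch of a module-based code loss (not ArtCoder's exact form).

    stylized:       (n*module_size, n*module_size) grayscale image in [0, 1]
    target_modules: (n, n) binary matrix of target QR modules (1 = white, 0 = black)

    For each module, sample the mean gray value of its center region
    (mimicking how a scanner samples module centers) and penalize it only
    when it falls on the wrong side of the 0.5 decision threshold.
    """
    n = target_modules.shape[0]
    lo = int(module_size * (1 - center_frac) / 2)
    hi = module_size - lo
    loss = 0.0
    for i in range(n):
        for j in range(n):
            patch = stylized[i * module_size:(i + 1) * module_size,
                             j * module_size:(j + 1) * module_size]
            center_mean = patch[lo:hi, lo:hi].mean()
            if target_modules[i, j] == 1:
                # white module: error only if sampled value is darker than 0.5
                err = max(0.0, 0.5 - center_mean)
            else:
                # black module: error only if sampled value is lighter than 0.5
                err = max(0.0, center_mean - 0.5)
            loss += err ** 2
    return loss / (n * n)
```

Because only module centers are penalized, the style texture is free to vary elsewhere in each module, which is what lets a stylized code stay decodable.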
Related papers
- DiffQRCoder: Diffusion-based Aesthetic QR Code Generation with Scanning Robustness Guided Iterative Refinement [9.43230708612551]
We propose a novel Diffusion-based QR Code generator (DiffQRCoder) to craft both scannable and visually pleasing QR codes.
The proposed approach introduces Scanning-Robust Perceptual Guidance (SRPG), a new guidance mechanism for diffusion models.
Our approach robustly achieves over 95% SSR, demonstrating its capability for real-world applications.
arXiv Detail & Related papers (2024-09-10T09:22:35Z)
- PPRSteg: Printing and Photography Robust QR Code Steganography via Attention Flow-Based Model [35.831644960576035]
QR Code steganography aims to embed a non-natural image into a natural image, and the restored QR Code must remain recognizable.
We propose a novel framework, called Printing and Photography Robust Steganography (PPRSteg), which is competent to hide QR Code in a host image with unperceivable changes.
arXiv Detail & Related papers (2024-05-26T03:16:40Z)
- Diffusion-based Aesthetic QR Code Generation via Scanning-Robust Perceptual Guidance [9.905296922309157]
QR codes, prevalent in daily applications, lack visual appeal due to their conventional black-and-white design.
We introduce a novel diffusion-model-based aesthetic QR code generation pipeline, utilizing pre-trained ControlNet and guided iterative refinement.
With extensive quantitative, qualitative, and subjective experiments, the results demonstrate that the proposed approach can generate diverse aesthetic QR codes with flexibility in detail.
arXiv Detail & Related papers (2024-03-23T16:08:48Z)
- Text2QR: Harmonizing Aesthetic Customization and Scanning Robustness for Text-Guided QR Code Generation [38.281805719692194]
In the digital era, QR codes serve as a linchpin connecting virtual and physical realms.
Prevailing methods grapple with the intrinsic challenge of balancing customization and scannability.
This paper introduces Text2QR, a pioneering approach leveraging stable-diffusion models.
arXiv Detail & Related papers (2024-03-11T06:03:31Z)
- Not All Image Regions Matter: Masked Vector Quantization for Autoregressive Image Generation [78.13793505707952]
Existing autoregressive models follow the two-stage generation paradigm that first learns a codebook in the latent space for image reconstruction and then completes the image generation autoregressively based on the learned codebook.
We propose a novel two-stage framework, which consists of a Masked Quantization VAE (MQ-VAE) and a Stack model, to avoid modeling redundancy.
arXiv Detail & Related papers (2023-05-23T02:15:53Z)
- Towards Accurate Image Coding: Improved Autoregressive Image Generation with Dynamic Vector Quantization [73.52943587514386]
Existing vector quantization (VQ) based autoregressive models follow a two-stage generation paradigm.
We propose a novel two-stage framework: (1) Dynamic-Quantization VAE (DQ-VAE), which encodes image regions into variable-length codes based on their information densities for accurate representation.
arXiv Detail & Related papers (2023-05-19T14:56:05Z)
- 3D-Aware Encoding for Style-based Neural Radiance Fields [50.118687869198716]
We learn an inversion function to project an input image to the latent space of a NeRF generator and then synthesize novel views of the original image based on the latent code.
Compared with GAN inversion for 2D generative models, NeRF inversion not only needs to 1) preserve the identity of the input image, but also 2) ensure 3D consistency in generated novel views.
We propose a two-stage encoder for style-based NeRF inversion.
arXiv Detail & Related papers (2022-11-12T06:14:12Z)
- VQFR: Blind Face Restoration with Vector-Quantized Dictionary and Parallel Decoder [83.63843671885716]
We propose a VQ-based face restoration method -- VQFR.
VQFR takes advantage of high-quality low-level feature banks extracted from high-quality faces.
To further fuse low-level features from inputs without "contaminating" the realistic details generated from the VQ codebook, we propose a parallel decoder.
arXiv Detail & Related papers (2022-05-13T17:54:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences.