DeepMorph: A System for Hiding Bitstrings in Morphable Vector Drawings
- URL: http://arxiv.org/abs/2011.09783v1
- Date: Thu, 19 Nov 2020 11:55:39 GMT
- Title: DeepMorph: A System for Hiding Bitstrings in Morphable Vector Drawings
- Authors: Søren Rasmussen, Karsten Østergaard Noe, Oliver Gyldenberg
Hjermitslev and Henrik Pedersen
- Abstract summary: DeepMorph is an information embedding technique for vector drawings.
Our method embeds bitstrings in the image by perturbing the drawing primitives.
We demonstrate that our method reliably recovers bitstrings from real-world photos of printed drawings.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DeepMorph, an information embedding technique for vector
drawings. Provided a vector drawing, such as a Scalable Vector Graphics (SVG)
file, our method embeds bitstrings in the image by perturbing the drawing
primitives (lines, circles, etc.). This results in a morphed image that can be
decoded to recover the original bitstring. The use-case is similar to that of
the well-known QR code, but our solution provides creatives with artistic
freedom to transfer digital information via drawings of their own design. The
method comprises two neural networks, which are trained jointly: an encoder
network that transforms a bitstring into a perturbation of the drawing
primitives, and a decoder network that recovers the bitstring from an image of
the morphed drawing. To enable end-to-end training via backpropagation, we
introduce a soft rasterizer, which is differentiable with respect to
perturbations of the drawing primitives. To add robustness to
real-world image capture conditions, image corruptions are injected between the
soft rasterizer and the decoder. Further, the addition of an object detection
and camera pose estimation system enables decoding of drawings in complex
scenes as well as use of the drawings as markers for use in augmented reality
applications. We demonstrate that our method reliably recovers bitstrings from
real-world photos of printed drawings, thereby providing a novel solution for
creatives to transfer digital information via artistic imagery.
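The core idea in the abstract — a rasterizer that remains differentiable with respect to primitive parameters, so that gradients can flow from the decoder back to the bitstring encoder — can be illustrated with a toy sketch. The code below is not the authors' implementation; it is a minimal, hypothetical example assuming a single circle primitive, where a pixel's intensity is a sigmoid of its signed distance to the circle boundary (rather than a hard inside/outside test), and a few bits perturb the circle's position:

```python
import numpy as np

def soft_rasterize_circle(cx, cy, r, size=32, sharpness=2.0):
    """Render a filled circle as a soft (differentiable) mask.

    Pixel intensity is a sigmoid of the signed distance to the
    circle boundary, so the output varies smoothly with (cx, cy, r)
    and gradients could flow back to the primitive parameters.
    """
    ys, xs = np.mgrid[0:size, 0:size].astype(np.float64)
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)  # distance to centre
    signed = r - dist                                # >0 inside, <0 outside
    z = np.clip(sharpness * signed, -60.0, 60.0)     # avoid exp overflow
    return 1.0 / (1.0 + np.exp(-z))                  # soft inside-mask

# Toy stand-in for the learned encoder: the bitstring perturbs a
# primitive parameter (here, the circle's x-position).
bits = [1, 0, 1]
delta = 0.5 * sum(b << i for i, b in enumerate(bits))  # bits -> offset
img = soft_rasterize_circle(16.0 + delta, 16.0, r=8.0)
```

In the actual system the perturbation is produced by a trained encoder network over all drawing primitives, and a decoder network recovers the bits from a photographed rendering; the soft boundary here is only meant to show why rasterization stays differentiable.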
Related papers
- DeepIcon: A Hierarchical Network for Layer-wise Icon Vectorization [12.82009632507056]
Recent learning-based methods for converting images to vector formats frequently suffer from incomplete shapes, redundant path prediction, and a lack of accuracy in preserving the semantics of the original content.
We present DeepIcon, a novel hierarchical image vectorization network specifically tailored for generating variable-length icon graphics from an image input.
arXiv Detail & Related papers (2024-10-21T08:20:19Z)
- A Compact Neural Network-based Algorithm for Robust Image Watermarking [30.727227627295548]
We propose a novel digital image watermarking solution with a compact neural network, named Invertible Watermarking Network (IWN).
Our IWN architecture is based on a single Invertible Neural Network (INN).
In order to enhance the robustness of our watermarking solution, we specifically introduce a simple but effective bit message normalization module.
arXiv Detail & Related papers (2021-12-27T03:20:45Z)
- SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches [95.45728042499836]
We propose a new paradigm of sketch-based image manipulation: mask-free local image manipulation.
Our model automatically predicts the target modification region and encodes it into a structure style vector.
A generator then synthesizes the new image content based on the style vector and sketch.
arXiv Detail & Related papers (2021-11-30T02:42:31Z)
- From Image to Imuge: Immunized Image Generation [23.430377385327308]
Imuge is an image tamper resilient generative scheme for image self-recovery.
We jointly train a U-Net backboned encoder, a tamper localization network and a decoder for image recovery.
We demonstrate that our method can recover the details of the tampered regions with a high quality despite the presence of various kinds of attacks.
arXiv Detail & Related papers (2021-10-27T05:56:15Z)
- Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to capture powerful representations under such complex conditions.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z)
- R-MNet: A Perceptual Adversarial Network for Image Inpainting [5.471225956329675]
We propose a Wasserstein GAN combined with a new reverse mask operator, namely Reverse Masking Network (R-MNet), a perceptual adversarial network for image inpainting.
We show that our method generalizes to high-resolution inpainting tasks and produces more realistic outputs that are plausible to the human visual system.
arXiv Detail & Related papers (2020-08-11T10:58:10Z)
- Swapping Autoencoder for Deep Image Manipulation [94.33114146172606]
We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation.
The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image.
Experiments on multiple datasets show that our model produces better results and is substantially more efficient compared to recent generative models.
arXiv Detail & Related papers (2020-07-01T17:59:57Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
- Semantic Image Manipulation Using Scene Graphs [105.03614132953285]
We introduce a semantic scene graph network that does not require direct supervision for constellation changes or image edits.
This makes it possible to train the system from existing real-world datasets with no additional annotation effort.
arXiv Detail & Related papers (2020-04-07T20:02:49Z)
- In-Domain GAN Inversion for Real Image Editing [56.924323432048304]
A common practice when feeding a real image to a trained GAN generator is to first invert the image back to a latent code.
Existing inversion methods typically focus on reconstructing the target image by pixel values yet fail to land the inverted code in the semantic domain of the original latent space.
We propose an in-domain GAN inversion approach, which faithfully reconstructs the input image and ensures the inverted code to be semantically meaningful for editing.
arXiv Detail & Related papers (2020-03-31T18:20:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.