Markpainting: Adversarial Machine Learning meets Inpainting
- URL: http://arxiv.org/abs/2106.00660v1
- Date: Tue, 1 Jun 2021 17:45:52 GMT
- Title: Markpainting: Adversarial Machine Learning meets Inpainting
- Authors: David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross Anderson
- Abstract summary: Inpainting is a learned technique that is used to populate masked or missing pieces in an image.
We show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information.
We show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove.
- Score: 17.52885087481822
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting.
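A minimal sketch of the kind of optimization the abstract describes, assuming a hypothetical differentiable inpainting model with signature inpaint_model(image, mask); this is an illustration of the idea, not the authors' implementation:
```python
# Markpainting as bounded adversarial optimization (illustrative sketch).
# `inpaint_model` is a hypothetical differentiable inpainter; `mask` is 1
# where an editor would ask the model to fill, and `target` carries the
# mark that should reappear there.
import torch

def markpaint(image, mask, target, inpaint_model, eps=8 / 255, steps=200, lr=1e-2):
    """image, target: (1, 3, H, W) in [0, 1]; mask: (1, 1, H, W)."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Only the visible pixels are perturbed: the masked ones are
        # exactly what the editor will throw away.
        x = (image + delta * (1 - mask)).clamp(0, 1)
        filled = inpaint_model(x, mask)
        # Drive the model's fill toward the target mark inside the mask.
        loss = ((filled - target) * mask).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation near-invisible
    return (image + delta.detach() * (1 - mask)).clamp(0, 1)
```
Summing this loss over several inpainting models would be one way to target multiple models simultaneously, as the abstract notes.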
Related papers
- Paint by Inpaint: Learning to Add Image Objects by Removing Them First [8.399234415641319]
We train a diffusion model to invert the inpainting process, effectively adding objects to images.
We provide detailed descriptions of the removed objects and use a Large Language Model to convert these descriptions into diverse, natural-language instructions.
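A hedged sketch of how such training triples could be assembled (the `inpainter` and `captioner` callables are hypothetical stand-ins, not the paper's pipeline):
```python
# Build one (source, instruction, target) triple for instruction-based
# object addition: remove the object with an off-the-shelf inpainter and
# keep the original image as the ground-truth "after adding" target.
def build_training_pair(image, object_mask, inpainter, captioner):
    without_object = inpainter(image, object_mask)    # object removed
    description = captioner(image, object_mask)       # e.g. "a red bicycle"
    instruction = f"Add {description} to the image."  # an LLM would diversify this
    return without_object, instruction, image
```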
arXiv Detail & Related papers (2024-04-28T15:07:53Z)
- A Somewhat Robust Image Watermark against Diffusion-based Editing Models [25.034612051522167]
Editing models based on diffusion models (DMs) have inadvertently introduced new challenges related to image copyright infringement and malicious editing.
We develop a novel technique, RIW (Robust Invisible Watermarking), to embed invisible watermarks that survive diffusion-based editing.
Our technique ensures a high extraction accuracy of 96% for the invisible watermark after editing, compared to the 0% offered by conventional methods.
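"Extraction accuracy" here can be read as the fraction of watermark payload bits recovered after the image is edited; a toy version, with a hypothetical `decoder`:
```python
# Toy illustration only: bit-level extraction accuracy of a decoded watermark.
import numpy as np

def extraction_accuracy(edited_image, true_bits, decoder):
    decoded = np.asarray(decoder(edited_image))  # hypothetical extractor
    return float(np.mean(decoded == np.asarray(true_bits)))
```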
arXiv Detail & Related papers (2023-11-22T22:18:42Z)
- Reference-based Painterly Inpainting via Diffusion: Crossing the Wild Reference Domain Gap [80.19252970827552]
RefPaint is a novel task that crosses the wild reference domain gap and implants novel objects into artworks.
Our method enables creative painterly image inpainting with reference objects that would otherwise be difficult to achieve.
arXiv Detail & Related papers (2023-07-20T04:51:10Z)
- PaintSeg: Training-free Segmentation via Painting [50.17936803209125]
PaintSeg is a new unsupervised method for segmenting objects without any training.
Inpainting and outpainting are alternated, with the former masking the foreground and filling in the background, and the latter masking the background while recovering the missing part of the foreground object.
Our experimental results demonstrate that PaintSeg outperforms existing approaches in coarse mask-prompt, box-prompt, and point-prompt segmentation tasks.
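One round of that alternation might look like the following sketch (the `inpaint` model and the mask-update rule are illustrative stand-ins, not the authors' method):
```python
# One I-step/O-step round: inpaint the current foreground to model the
# background, outpaint the background to recover the foreground, then keep
# each pixel on whichever side explains it better.
import numpy as np

def paintseg_round(image, fg_mask, inpaint):
    bg_fill = inpaint(image, fg_mask)        # I-step: fill masked foreground
    fg_fill = inpaint(image, 1 - fg_mask)    # O-step: recover masked foreground
    err_bg = np.abs(image - bg_fill).sum(axis=-1)
    err_fg = np.abs(image - fg_fill).sum(axis=-1)
    return (err_bg > err_fg).astype(np.float32)  # updated foreground mask
```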
arXiv Detail & Related papers (2023-05-30T20:43:42Z)
- Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area.
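The metric as stated reduces to a one-line computation over binary masks (a sketch, assuming the artifact mask lies within the inpainted region):
```python
# PAR = (pixels flagged as perceptual artifacts) / (all inpainted pixels).
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpainted_mask):
    return float(artifact_mask.sum()) / max(float(inpainted_mask.sum()), 1.0)
```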
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
- RePaint: Inpainting using Denoising Diffusion Probabilistic Models [161.74792336127345]
Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask.
We propose RePaint: a Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks.
We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
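The core conditioning trick can be condensed into one reverse step, sketched here against an assumed standard DDPM interface (`q_sample` forward-diffuses, `p_sample` draws one reverse step); this is a paraphrase, not the released code:
```python
# One RePaint-style reverse step: known pixels are re-sampled from the
# forward process on the original image; only missing pixels (mask == 1)
# come from the model's denoising step.
def repaint_step(x_t, t, x0, mask, ddpm):
    known = ddpm.q_sample(x0, t - 1)   # noisy version of the known content
    unknown = ddpm.p_sample(x_t, t)    # model's reverse step on everything
    return mask * unknown + (1 - mask) * known
```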
arXiv Detail & Related papers (2022-01-24T18:40:15Z)
- SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches [95.45728042499836]
We propose a new paradigm of sketch-based image manipulation: mask-free local image manipulation.
Our model automatically predicts the target modification region and encodes it into a structure style vector.
A generator then synthesizes the new image content based on the style vector and sketch.
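As an interface sketch of that pipeline (all three modules are hypothetical placeholders, not the released model):
```python
# Mask-free editing: the modification region is predicted from the sketch
# rather than supplied by the user, then re-synthesized by the generator.
def sketchedit_forward(image, sketch, region_predictor, style_encoder, generator):
    region = region_predictor(image, sketch)  # predicted target region
    style = style_encoder(image, region)      # structure style vector
    new_content = generator(style, sketch)    # synthesized edit
    return image * (1 - region) + new_content * region
```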
arXiv Detail & Related papers (2021-11-30T02:42:31Z)
- In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
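A rough sketch of that positional conditioning idea, with a hypothetical patch generator(z, position); outpainting then amounts to rendering grid positions beyond the original border:
```python
# Tile a canvas from micro-patches that share one latent code but differ in
# their grid positions; positions outside the input extend the image.
import torch

def render_canvas(generator, z, grid_h, grid_w):
    rows = []
    for i in range(grid_h):
        row = [generator(z, torch.tensor([i, j])) for j in range(grid_w)]
        rows.append(torch.cat(row, dim=-1))   # stitch patches along width
    return torch.cat(rows, dim=-2)            # stitch rows along height
```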
arXiv Detail & Related papers (2021-04-01T17:59:10Z)