Distortion Agnostic Deep Watermarking
- URL: http://arxiv.org/abs/2001.04580v1
- Date: Tue, 14 Jan 2020 01:04:59 GMT
- Title: Distortion Agnostic Deep Watermarking
- Authors: Xiyang Luo, Ruohan Zhan, Huiwen Chang, Feng Yang, Peyman Milanfar
- Abstract summary: We propose a new framework for distortion-agnostic watermarking.
The robustness of our system comes from two sources: adversarial training and channel coding.
- Score: 21.493370728114815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Watermarking is the process of embedding information into an image that can
survive under distortions, while requiring the encoded image to have little or
no perceptual difference from the original image. Recently, deep learning-based
methods achieved impressive results in both visual quality and message payload
under a wide variety of image distortions. However, these methods all require
differentiable models for the image distortions at training time, and may
generalize poorly to unknown distortions. This is undesirable since the types
of distortions applied to watermarked images are usually unknown and
non-differentiable. In this paper, we propose a new framework for
distortion-agnostic watermarking, where the image distortion is not explicitly
modeled during training. Instead, the robustness of our system comes from two
sources: adversarial training and channel coding. Compared to training on a
fixed set of distortions and noise levels, our method achieves comparable or
better results on distortions available during training, and better performance
on unknown distortions.
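The two robustness sources named in the abstract can be illustrated with a minimal PyTorch sketch. The repetition code and the single-step adversarial perturbation below are illustrative stand-ins for the paper's channel-coding and adversarial-training components; the encoder/decoder interfaces are assumptions, not the authors' implementation.

```python
# Minimal sketch of the two robustness sources: channel coding on the message,
# and an adversarially generated distortion applied to the watermarked image
# during training. Repetition coding and the FGSM-style step are stand-ins.
import torch
import torch.nn.functional as F

def channel_encode(msg_bits, k=3):
    # Add redundancy: repeat each bit k times so the message survives bit flips.
    return msg_bits.repeat_interleave(k, dim=-1)

def channel_decode(soft_bits, k=3):
    # Recover the message by majority vote over each group of k repeated bits.
    groups = soft_bits.view(*soft_bits.shape[:-1], -1, k)
    return (groups.mean(dim=-1) > 0.5).float()

def adversarial_distortion(watermarked, decoder, coded_bits, eps=0.03):
    # Worst-case distortion: one gradient-sign step on the image that maximizes
    # the decoder's message loss, used instead of a fixed, hand-modeled distortion.
    x = watermarked.detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(decoder(x), coded_bits)
    loss.backward()
    return (watermarked + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
```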
Related papers
- Predicting the Reliability of an Image Classifier under Image Distortion [48.866196348385]
In image classification tasks, deep learning models are vulnerable to image distortions.
For quality control purposes, it is important to predict whether the image classifier is reliable under a given distortion level.
Our solution is to construct a training set of distortion levels together with their "reliable" or "non-reliable" labels, and to train a predictive model (the distortion-classifier) to classify unseen distortion levels.
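A minimal sketch of this pipeline, assuming a scalar distortion level, an accuracy threshold for the reliable/non-reliable labels, and a random-forest predictor (all assumptions, not details from the paper):

```python
# Build the (distortion level -> reliable/non-reliable) training set and fit
# a distortion-classifier; threshold and model choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_reliability_dataset(accuracy_at, distortion_levels, acc_threshold=0.8):
    # accuracy_at: callable returning the image classifier's validation
    # accuracy under a given distortion level.
    X = np.asarray(distortion_levels, dtype=float).reshape(-1, 1)
    y = np.array([accuracy_at(d) >= acc_threshold for d in distortion_levels], dtype=int)
    return X, y  # 1 = "reliable", 0 = "non-reliable"

# distortion_clf = RandomForestClassifier().fit(*build_reliability_dataset(accuracy_at, levels))
# distortion_clf.predict([[unseen_level]])  # predicted reliability at an unseen distortion level
```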
arXiv Detail & Related papers (2024-12-22T06:21:06Z)
- Data Attribution for Text-to-Image Models by Unlearning Synthesized Images [71.23012718682634]
The goal of data attribution for text-to-image models is to identify the training images that most influence the generation of a new image.
We propose an efficient data attribution method by simulating unlearning the synthesized image.
We then identify training images with significant loss deviations after the unlearning process and label these as influential.
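A rough sketch of the unlearning-based attribution loop; the gradient-ascent "unlearning", step count, and loss interface are assumptions standing in for the paper's procedure:

```python
# Simulate unlearning the synthesized image, then rank training batches by
# how much their loss deviates afterwards; hyper-parameters are assumptions.
import copy
import torch

def attribute_by_unlearning(model, loss_fn, synth_batch, train_loader, steps=5, lr=1e-4, top_k=10):
    unlearned = copy.deepcopy(model)
    opt = torch.optim.SGD(unlearned.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        (-loss_fn(unlearned, synth_batch)).backward()  # ascend the loss on the synthesized image
        opt.step()
    deviations = []
    with torch.no_grad():
        for idx, batch in enumerate(train_loader):
            dev = (loss_fn(unlearned, batch) - loss_fn(model, batch)).abs().item()
            deviations.append((dev, idx))
    return sorted(deviations, reverse=True)[:top_k]  # most influential training batches
```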
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
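A minimal sketch of the masking idea; the mask ratio, per-pixel masking granularity, and L1 reconstruction loss are assumptions:

```python
# Randomly drop input pixels and train the denoiser to reconstruct the clean
# image, so it cannot overfit to one specific noise pattern.
import torch
import torch.nn.functional as F

def masked_training_step(denoiser, noisy, clean, mask_ratio=0.3):
    mask = (torch.rand_like(noisy[:, :1]) > mask_ratio).float()  # 1 = keep pixel
    pred = denoiser(noisy * mask)   # denoise from the partially masked input
    return F.l1_loss(pred, clean)   # reconstruct the full clean image
```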
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- Data Generation using Texture Co-occurrence and Spatial Self-Similarity for Debiasing [6.976822832216875]
We propose a novel de-biasing approach that explicitly generates additional images using texture representations of oppositely labeled images.
Each generated image preserves the spatial information of a source image while transferring textures from a target image of the opposite label.
Our model integrates a texture co-occurrence loss that determines whether a generated image's texture is similar to that of the target, and a spatial self-similarity loss that determines whether the spatial details between the generated and source images are well preserved.
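A hedged sketch of the two loss terms; the Gram-matrix texture statistics and the feature self-correlation map below are common stand-ins, not necessarily the paper's exact formulations:

```python
# The texture term pulls the generated image's texture toward the target image;
# the self-similarity term preserves the source image's spatial layout.
import torch
import torch.nn.functional as F

def gram(feat):
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)   # channel co-occurrence statistics

def self_similarity(feat):
    b, c, h, w = feat.shape
    f = F.normalize(feat.view(b, c, h * w), dim=1)
    return f.transpose(1, 2) @ f                 # (hw x hw) spatial correlation map

def debias_losses(feat_generated, feat_source, feat_target):
    texture_loss = F.mse_loss(gram(feat_generated), gram(feat_target))
    spatial_loss = F.mse_loss(self_similarity(feat_generated), self_similarity(feat_source))
    return texture_loss, spatial_loss
```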
arXiv Detail & Related papers (2021-10-15T08:04:59Z)
- SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on the insight that the rectified results of distorted images of the same scene, captured with different lenses, should be identical.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
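A sketch of the core self-supervision signal; the parameter network, the `warp` interface, and the L1 losses are assumptions rather than the paper's exact design:

```python
# Rectified outputs of two differently-distorted views of the same scene
# should agree, and re-distorting a rectified image with its predicted
# parameters should reproduce the input; interfaces are assumptions.
import torch.nn.functional as F

def sir_losses(param_net, warp, img_a, img_b):
    params_a, params_b = param_net(img_a), param_net(img_b)
    rect_a = warp(img_a, params_a, inverse=True)   # rectify view A
    rect_b = warp(img_b, params_b, inverse=True)   # rectify view B
    consistency = F.l1_loss(rect_a, rect_b)        # same scene -> same rectified result
    recon = F.l1_loss(warp(rect_a, params_a, inverse=False), img_a)  # re-distortion check
    return consistency, recon
```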
arXiv Detail & Related papers (2020-11-30T08:23:25Z)
- Generative and Discriminative Learning for Distorted Image Restoration [22.230017059874445]
Liquify is an image-editing technique that can be used to distort images.
We propose a novel generative and discriminative learning method based on deep neural networks.
arXiv Detail & Related papers (2020-11-11T14:01:29Z)
- A Deep Ordinal Distortion Estimation Approach for Distortion Rectification [62.72089758481803]
We propose a novel distortion rectification approach that can obtain more accurate parameters with higher efficiency.
We design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution.
Considering the redundancy of distortion information, our approach uses only a part of the distorted image for ordinal distortion estimation.
arXiv Detail & Related papers (2020-07-21T10:03:42Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is indeed feasible.
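A hedged sketch of the reblur-consistency signal; the fixed horizontal blur kernel below is a simplification of the paper's differentiable linear-motion reblur model:

```python
# Push the predicted sharp image back through a differentiable linear-motion
# blur and require it to match the blurry input; the kernel is a stand-in.
import torch
import torch.nn.functional as F

def linear_motion_kernel(length=9):
    # Horizontal motion-blur kernel (uniform averaging along one row).
    return torch.full((1, 1, 1, length), 1.0 / length)

def self_supervised_deblur_loss(deblur_net, blurry):
    sharp = deblur_net(blurry)                                   # predicted sharp image
    c = blurry.shape[1]
    kernel = linear_motion_kernel().to(blurry.device).repeat(c, 1, 1, 1)
    reblurred = F.conv2d(sharp, kernel, padding=(0, kernel.shape[-1] // 2), groups=c)
    return F.l1_loss(reblurred, blurry)                          # reblur consistency
```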
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences of its use.