Deep Saliency Prior for Reducing Visual Distraction
- URL: http://arxiv.org/abs/2109.01980v1
- Date: Sun, 5 Sep 2021 03:19:21 GMT
- Title: Deep Saliency Prior for Reducing Visual Distraction
- Authors: Kfir Aberman, Junfeng He, Yossi Gandelsman, Inbar Mosseri, David E.
Jacobs, Kai Kohlhoff, Yael Pritch, Michael Rubinstein
- Abstract summary: We produce a range of powerful editing effects for reducing distraction in images.
The resulting effects are consistent with cognitive research on the human visual system.
We present results on a variety of natural images and conduct a perceptual study to evaluate and validate the changes in viewers' eye-gaze between the original images and our edited results.
- Score: 12.28561668097479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using only a model that was trained to predict where people look at images,
and no additional training data, we can produce a range of powerful editing
effects for reducing distraction in images. Given an image and a mask
specifying the region to edit, we backpropagate through a state-of-the-art
saliency model to parameterize a differentiable editing operator, such that the
saliency within the masked region is reduced. We demonstrate several operators,
including: a recoloring operator, which learns to apply a color transform that
camouflages and blends distractors into their surroundings; a warping operator,
which warps less salient image regions to cover distractors, gradually
collapsing objects into themselves and effectively removing them (an effect
akin to inpainting); a GAN operator, which uses a semantic prior to fully
replace image regions with plausible, less salient alternatives. The resulting
effects are consistent with cognitive research on the human visual system
(e.g., since color mismatch is salient, the recoloring operator learns to
harmonize objects' colors with their surroundings to reduce their saliency),
and, importantly, are all achieved solely through the guidance of the
pretrained saliency model, with no additional supervision. We present results
on a variety of natural images and conduct a perceptual study to evaluate and
validate the changes in viewers' eye-gaze between the original images and our
edited results.
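The core optimization is compact enough to sketch. Below is a minimal, hypothetical implementation of the framework for the recoloring case, assuming a frozen pretrained `saliency_model` that maps a `(1, 3, H, W)` image to a `(1, 1, H, W)` saliency map; the per-channel affine color transform is the simplest stand-in for the paper's learned recoloring operator, not the authors' exact parameterization.

```python
import torch

def reduce_distraction(image, mask, saliency_model, steps=200, lr=1e-2):
    """image: (1, 3, H, W) in [0, 1]; mask: (1, 1, H, W), 1 inside the edit region."""
    # Freeze the saliency model: it only guides the edit and is never trained.
    for p in saliency_model.parameters():
        p.requires_grad_(False)

    # A simple differentiable recoloring operator: per-channel gain and bias.
    gain = torch.ones(1, 3, 1, 1, requires_grad=True)
    bias = torch.zeros(1, 3, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([gain, bias], lr=lr)

    for _ in range(steps):
        # Apply the operator only inside the masked region.
        edited = image * (1 - mask) + (gain * image + bias).clamp(0, 1) * mask
        # Backpropagate through the frozen saliency model into gain/bias,
        # pushing saliency down inside the mask.
        loss = (saliency_model(edited) * mask).sum() / mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        return image * (1 - mask) + (gain * image + bias).clamp(0, 1) * mask
```

Swapping the affine recolor for a differentiable warp field or a GAN latent code yields the paper's other operators; only the parameterization of the edit changes, while the saliency loss stays the same.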
Related papers
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
arXiv Detail & Related papers (2024-06-03T17:59:53Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of spatial correlation, which exhibits generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Color Invariant Skin Segmentation [17.501659517108884]
This paper addresses the problem of automatically detecting human skin in images without reliance on color information.
A primary motivation of the work has been to achieve results that are consistent across the full range of skin tones.
We present a new approach that performs well in the absence of such information.
arXiv Detail & Related papers (2022-04-21T05:07:21Z)
- LTT-GAN: Looking Through Turbulence by Inverting GANs [86.25869403782957]
We propose the first turbulence mitigation method that makes use of visual priors encapsulated by a well-trained GAN.
Based on these visual priors, our method learns to preserve the identity of restored images via a periodic contextual distance.
Our method significantly outperforms prior art in both the visual quality and face verification accuracy of restored results.
arXiv Detail & Related papers (2021-12-04T16:42:13Z)
- Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space Navigation [136.53288628437355]
Controllable semantic image editing enables a user to change entire image attributes with a few clicks.
Current approaches often suffer from attribute edits that are entangled, global image identity changes, and diminished photo-realism.
We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work which primarily focuses on qualitative evaluation.
arXiv Detail & Related papers (2021-02-01T21:38:36Z)
- Color and Edge-Aware Adversarial Image Perturbations [0.0]
We develop two new methods for constructing adversarial perturbations.
The Edge-Aware method reduces the magnitude of perturbations permitted in smooth regions of an image (a minimal sketch of this idea appears after the list).
The Color-Aware and Edge-Aware methods can also be applied simultaneously.
arXiv Detail & Related papers (2020-08-28T03:02:20Z)
- Look here! A parametric learning based approach to redirect visual attention [49.609412873346386]
We introduce an automatic method to make an image region more attention-capturing via subtle image edits.
Our model predicts a distinct set of global parametric transformations to be applied to the foreground and background image regions.
Our edits enable inference at interactive rates on any image size, and easily generalize to videos.
arXiv Detail & Related papers (2020-08-12T16:08:36Z)
- Disentangle Perceptual Learning through Online Contrastive Learning [16.534353501066203]
Producing results that look realistic to human visual perception is the central concern in image transformation tasks.
In this paper, we argue that, among the feature representations of a pre-trained classification network, only a limited number of dimensions are related to human visual perception.
Under this assumption, we disentangle the perception-relevant dimensions from the representation through our proposed online contrastive learning.
arXiv Detail & Related papers (2020-06-24T06:48:38Z)
- Watching the World Go By: Representation Learning from Unlabeled Videos [78.22211989028585]
Recent single image unsupervised representation learning techniques show remarkable success on a variety of tasks.
In this paper, we argue that videos offer natural augmentation for free.
We propose Video Noise Contrastive Estimation, a method for using unlabeled video to learn strong, transferable single image representations.
arXiv Detail & Related papers (2020-03-18T00:07:21Z)
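Referring back to the Color and Edge-Aware entry above, here is a minimal sketch of the edge-aware weighting idea, grafted onto an FGSM-style attack for concreteness. The Sobel-based edge map and the `model`/`loss_fn` names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def edge_aware_fgsm(image, label, model, loss_fn, eps=0.03):
    """image: (1, 3, H, W) in [0, 1]; scales the FGSM budget by local edge strength."""
    # Sobel gradient magnitude on the grayscale image: near 0 in smooth regions.
    gray = image.detach().mean(dim=1, keepdim=True)
    kx = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]]])
    ky = kx.transpose(2, 3)
    edges = (F.conv2d(gray, kx, padding=1) ** 2 +
             F.conv2d(gray, ky, padding=1) ** 2).sqrt()
    edges = edges / (edges.max() + 1e-8)  # normalize to [0, 1]

    # Standard FGSM direction ...
    x = image.detach().clone().requires_grad_(True)
    loss_fn(model(x), label).backward()
    # ... but the per-pixel budget shrinks where the image is smooth.
    perturb = eps * edges * x.grad.sign()
    return (image + perturb).clamp(0, 1).detach()
```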
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.