SimpSON: Simplifying Photo Cleanup with Single-Click Distracting Object
Segmentation Network
- URL: http://arxiv.org/abs/2305.17624v1
- Date: Sun, 28 May 2023 04:05:24 GMT
- Title: SimpSON: Simplifying Photo Cleanup with Single-Click Distracting Object
Segmentation Network
- Authors: Chuong Huynh, Yuqian Zhou, Zhe Lin, Connelly Barnes, Eli Shechtman,
Sohrab Amirghodsi, Abhinav Shrivastava
- Abstract summary: We propose an interactive distractor selection method that is optimized to achieve the task with just a single click.
Our method surpasses the precision and recall achieved by the traditional method of running panoptic segmentation.
Our experiments demonstrate that the model can effectively and accurately segment unknown distracting objects interactively and in groups.
- Score: 70.89436857471887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In photo editing, it is common practice to remove visual distractions to
improve the overall image quality and highlight the primary subject. However,
manually selecting and removing these small and dense distracting regions can
be a laborious and time-consuming task. In this paper, we propose an
interactive distractor selection method that is optimized to achieve the task
with just a single click. Our method surpasses the precision and recall
achieved by the traditional method of running panoptic segmentation and then
selecting the segments containing the clicks. We also showcase how a
transformer-based module can be used to identify more distracting regions
similar to the user's click position. Our experiments demonstrate that the
model can effectively and accurately segment unknown distracting objects
interactively and in groups. By significantly simplifying the photo cleaning
and retouching process, our proposed model provides inspiration for exploring
rare object segmentation and group selection with a single click.
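To make the single-click selection idea concrete, here is a minimal, hypothetical sketch (not the SimpSON architecture): it grows a mask from the clicked position by cosine similarity over a dense feature map. The feature extractor, threshold, and function names are all assumptions.

```python
import numpy as np

def click_to_mask(features: np.ndarray, click_yx: tuple,
                  threshold: float = 0.8) -> np.ndarray:
    """Illustrative single-click selection: grow a binary mask from the
    feature vector at the clicked pixel via cosine similarity.

    features: (H, W, C) dense feature map from any backbone (assumed input).
    click_yx: (row, col) of the user's click.
    """
    h, w, c = features.shape
    # L2-normalize so dot products become cosine similarities.
    norm = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    query = norm[click_yx[0], click_yx[1]]          # (C,) clicked feature
    similarity = norm.reshape(-1, c) @ query        # (H*W,) cosine scores
    return similarity.reshape(h, w) >= threshold    # binary mask

# Toy usage: random features stand in for a real backbone's output.
feats = np.random.rand(64, 64, 32).astype(np.float32)
mask = click_to_mask(feats, click_yx=(10, 20))
print(mask.shape, mask.sum())
```

Because the threshold is applied over the whole image, every region resembling the clicked distractor is selected at once, which loosely mirrors the group-selection behavior described above.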
Related papers
- Learning from Exemplars for Interactive Image Segmentation [15.37506525730218]
We introduce novel interactive segmentation frameworks for both a single object and multiple objects in the same category.
Our model reduces users' labor by around 15%, requiring two fewer clicks to reach target IoUs of 85% and 90%.
arXiv Detail & Related papers (2024-06-17T12:38:01Z)
- Zero-shot Image Editing with Reference Imitation [50.75310094611476]
We present a new form of editing, termed imitative editing, to help users exercise their creativity more conveniently.
We propose a generative training framework, dubbed MimicBrush, which randomly selects two frames from a video clip, masks some regions of one frame, and learns to recover the masked regions using the information from the other frame.
We experimentally show the effectiveness of our method under various test cases as well as its superiority over existing alternatives.
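As a rough sketch of this training recipe (illustrative only; MimicBrush's actual data pipeline, masking strategy, and parameter names are not specified here and are assumptions):

```python
import numpy as np

def make_training_pair(clip: np.ndarray, rng: np.random.Generator):
    """Sketch of the described recipe: pick two frames from a video clip,
    mask a random box in one, and use the other frame as the reference.

    clip: (T, H, W, 3) video frames in [0, 1] (assumed input format).
    Returns (masked_target, mask, reference, target) for a recovery loss.
    """
    t, h, w, _ = clip.shape
    i, j = rng.choice(t, size=2, replace=False)       # two distinct frames
    target, reference = clip[i], clip[j]
    # Random rectangular mask covering part of the target frame.
    y0, x0 = rng.integers(0, h // 2), rng.integers(0, w // 2)
    y1 = y0 + rng.integers(h // 4, h // 2)
    x1 = x0 + rng.integers(w // 4, w // 2)
    mask = np.zeros((h, w, 1), dtype=np.float32)
    mask[y0:y1, x0:x1] = 1.0
    masked_target = target * (1.0 - mask)             # regions to recover
    return masked_target, mask, reference, target

rng = np.random.default_rng(0)
clip = rng.random((8, 128, 128, 3)).astype(np.float32)
masked, mask, ref, tgt = make_training_pair(clip, rng)
```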
arXiv Detail & Related papers (2024-06-11T17:59:51Z)
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
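A hedged sketch of the early-step interpolation idea (the actual method operates on diffusion attention features; the linear schedule and names below are assumptions):

```python
import numpy as np

def blend_features(src: np.ndarray, tgt: np.ndarray,
                   step: int, num_steps: int, early_frac: float = 0.3):
    """Illustrative linear schedule: during the first `early_frac` of the
    denoising steps, interpolate from source-dominated to target-dominated
    features; afterwards, pass the target features through unchanged."""
    cutoff = int(num_steps * early_frac)
    if step >= cutoff:
        return tgt
    alpha = step / max(cutoff, 1)   # 0 -> source, 1 -> target
    return (1.0 - alpha) * src + alpha * tgt

# Toy usage with random stand-ins for attention features.
src = np.random.rand(16, 64)
tgt = np.random.rand(16, 64)
for t in range(0, 50, 10):
    out = blend_features(src, tgt, step=t, num_steps=50)
```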
arXiv Detail & Related papers (2024-06-03T17:59:53Z)
- IFSENet: Harnessing Sparse Iterations for Interactive Few-shot Segmentation Excellence [2.822194296769473]
Few-shot segmentation techniques reduce the required number of images to learn to segment a new class.
Interactive segmentation techniques, on the other hand, focus only on incrementally improving the segmentation of one object at a time.
We combine the two concepts to drastically reduce the effort required to train segmentation models for novel classes.
arXiv Detail & Related papers (2024-03-22T10:15:53Z)
- QIS: Interactive Segmentation via Quasi-Conformal Mappings [3.096214093393036]
We propose a quasi-conformal interactive segmentation (QIS) model, which incorporates user input in the form of positive and negative clicks.
We provide a thorough analysis of the proposed model, including theoretical support for the ability of QIS to include or exclude regions of interest.
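One generic way to feed such clicks to a model (an assumption for illustration, not the QIS formulation itself) is to encode them as a pair of binary disk maps, one per click polarity:

```python
import numpy as np

def click_maps(shape, pos_clicks, neg_clicks, radius=5):
    """Generic click encoding (an assumption, not QIS's formulation): one
    binary disk map per click polarity, stacked as two input channels."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    def disks(clicks):
        m = np.zeros((h, w), dtype=np.float32)
        for cy, cx in clicks:
            disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
            m = np.maximum(m, disk.astype(np.float32))
        return m
    return np.stack([disks(pos_clicks), disks(neg_clicks)], axis=-1)

maps = click_maps((64, 64), pos_clicks=[(20, 20)], neg_clicks=[(40, 50)])
print(maps.shape)  # (64, 64, 2)
```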
arXiv Detail & Related papers (2024-02-22T16:49:58Z)
- A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection [56.82077636126353]
We take advantage of object-centric images to improve object detection in scene-centric images.
We present a simple yet surprisingly effective framework to do so.
Our approach improves the object detection (and instance segmentation) accuracy of rare objects by 50% (and 33%) in relative terms.
arXiv Detail & Related papers (2021-02-17T17:27:21Z)
- Self-supervised Segmentation via Background Inpainting [96.10971980098196]
We introduce a self-supervised detection and segmentation approach that can work with single images captured by a potentially moving camera.
We introduce a self-supervised loss function that we use to train a proposal-based segmentation network.
We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks and outperform existing self-supervised methods.
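The intuition can be sketched as follows (a generic stand-in, not the paper's exact objective): a good foreground mask is one whose interior an inpainter, given only the background, fails to reconstruct. All names below are assumptions, and the toy inpainter simply copies the mean background color.

```python
import numpy as np

def inpainting_loss(image, mask, inpaint_fn):
    """Generic sketch of a background-inpainting objective (names and exact
    form are assumptions). `inpaint_fn` fills the masked region from the
    remaining background; the loss rewards masks whose interior the
    inpainter cannot reproduce, i.e. true foreground objects."""
    reconstructed = inpaint_fn(image * (1.0 - mask), mask)
    err = np.abs(reconstructed - image).mean(axis=-1, keepdims=True)
    # High error inside the mask => region is not explainable by background.
    foreground_score = (err * mask).sum() / (mask.sum() + 1e-8)
    return -foreground_score  # minimize to maximize in-mask error

# Toy "inpainter": fill the masked region with the mean background color.
def mean_fill(masked_img, mask):
    bg_mean = masked_img.sum((0, 1)) / ((1.0 - mask).sum() + 1e-8)
    return masked_img + mask * bg_mean

img = np.random.rand(64, 64, 3).astype(np.float32)
m = np.zeros((64, 64, 1), dtype=np.float32)
m[20:40, 20:40] = 1.0
print(inpainting_loss(img, m, mean_fill))
```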
arXiv Detail & Related papers (2020-11-11T08:34:40Z)
- Localized Interactive Instance Segmentation [24.55415554455844]
We propose a clicking scheme wherein user interactions are restricted to the proximity of the object.
We demonstrate the effectiveness of our proposed clicking scheme and localization strategy through detailed experimentation.
arXiv Detail & Related papers (2020-10-18T23:24:09Z)
- Look here! A parametric learning based approach to redirect visual attention [49.609412873346386]
We introduce an automatic method to make an image region more attention-capturing via subtle image edits.
Our model predicts a distinct set of global parametric transformations to be applied to the foreground and background image regions.
Our edits enable inference at interactive rates on any image size, and easily generalize to videos.
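As a toy illustration of mask-aware parametric editing (a sketch assuming a single brightness gain per region, not the paper's learned set of transformations; all names are assumptions):

```python
import numpy as np

def apply_parametric_edit(image, mask, fg_gain=1.15, bg_gain=0.85):
    """Toy version of the idea: apply one global parametric transform
    (here, just a brightness gain) to the foreground and another to the
    background, selected by a soft mask."""
    edited = fg_gain * image * mask + bg_gain * image * (1.0 - mask)
    return np.clip(edited, 0.0, 1.0)

img = np.random.rand(64, 64, 3).astype(np.float32)
m = np.zeros((64, 64, 1), dtype=np.float32)
m[16:48, 16:48] = 1.0
out = apply_parametric_edit(img, m)
```

Because the parameters are global per region rather than per pixel, such edits stay subtle and can run at interactive rates on any image size, matching the behavior described above.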
arXiv Detail & Related papers (2020-08-12T16:08:36Z)