Deep Image Compositing
- URL: http://arxiv.org/abs/2103.15446v1
- Date: Mon, 29 Mar 2021 09:23:37 GMT
- Title: Deep Image Compositing
- Authors: Shivangi Aneja and Soham Mazumder
- Abstract summary: In image editing, the most common task is pasting objects from one image into another and then adjusting the appearance of the foreground object to match the background.
To achieve this, we use Generative Adversarial Networks (GANs).
The GAN learns to decode the color histograms of the foreground and background parts of the image and to blend the foreground object with the background.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In image editing, the most common task is pasting objects from one
image into another and then adjusting the appearance of the foreground object
so that it matches the background. This task is called image compositing, and
it is a challenging problem that requires professional editing skills and a
considerable amount of time. Not only are these professionals expensive to
hire, but the tools used for such tasks (like Adobe Photoshop) are also
expensive to purchase, putting image compositing out of reach for people
without this skillset. In this work, we aim to address this problem by making
composite images look realistic. To achieve this, we use Generative
Adversarial Networks (GANs). By training the network on images with a diverse
range of filters applied, together with special loss functions, the model
learns to decode the color histograms of the foreground and background parts
of the image and to blend the foreground object with the background. The hue
and saturation values of the image play an important role, as discussed in
this paper. To the best of our knowledge, this is the first work that uses
GANs for the task of image compositing. Currently, there is no benchmark
dataset available for image compositing, so we created one and will make it
publicly available for benchmarking. Experimental results on this dataset
show that our method outperforms all current state-of-the-art methods.
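The abstract sketches the training recipe at a high level: corrupt real images
with color filters to synthesize mismatched composites, then train a generator
to undo the corruption while a discriminator judges realism. Below is a
minimal, hypothetical PyTorch rendition of that setup; the layer sizes, the
ColorJitter filter ranges, and the L1-weighted loss are illustrative
assumptions, not the paper's published architecture.

```python
# Minimal sketch of GAN-based compositing as described in the abstract.
# All details (layer sizes, filter ranges, loss weights) are assumptions.
import torch
import torch.nn as nn
import torchvision.transforms as T

class Generator(nn.Module):
    """Takes a naive composite + foreground mask, outputs a harmonized image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, composite, mask):
        x = torch.cat([composite, mask], dim=1)  # (B, 4, H, W)
        out = self.net(x)
        # Re-color only the foreground region; the background passes through.
        return mask * out + (1 - mask) * composite

class Discriminator(nn.Module):
    """Scores whether an image looks like a real, un-composited photo."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, img):
        return self.net(img)

# "Diverse range of filters": perturb hue/saturation/brightness so the
# foreground's color statistics no longer match the background's.
color_filter = T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.5, hue=0.2)

def training_step(G, D, real, mask, opt_g, opt_d,
                  bce=nn.BCEWithLogitsLoss(), l1=nn.L1Loss()):
    # Build a synthetic "bad" composite by filtering the foreground region.
    composite = mask * color_filter(real) + (1 - mask) * real
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Discriminator update: real images vs. harmonized fakes.
    opt_d.zero_grad()
    fake = G(composite, mask).detach()
    d_loss = bce(D(real), ones) + bce(D(fake), zeros)
    d_loss.backward()
    opt_d.step()

    # Generator update: adversarial loss plus an L1 reconstruction term
    # (standing in for the abstract's "special loss functions").
    opt_g.zero_grad()
    fake = G(composite, mask)
    g_loss = bce(D(fake), ones) + 10.0 * l1(fake, real)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

A real pipeline would add device placement, optimizer schedules, and the
paper's actual losses; the point here is only the structure: filter-corrupted
composites in, adversarially harmonized composites out.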
Related papers
- DESOBAv2: Towards Large-scale Real-world Dataset for Shadow Generation [19.376935979734714]
In this work, we focus on generating a plausible shadow for the inserted foreground object to make the composite image more realistic.
To supplement the existing small-scale dataset DESOBA, we create a large-scale dataset called DESOBAv2.
arXiv Detail & Related papers (2023-08-19T10:21:23Z)
- Foreground Object Search by Distilling Composite Image Feature [15.771802337102837]
Foreground object search (FOS) aims to find compatible foreground objects for a given background image.
We observe that competitive retrieval performance can be achieved by using a discriminator to predict the compatibility of a composite image.
We propose a novel FOS method via distilling the composite image feature (DiscoFOS).
arXiv Detail & Related papers (2023-08-09T14:43:10Z)
- Scrape, Cut, Paste and Learn: Automated Dataset Generation Applied to Parcel Logistics [58.720142291102135]
We present a fully automated pipeline to generate a synthetic dataset for instance segmentation in four steps.
We first scrape images for the objects of interest from popular image search engines.
We compare three different methods for image selection: object-agnostic pre-processing, manual image selection, and CNN-based image selection.
arXiv Detail & Related papers (2022-10-18T12:49:04Z)
- Shape-guided Object Inpainting [84.18768707298105]
This work studies a new image inpainting task, i.e. shape-guided object inpainting.
We propose a new data preparation method and a novel Contextual Object Generator (CogNet) for the object inpainting task.
Experiments demonstrate that the proposed method can generate realistic objects that fit the context in terms of both visual appearance and semantic meanings.
arXiv Detail & Related papers (2022-04-16T17:19:11Z)
- Learning Co-segmentation by Segment Swapping for Retrieval and Discovery [67.6609943904996]
The goal of this work is to efficiently identify visually similar patterns from a pair of images.
We generate synthetic training pairs by selecting object segments in one image and copy-pasting them into another image (see the first sketch after this list).
We show our approach provides clear improvements for artwork details retrieval on the Brueghel dataset.
arXiv Detail & Related papers (2021-10-29T16:51:16Z)
- Making Images Real Again: A Comprehensive Survey on Deep Image Composition [34.09380539557308]
The image composition task can be decomposed into multiple sub-tasks, each of which targets one or more issues.
In this paper, we conduct a comprehensive survey of the sub-tasks and blending methods of image composition.
For each one, we summarize the existing methods, available datasets, and common evaluation metrics.
arXiv Detail & Related papers (2021-06-28T09:09:14Z)
- Deep Image Compositing [93.75358242750752]
We propose a new method which can automatically generate high-quality image composites without any user input.
Inspired by Laplacian pyramid blending (see the second sketch after this list), a densely connected multi-stream fusion network is proposed to effectively fuse information from the foreground and background images.
Experiments show that the proposed method can automatically generate high-quality composites and outperforms existing methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-11-04T06:12:24Z)
- Bridging Composite and Real: Towards End-to-end Deep Image Matting [88.79857806542006]
We study the roles of semantics and details for image matting.
We propose a novel Glance and Focus Matting network (GFM), which employs a shared encoder and two separate decoders.
Comprehensive empirical studies have demonstrated that GFM outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-30T10:57:13Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
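The segment-swapping entry above generates supervision by cut-and-paste alone.
As a point of reference, here is a minimal NumPy sketch of that kind of
copy-paste pair synthesis; the function name and the (top, left) placement
argument are hypothetical, chosen only for illustration.

```python
# Hypothetical copy-paste pair synthesis: cut a segmented object out of one
# image and paste it into another, yielding a pair with known correspondence.
import numpy as np

def copy_paste(src_img, src_mask, dst_img, top, left):
    """Paste the masked object from src_img into dst_img at (top, left).

    src_img: (h, w, 3) uint8, src_mask: (h, w) binary, dst_img: (H, W, 3).
    Assumes the pasted object fits within dst_img bounds.
    """
    out = dst_img.copy()
    h, w = src_mask.shape
    region = out[top:top + h, left:left + w]   # view into the output image
    region[src_mask > 0] = src_img[src_mask > 0]
    return out  # (out, src_mask, placement) supervise co-segmentation training
```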
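The Deep Image Compositing entry above names Laplacian pyramid blending as its
inspiration. For reference, here is a short NumPy/OpenCV sketch of the classic
technique (Burt and Adelson); this is the textbook algorithm, not the paper's
learned multi-stream fusion network.

```python
# Classic Laplacian pyramid blending: merge two images band by band using a
# Gaussian-blurred mask, which hides seams at every spatial frequency.
import cv2
import numpy as np

def gaussian_pyramid(img, levels):
    pyr = [img]
    for _ in range(levels):
        img = cv2.pyrDown(img)
        pyr.append(img)
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = []
    for i in range(levels):
        up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
        lp.append(gp[i] - up)  # band-pass detail at this scale
    lp.append(gp[-1])          # coarsest residual
    return lp

def blend(foreground, background, mask, levels=5):
    """Blend foreground over background with a soft mask.

    All inputs are float32 in [0, 1]; mask must be 3-channel so that it
    broadcasts against the color images.
    """
    lp_fg = laplacian_pyramid(foreground, levels)
    lp_bg = laplacian_pyramid(background, levels)
    gp_m = gaussian_pyramid(mask, levels)
    # Merge the coarsest residuals, then collapse the pyramid upward,
    # merging each detail band with the correspondingly blurred mask.
    out = lp_fg[-1] * gp_m[-1] + lp_bg[-1] * (1 - gp_m[-1])
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=(lp_fg[i].shape[1], lp_fg[i].shape[0]))
        out += lp_fg[i] * gp_m[i] + lp_bg[i] * (1 - gp_m[i])
    return np.clip(out, 0.0, 1.0)
```

The paper's multi-stream network learns this kind of multi-scale fusion end to
end rather than relying on fixed Gaussian filters.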
This list is automatically generated from the titles and abstracts of the papers on this site.