Image Blending Algorithm with Automatic Mask Generation
- URL: http://arxiv.org/abs/2306.05382v3
- Date: Wed, 29 Nov 2023 06:49:12 GMT
- Title: Image Blending Algorithm with Automatic Mask Generation
- Authors: Haochen Xue, Mingyu Jin, Chong Zhang, Yuxuan Huang, Qian Weng, Xiaobo
Jin
- Abstract summary: We propose a new image blending method with automatic mask generation.
It combines semantic object detection and segmentation with mask generation to produce deeply blended images.
Results on publicly available datasets show that our method outperforms other classical image blending algorithms.
- Score: 9.785996682757753
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, image blending has gained popularity for its ability to
create visually stunning content. However, current image blending algorithms
suffer from two main problems: manually creating blending masks requires
substantial manual effort, and existing algorithms cannot effectively resolve
brightness distortion and low resolution. To this end, we propose a new image
blending method with automatic mask generation: it combines semantic object
detection and segmentation with mask generation to produce deeply blended
images, and uses our proposed saturation loss together with a two-stage
iteration of the PAN algorithm to correct brightness distortion and
low-resolution issues. Results on publicly available datasets show that our
method outperforms other classical image blending algorithms on various
performance metrics, including PSNR and SSIM.
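As a concrete illustration of the pipeline described above, the sketch below strings together the two generic building blocks the abstract names: automatic mask generation from a pretrained segmentation model, followed by gradient-domain (Poisson) blending. The segmentation backbone (torchvision's Mask R-CNN), the thresholds, and the file names are stand-ins chosen for illustration, and OpenCV's seamlessClone replaces the paper's blending stage; the proposed saturation loss and two-stage PAN iteration are the paper's own contributions and are not reproduced here.

    # Illustrative sketch only (not the paper's implementation): automatic mask
    # generation with a pretrained segmentation model, then Poisson blending.
    # The paper's saturation loss and two-stage PAN iteration are NOT included.
    import cv2
    import numpy as np
    import torch
    import torchvision


    def generate_mask(image_bgr, score_thresh=0.7):
        """Return a binary mask (uint8, 0/255) for the most confident detected
        object, using torchvision's Mask R-CNN as a stand-in segmenter."""
        model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
        model.eval()
        rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            out = model([tensor])[0]
        keep = out["scores"] > score_thresh
        if not keep.any():
            return None
        soft_mask = out["masks"][keep][0, 0].numpy()  # (H, W), values in [0, 1]
        return (soft_mask > 0.5).astype(np.uint8) * 255


    def blend(source_bgr, target_bgr, mask, center):
        """Gradient-domain (Poisson) blending of the masked source region into
        the target image around the given (x, y) center."""
        return cv2.seamlessClone(source_bgr, target_bgr, mask, center, cv2.NORMAL_CLONE)


    if __name__ == "__main__":
        src = cv2.imread("source.jpg")   # hypothetical object image
        dst = cv2.imread("target.jpg")   # hypothetical background image
        mask = generate_mask(src)
        if mask is not None:
            center = (dst.shape[1] // 2, dst.shape[0] // 2)
            cv2.imwrite("blended.jpg", blend(src, dst, mask, center))

In the paper's full method, the blended output would additionally be optimized with the saturation loss and super-resolved through two iterations of the PAN network; this sketch stops at the classical blending stage.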
Related papers
- Multi-Feature Aggregation in Diffusion Models for Enhanced Face Super-Resolution [6.055006354743854]
We develop an algorithm that utilizes a low-resolution image combined with features extracted from multiple low-quality images to generate a super-resolved image.
Unlike other algorithms, our approach recovers facial features without explicitly providing attribute information.
This is the first time multi-features combined with low-resolution images are used as conditioners to generate more reliable super-resolution images.
arXiv Detail & Related papers (2024-08-27T20:08:33Z)
- BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion [61.90969199199739]
BrushNet is a novel plug-and-play dual-branch model engineered to embed pixel-level masked image features into any pre-trained DM.
BrushNet demonstrates superior performance over existing models across seven key metrics, including image quality, mask region preservation, and textual coherence.
arXiv Detail & Related papers (2024-03-11T17:59:31Z)
- Variance-insensitive and Target-preserving Mask Refinement for Interactive Image Segmentation [68.16510297109872]
Point-based interactive image segmentation can ease the burden of mask annotation in applications such as semantic segmentation and image editing.
We introduce a novel method, Variance-Insensitive and Target-Preserving Mask Refinement, to enhance segmentation quality with fewer user inputs.
Experiments on GrabCut, Berkeley, SBD, and DAVIS datasets demonstrate our method's state-of-the-art performance in interactive image segmentation.
arXiv Detail & Related papers (2023-12-22T02:31:31Z)
- GuidedMixup: An Efficient Mixup Strategy Guided by Saliency Maps [6.396288020763144]
We propose GuidedMixup, which aims to retain the salient regions in mixup images with low computational overhead.
We develop an efficient pairing algorithm that seeks to minimize the conflict between the salient regions of paired images.
Experiments on several datasets demonstrate that GuidedMixup provides a good trade-off between augmentation overhead and generalization performance.
arXiv Detail & Related papers (2023-06-29T00:55:51Z)
- Improving Masked Autoencoders by Learning Where to Mask [65.89510231743692]
Masked image modeling is a promising self-supervised learning method for visual data.
We present AutoMAE, a framework that uses Gumbel-Softmax to interlink an adversarially-trained mask generator and a mask-guided image modeling process.
In our experiments, AutoMAE is shown to provide effective pretraining models on standard self-supervised benchmarks and downstream tasks.
arXiv Detail & Related papers (2023-03-12T05:28:55Z)
- Barbershop: GAN-based Image Compositing using Segmentation Masks [40.85660781133709]
We present a novel solution to image blending, particularly for the problem of hairstyle transfer, based on GAN-inversion.
Our results demonstrate a significant improvement over the current state of the art in a user study, with users preferring our blending solution over 95 percent of the time.
arXiv Detail & Related papers (2021-06-02T23:20:43Z)
- Image Inpainting with Edge-guided Learnable Bidirectional Attention Maps [85.67745220834718]
We present an edge-guided learnable bidirectional attention map (Edge-LBAM) for improving image inpainting of irregular holes.
Our Edge-LBAM method contains dual procedures, including structure-aware mask updating guided by predicted edges.
Extensive experiments show that our Edge-LBAM is effective in generating coherent image structures and preventing color discrepancy and blurriness.
arXiv Detail & Related papers (2021-04-25T07:25:16Z)
- Bridging the Visual Gap: Wide-Range Image Blending [16.464837892640812]
We introduce an effective deep-learning model to realize wide-range image blending.
We experimentally demonstrate that our proposed method is able to produce visually appealing results.
arXiv Detail & Related papers (2021-03-28T15:07:45Z)
- Deep Image Compositing [93.75358242750752]
We propose a new method which can automatically generate high-quality image composites without any user input.
Inspired by Laplacian pyramid blending, a dense-connected multi-stream fusion network is proposed to effectively fuse the information from the foreground and background images.
Experiments show that the proposed method can automatically generate high-quality composites and outperforms existing methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-11-04T06:12:24Z)
- Single Image Brightening via Multi-Scale Exposure Fusion with Hybrid Learning [48.890709236564945]
A small ISO and a short exposure time are usually used to capture an image in backlit or low-light conditions.
In this paper, a single image brightening algorithm is introduced to brighten such an image.
The proposed algorithm includes a unique hybrid learning framework to generate two virtual images with large exposure times.
arXiv Detail & Related papers (2020-07-04T08:23:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.