Towards Seamless Borders: A Method for Mitigating Inconsistencies in Image Inpainting and Outpainting
- URL: http://arxiv.org/abs/2506.12530v1
- Date: Sat, 14 Jun 2025 15:02:56 GMT
- Title: Towards Seamless Borders: A Method for Mitigating Inconsistencies in Image Inpainting and Outpainting
- Authors: Xingzhong Hou, Jie Wu, Boxiao Liu, Yi Zhang, Guanglu Song, Yunpeng Liu, Yu Liu, Haihang You
- Abstract summary: We propose two novel methods to address discrepancy issues in diffusion-based inpainting models. First, we introduce a modified Variational Autoencoder that corrects color imbalances, ensuring that the final inpainted results are free of color mismatches. Second, we propose a two-step training strategy that improves the blending of generated and existing image content during the diffusion process.
- Score: 22.46566055053259
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image inpainting is the task of reconstructing missing or damaged parts of an image in a way that seamlessly blends with the surrounding content. With the advent of advanced generative models, especially diffusion models and generative adversarial networks, inpainting has achieved remarkable improvements in visual quality and coherence. However, achieving seamless continuity remains a significant challenge. In this work, we propose two novel methods to address discrepancy issues in diffusion-based inpainting models. First, we introduce a modified Variational Autoencoder that corrects color imbalances, ensuring that the final inpainted results are free of color mismatches. Second, we propose a two-step training strategy that improves the blending of generated and existing image content during the diffusion process. Through extensive experiments, we demonstrate that our methods effectively reduce discontinuity and produce high-quality inpainting results that are coherent and visually appealing.
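The abstract's blending goal relates to a common baseline in diffusion-based inpainting: at each denoising step, the known region is re-noised to the current noise level and composited with the generated region, so both share the same noise statistics. A minimal NumPy sketch of that per-step compositing follows; `denoise_fn` and `diffusion_inpaint_step` are illustrative stand-ins, not the paper's actual method.

```python
import numpy as np

def diffusion_inpaint_step(x_t, known_image, mask, denoise_fn, noise_level):
    """One illustrative compositing step for diffusion inpainting.

    x_t:         current noisy image/latent
    known_image: clean image with valid content outside the hole
    mask:        1 inside the hole (to be generated), 0 where content is known
    denoise_fn:  stand-in for the diffusion model's denoising call
    noise_level: scale of noise at the current timestep
    """
    # Denoise the full array with the model
    x_denoised = denoise_fn(x_t)
    # Re-noise the known content to the current noise level so the two
    # regions have matching noise statistics when composited
    noised_known = known_image + noise_level * np.random.randn(*known_image.shape)
    # Keep generated content in the hole, (re-noised) known content elsewhere
    return mask * x_denoised + (1 - mask) * noised_known
```

Seams arise precisely when the two regions carry mismatched statistics at composite time, which is the discontinuity the paper's VAE correction and two-step training strategy target.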
Related papers
- IN2OUT: Fine-Tuning Video Inpainting Model for Video Outpainting Using Hierarchical Discriminator [3.6350564275444177]
Video outpainting presents a unique challenge of extending the borders while maintaining consistency with the given content. We develop a specialized outpainting loss function that leverages both local and global features of the discriminator. Our proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2025-08-01T08:15:14Z) - GuidPaint: Class-Guided Image Inpainting with Diffusion Models [1.1902474395094222]
We propose GuidPaint, a training-free, class-guided image inpainting framework. We show that GuidPaint achieves clear improvements over existing context-aware inpainting methods in both qualitative and quantitative evaluations.
arXiv Detail & Related papers (2025-07-29T09:36:52Z) - HarmonPaint: Harmonized Training-Free Diffusion Inpainting [58.870763247178495]
HarmonPaint is a training-free inpainting framework that seamlessly integrates with the attention mechanisms of diffusion models. By leveraging masking strategies within self-attention, HarmonPaint ensures structural fidelity without model retraining or fine-tuning.
arXiv Detail & Related papers (2025-07-22T16:14:35Z) - ESDiff: Encoding Strategy-inspired Diffusion Model with Few-shot Learning for Color Image Inpainting [5.961957277931777]
Image inpainting is a technique used to restore missing or damaged regions of an image. In this paper, we propose an encoding strategy-inspired diffusion model with few-shot learning for color image inpainting. Experimental results indicate that our method exceeds current techniques in quantitative metrics.
arXiv Detail & Related papers (2025-04-24T13:08:36Z) - MVIP-NeRF: Multi-view 3D Inpainting on NeRF Scenes via Diffusion Prior [65.05773512126089]
NeRF inpainting methods built upon explicit RGB and depth 2D inpainting supervisions are inherently constrained by the capabilities of their underlying 2D inpainters.
We propose MVIP-NeRF that harnesses the potential of diffusion priors for NeRF inpainting, addressing both appearance and geometry aspects.
Our experimental results show better appearance and geometry recovery than previous NeRF inpainting methods.
arXiv Detail & Related papers (2024-05-05T09:04:42Z) - Fill in the ____ (a Diffusion-based Image Inpainting Pipeline) [0.0]
Inpainting is the process of taking an image and generating lost or intentionally occluded portions.
Modern inpainting techniques have shown remarkable ability in generating sensible completions.
These existing models leave a critical gap that this work addresses: the ability to prompt and control what exactly is generated.
arXiv Detail & Related papers (2024-03-24T05:26:55Z) - BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion [61.90969199199739]
BrushNet is a novel plug-and-play dual-branch model engineered to embed pixel-level masked image features into any pre-trained DM.
Experiments demonstrate BrushNet's superior performance over existing models across seven key metrics, including image quality, mask region preservation, and textual coherence.
arXiv Detail & Related papers (2024-03-11T17:59:31Z) - Towards Enhanced Image Inpainting: Mitigating Unwanted Object Insertion and Preserving Color Consistency [78.0488707697235]
We propose a post-processing approach dubbed ASUKA (Aligned Stable inpainting with UnKnown Areas prior) to improve inpainting models. A Masked Auto-Encoder (MAE) supplies reconstruction-based priors that mitigate object hallucination, and a specialized VAE decoder treats latent-to-image decoding as a local task to preserve color consistency.
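ASUKA's color-consistency goal can be illustrated with a much cruder, model-free stand-in: estimate the global intensity offset the generator introduced, using the known pixels where ground truth exists, and subtract it from the generated region. The single-channel `correct_color_shift` below is hypothetical and only a sketch, not ASUKA's learned VAE decoder.

```python
import numpy as np

def correct_color_shift(result, reference, mask):
    """Remove a global intensity offset from the generated region.

    result:    inpainted image (may carry a color/intensity shift)
    reference: original image, valid where mask == 0
    mask:      1 in the generated (hole) region, 0 in the known region
    """
    known = mask == 0
    # Measure the offset on known pixels, where ground truth is available
    offset = result[known].mean() - reference[known].mean()
    # Subtract the estimated offset from the generated region only
    corrected = result.copy()
    corrected[mask == 1] -= offset
    return corrected
```

A per-channel version of the same idea would estimate one offset per color channel; learned decoders handle spatially varying mismatches that this global correction cannot.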
arXiv Detail & Related papers (2023-12-08T05:08:06Z) - Diverse Inpainting and Editing with GAN Inversion [4.234367850767171]
Recent inversion methods have shown that real images can be inverted into StyleGAN's latent space.
In this paper, we tackle an even more difficult task, inverting erased images into GAN's latent space for realistic inpaintings and editings.
arXiv Detail & Related papers (2023-07-27T17:41:36Z) - GRIG: Few-Shot Generative Residual Image Inpainting [27.252855062283825]
We present a novel few-shot generative residual image inpainting method that produces high-quality inpainting results.
The core idea is to propose an iterative residual reasoning method that incorporates Convolutional Neural Networks (CNNs) for feature extraction.
We also propose a novel forgery-patch adversarial training strategy to create faithful textures and detailed appearances.
arXiv Detail & Related papers (2023-04-24T12:19:06Z) - SinDiffusion: Learning a Diffusion Model from a Single Natural Image [159.4285444680301]
We present SinDiffusion, leveraging denoising diffusion models to capture the internal distribution of patches from a single natural image.
It is based on two core designs. First, SinDiffusion is trained with a single model at a single scale instead of multiple models with progressive growing of scales.
Second, we identify that a patch-level receptive field of the diffusion network is crucial and effective for capturing the image's patch statistics.
arXiv Detail & Related papers (2022-11-22T18:00:03Z) - In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
arXiv Detail & Related papers (2021-04-01T17:59:10Z) - Very Long Natural Scenery Image Prediction by Outpainting [96.8509015981031]
Outpainting has received less attention than inpainting due to two challenges.
First challenge is how to keep the spatial and content consistency between generated images and original input.
Second challenge is how to maintain high quality in generated results.
arXiv Detail & Related papers (2019-12-29T16:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.