Multi-stage Progressive Reasoning for Dunhuang Murals Inpainting
- URL: http://arxiv.org/abs/2305.05902v1
- Date: Wed, 10 May 2023 05:10:00 GMT
- Title: Multi-stage Progressive Reasoning for Dunhuang Murals Inpainting
- Authors: Wenjie Liu, Baokai Liu, Shiqiang Du, Yuqing Shi, Jiacheng Li, and
Jianhua Wang
- Abstract summary: Dunhuang murals suffer from fading, breakage, surface brittleness and extensive peeling caused by prolonged environmental erosion.
In this paper, we design a multi-stage progressive reasoning network (MPR-Net) with global-to-local receptive fields for mural inpainting.
Our method has been evaluated through both qualitative and quantitative experiments, and the results demonstrate that it outperforms state-of-the-art image inpainting methods.
- Score: 5.167943379184235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dunhuang murals suffer from fading, breakage, surface brittleness and
extensive peeling caused by prolonged environmental erosion. Image inpainting
techniques are widely used in digital mural restoration, but murals with
large-area damage remain challenging for any inpainting method. In this paper,
we design a multi-stage
progressive reasoning network (MPR-Net) with global-to-local receptive fields
for mural inpainting. This network is capable of recursively inferring the
damage boundary and progressively tightening the regional texture
constraints. Moreover, to adaptively fuse the rich information present at
multiple scales of the murals, a multi-scale feature aggregation module (MFA) is
designed to select the most significant features. The model's execution mirrors
the workflow of a mural restorer: it first inpaints the global structure of the
damaged mural and then refines the local texture details. Our method has been
evaluated through both qualitative and
quantitative experiments, and the results demonstrate that it outperforms
state-of-the-art image inpainting methods.
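As a rough illustration of this global-to-local idea (not the MPR-Net architecture itself, which is a learned network), the recursive boundary inference can be sketched as a loop that repeatedly fills the one-pixel rim of the damaged region from its known neighbours, tightening the mask at each stage. The function name and the simple neighbour-averaging fill below are illustrative assumptions:

```python
import numpy as np

def progressive_inpaint(img, mask, n_stages=8):
    """Toy sketch of stage-wise boundary inpainting: at each stage, fill
    only the damaged pixels that border known pixels, then shrink the
    damage mask (coarse structure first, local detail last)."""
    img = img.astype(float).copy()
    mask = mask.astype(bool).copy()  # True = damaged / unknown pixel
    for _ in range(n_stages):
        if not mask.any():
            break
        known = ~mask
        # Mark damaged pixels that touch at least one known 4-neighbour.
        touch = np.zeros_like(mask)
        touch[1:, :] |= known[:-1, :]
        touch[:-1, :] |= known[1:, :]
        touch[:, 1:] |= known[:, :-1]
        touch[:, :-1] |= known[:, 1:]
        boundary = mask & touch
        ys, xs = np.nonzero(boundary)
        for y, x in zip(ys, xs):
            # Fill from the average of the known 4-neighbours.
            vals = []
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and known[ny, nx]:
                    vals.append(img[ny, nx])
            img[y, x] = np.mean(vals)
        mask &= ~boundary  # tighten the damaged region for the next stage
    return img, mask
```

A real network replaces the neighbour average with learned feature inference, but the stage structure, i.e. boundary in, mask shrinking per stage, is the same progressive pattern the abstract describes.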
Related papers
- Neural-Polyptych: Content Controllable Painting Recreation for Diverse Genres [30.83874057768352]
We present a unified framework, Neural-Polyptych, to facilitate the creation of expansive, high-resolution paintings.
We have designed a multi-scale GAN-based architecture to decompose the generation process into two parts.
We validate our approach to diverse genres of both Eastern and Western paintings.
arXiv Detail & Related papers (2024-09-29T12:46:00Z)
- ARIN: Adaptive Resampling and Instance Normalization for Robust Blind Inpainting of Dunhuang Cave Paintings [51.36804225712579]
In this work, we tackle a real-world setting: inpainting of images from Dunhuang caves.
The Dunhuang dataset consists of murals, half of which suffer from corrosion and aging.
We modify two existing methods based on state-of-the-art (SOTA) super-resolution and deblurring networks.
We show that those can successfully inpaint and enhance these deteriorated cave paintings.
arXiv Detail & Related papers (2024-02-25T20:27:20Z)
- HD-Painter: High-Resolution and Prompt-Faithful Text-Guided Image Inpainting with Diffusion Models [59.01600111737628]
HD-Painter is a training-free approach that accurately follows prompts and scales coherently to high-resolution image inpainting.
To this end, we design the Prompt-Aware Introverted Attention (PAIntA) layer enhancing self-attention scores.
Our experiments demonstrate that HD-Painter surpasses existing state-of-the-art approaches quantitatively and qualitatively.
arXiv Detail & Related papers (2023-12-21T18:09:30Z)
- Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z)
- Dunhuang murals contour generation network based on convolution and self-attention fusion [3.118384520557952]
We propose a novel edge detector based on self-attention combined with convolution to generate line drawings of Dunhuang murals.
Compared with existing edge detection methods, a new residual self-attention and convolution mixed module (Ramix) is proposed to fuse local and global features in feature maps.
arXiv Detail & Related papers (2022-12-02T02:47:30Z)
- Line Drawing Guided Progressive Inpainting of Mural Damage [18.768636785377645]
We propose a line drawing guided progressive mural inpainting method.
It divides the inpainting process into two steps: structure reconstruction and color correction.
The proposed approach is evaluated against the current state-of-the-art image inpainting methods.
arXiv Detail & Related papers (2022-11-12T12:22:11Z)
- Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area.
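The PAR definition quoted above is a simple area ratio, so it can be computed directly from two binary masks; this sketch assumes boolean mask inputs and a hypothetical function name, and the paper's exact formulation may differ:

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpainted_mask):
    """PAR as described in the summary: the fraction of the inpainted
    area that a segmentation model flagged as perceptually objectionable.
    Both inputs are boolean arrays of the same shape."""
    artifact = np.asarray(artifact_mask, dtype=bool)
    inpainted = np.asarray(inpainted_mask, dtype=bool)
    area = inpainted.sum()
    if area == 0:
        return 0.0  # nothing was inpainted, so no artifact ratio
    # Only artifact pixels inside the inpainted region count.
    return float((artifact & inpainted).sum() / area)
```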
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
- Cylin-Painting: Seamless 360° Panoramic Image Outpainting and Beyond [136.18504104345453]
We present a Cylin-Painting framework that involves meaningful collaborations between inpainting and outpainting.
The proposed algorithm can be effectively extended to other panoramic vision tasks, such as object detection, depth estimation, and image super-resolution.
arXiv Detail & Related papers (2022-04-18T21:18:49Z)
- In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
arXiv Detail & Related papers (2021-04-01T17:59:10Z)
- DeepGIN: Deep Generative Inpainting Network for Extreme Image Inpainting [45.39552853543588]
We propose a deep generative inpainting network, named DeepGIN, to handle various types of masked images.
Our model is capable of completing masked images in the wild.
arXiv Detail & Related papers (2020-08-17T09:30:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.