Dunhuang murals contour generation network based on convolution and
self-attention fusion
- URL: http://arxiv.org/abs/2212.00935v1
- Date: Fri, 2 Dec 2022 02:47:30 GMT
- Title: Dunhuang murals contour generation network based on convolution and
self-attention fusion
- Authors: Baokai Liu, Fengjie He, Shiqiang Du, Kaiwu Zhang, Jianhua Wang
- Abstract summary: We propose a novel edge detector based on self-attention combined with convolution to generate line drawings of Dunhuang murals.
Compared with existing edge detection methods, firstly, a new residual self-attention and convolution mixed module (Ramix) is proposed to fuse local and global features in feature maps.
- Score: 3.118384520557952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dunhuang murals blend Chinese and other national styles, forming
a self-contained school of Chinese-style Buddhist art. They have very high
historical and cultural value and research significance. The lines of Dunhuang
murals are highly generalized and expressive, reflecting the characters'
distinctive personalities and complex inner emotions. The outline drawing of
murals is therefore of great significance to the study of Dunhuang culture.
Contour generation for Dunhuang murals is a form of image edge detection, an
important branch of computer vision that aims to extract salient contour
information from images. Convolution-based deep learning networks have
achieved good results in image edge extraction by exploring the contextual and
semantic features of images. However, as the receptive field enlarges, some
local detail information is lost, which prevents them from generating
reasonable outline drawings of murals. In this paper, we propose a novel edge
detector based on self-attention combined with convolution to generate line
drawings of Dunhuang murals. Compared with existing edge detection methods,
firstly, a new residual self-attention and convolution mixed module (Ramix) is
proposed to fuse local and global features in feature maps. Secondly, a novel
densely connected backbone extraction network is designed to efficiently
propagate rich edge feature information from shallow layers into deep layers.
Experiments on different public datasets show that our method generates
sharper and richer edge maps than existing methods. In addition, testing on
the Dunhuang mural dataset shows that our method achieves very competitive
performance.
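The abstract does not include code, but the core idea behind the Ramix module (a residual block that sums a convolutional branch capturing local features with a self-attention branch capturing global features) can be sketched in plain NumPy. Everything below is a simplified, hypothetical illustration, not the authors' implementation: the function names are invented, the attention is single-head, and a 1D convolution over a token sequence stands in for the paper's 2D convolutions over feature maps.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_branch(x, kernel):
    """Local features: 1D convolution along the sequence axis.

    x: (n, c) sequence of n feature vectors; kernel: (k,) shared weights.
    """
    n, _ = x.shape
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x, dtype=float)
    for i in range(n):
        out[i] = sum(kernel[j] * xp[i + j] for j in range(k))
    return out

def global_branch(x, wq, wk, wv):
    """Global features: single-head self-attention over all positions."""
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(x.shape[1]))
    return attn @ v

def ramix_block(x, kernel, wq, wk, wv):
    """Hypothetical residual fusion: input + local (conv) + global (attention)."""
    return x + local_branch(x, kernel) + global_branch(x, wq, wk, wv)
```

With all weights set to zero, both branches contribute nothing and the block reduces to the identity, which is the usual motivation for the residual connection: the fusion can only add information on top of the features already propagated.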
Related papers
- ARIN: Adaptive Resampling and Instance Normalization for Robust Blind
Inpainting of Dunhuang Cave Paintings [51.36804225712579]
In this work, we tackle a real-world setting: inpainting of images from Dunhuang caves.
The Dunhuang dataset consists of murals, half of which suffer from corrosion and aging.
We modify two different existing methods that are based upon state-of-the-art (SOTA) super resolution and deblurring networks.
We show that those can successfully inpaint and enhance these deteriorated cave paintings.
arXiv Detail & Related papers (2024-02-25T20:27:20Z)
- Multi-stage Progressive Reasoning for Dunhuang Murals Inpainting [5.167943379184235]
Dunhuang murals suffer from fading, breakage, surface brittleness and extensive peeling caused by prolonged environmental erosion.
In this paper, we design a multi-stage progressive reasoning network (MPR-Net) containing global to local receptive fields for murals inpainting.
Our method has been evaluated through both qualitative and quantitative experiments, and the results demonstrate that it outperforms state-of-the-art image inpainting methods.
arXiv Detail & Related papers (2023-05-10T05:10:00Z)
- Location-Free Camouflage Generation Network [82.74353843283407]
Camouflage is a common visual phenomenon that refers to hiding foreground objects in background images, making them briefly invisible to the human eye.
This paper proposes a novel Location-free Camouflage Generation Network (LCG-Net) that fuses high-level features of the foreground and background images and generates the result in a single inference.
Experiments show that our method achieves results as satisfactory as the state of the art in single-appearance regions, where objects are less likely to be completely invisible, but far exceeds the quality of the state of the art in multi-appearance regions.
arXiv Detail & Related papers (2022-03-18T10:33:40Z)
- JPGNet: Joint Predictive Filtering and Generative Network for Image Inpainting [21.936689731138213]
Image inpainting aims to restore the missing regions and make the recovery results identical to the originally complete image.
Existing works usually regard it as a pure generation problem and employ cutting-edge generative techniques to address it.
In this paper, we formulate image inpainting as a mix of two problems, i.e., predictive filtering and deep generation.
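The predictive-filtering half of that formulation can be sketched in a few lines of NumPy: each output pixel is a weighted sum of its local neighborhood, with a separate (in JPGNet, learned) kernel per pixel. This is a hypothetical toy version for illustration only; the function name, the fixed 3x3 neighborhood, and the hand-built kernels are all assumptions, and the generative half of the method is omitted entirely.

```python
import numpy as np

def predictive_filter(image, kernels):
    """Per-pixel predictive filtering over a 3x3 neighborhood.

    image:   (h, w) grayscale image.
    kernels: (h, w, 3, 3) one filter per output pixel (learned in JPGNet,
             supplied by hand in this toy sketch).
    """
    h, w = image.shape
    padded = np.pad(image, 1)  # zero-pad so border pixels have full patches
    out = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 3, x:x + 3]
            out[y, x] = (patch * kernels[y, x]).sum()
    return out
```

Setting every kernel to the identity (1 at the center, 0 elsewhere) reproduces the input image, which makes the role of the kernels easy to check.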
arXiv Detail & Related papers (2021-07-09T07:49:52Z)
- Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting [42.189768203036394]
We make the first attempt towards universal detection of deep inpainting, where the detection network can generalize well.
Our approach outperforms existing detection methods by a large margin and generalizes well to unseen deep inpainting techniques.
arXiv Detail & Related papers (2021-06-03T01:29:29Z)
- Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer [115.13853805292679]
Artistic style transfer aims at migrating the style from an example image to a content image.
Inspired by the common painting process of drawing a draft and revising the details, we introduce a novel feed-forward method named Laplacian Pyramid Network (LapStyle).
Our method can synthesize high quality stylized images in real time, where holistic style patterns are properly transferred.
arXiv Detail & Related papers (2021-04-12T11:53:53Z)
- Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to capture powerful representations in such a complex situation.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z)
- Texture Memory-Augmented Deep Patch-Based Image Inpainting [121.41395272974611]
We propose a new deep inpainting framework where texture generation is guided by a texture memory of patch samples extracted from unmasked regions.
The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network.
The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks.
arXiv Detail & Related papers (2020-09-28T12:09:08Z)
- Very Long Natural Scenery Image Prediction by Outpainting [96.8509015981031]
Outpainting receives less attention due to two challenges.
The first challenge is how to keep spatial and content consistency between the generated images and the original input.
The second challenge is how to maintain high quality in the generated results.
arXiv Detail & Related papers (2019-12-29T16:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.