Screentone-Aware Manga Super-Resolution Using Deep Learning
- URL: http://arxiv.org/abs/2305.08325v1
- Date: Mon, 15 May 2023 03:24:36 GMT
- Title: Screentone-Aware Manga Super-Resolution Using Deep Learning
- Authors: Chih-Yuan Yao, Husan-Ting Chou, Yu-Sheng Lin, Kuo-wei Chen
- Abstract summary: The large file sizes of high-quality images can hinder transmission and degrade the viewing experience.
Traditional vectorization methods require a significant amount of manual parameter adjustment to process screentone.
Super-resolution can convert low-resolution images to high-resolution images while maintaining low transmission rates and providing high-quality results.
- Score: 3.0638744222997034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Manga, a widely beloved form of entertainment around the world, has
shifted from paper to electronic screens with the proliferation of handheld
devices. However, as screens improve and the demand for image quality rises,
the large file sizes of high-quality images can hinder transmission and degrade
the viewing experience. Traditional vectorization methods require a significant
amount of manual parameter adjustment to process screentones. With deep
learning, lines and screentones can be extracted automatically and image
resolution can be enhanced. Super-resolution converts low-resolution images to
high-resolution images, maintaining low transmission rates while providing
high-quality results. However, conventional super-resolution methods for manga
do not account for the meaning of screentone density, so upscaling alters that
density and loses its meaning. In this paper, we address this issue by first
classifying the regions and lines of different screentones in the manga with a
deep learning algorithm, then applying a corresponding super-resolution model
to each block according to its classification, and finally combining the
blocks to obtain images that preserve the meaning of the screentones and lines
while improving resolution.
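The pipeline the abstract describes (classify each block's screentone, route it to a class-specific super-resolution model, then stitch the results) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intensity-bucket classifier and the nearest-neighbour "models" are hypothetical placeholders standing in for the trained networks.

```python
import numpy as np

def classify_blocks(img, block=16, n_classes=3):
    # Hypothetical stand-in for the screentone classifier:
    # bucket each block by its mean intensity (img values in [0, 1]).
    h, w = img.shape
    labels = np.zeros((h // block, w // block), dtype=int)
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i*block:(i+1)*block, j*block:(j+1)*block]
            labels[i, j] = min(int(patch.mean() * n_classes), n_classes - 1)
    return labels

def upscale_nearest(patch, scale):
    # Placeholder "SR model": nearest-neighbour upsampling.
    return np.repeat(np.repeat(patch, scale, axis=0), scale, axis=1)

# One model per screentone class; in the paper these would be
# separately trained super-resolution networks.
MODELS = {0: upscale_nearest, 1: upscale_nearest, 2: upscale_nearest}

def screentone_aware_sr(img, block=16, scale=2):
    labels = classify_blocks(img, block)
    h, w = img.shape
    out = np.zeros((h * scale, w * scale), dtype=img.dtype)
    for i in range(h // block):
        for j in range(w // block):
            patch = img[i*block:(i+1)*block, j*block:(j+1)*block]
            up = MODELS[labels[i, j]](patch, scale)
            out[i*block*scale:(i+1)*block*scale,
                j*block*scale:(j+1)*block*scale] = up
    return out
```

The key design point is the per-block routing: because each block is enhanced by a model matched to its screentone class, the upscaled output can keep the region's tone density consistent instead of blurring it with a single generic model.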
Related papers
- Sketch2Manga: Shaded Manga Screening from Sketch with Diffusion Models [26.010509997863196]
We propose a novel sketch-to-manga framework that first generates a color illustration from the sketch and then generates a screentoned manga.
Our method significantly outperforms existing methods in generating high-quality manga with shaded high-frequency screentones.
arXiv Detail & Related papers (2024-03-13T05:33:52Z)
- EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution [0.0]
Recent reference-based image super-resolution (RefSR) work has improved on state-of-the-art deep methods by introducing attention mechanisms to enhance low-resolution images.
We propose a deep search with more efficient memory usage that significantly reduces the number of image patches.
arXiv Detail & Related papers (2023-10-02T17:41:56Z)
- Towards Robust Scene Text Image Super-resolution via Explicit Location Enhancement [59.66539728681453]
Scene text image super-resolution (STISR) aims to improve image quality while boosting downstream scene text recognition accuracy.
Most existing methods treat the foreground (character regions) and background (non-character regions) equally in the forward process.
We propose a novel method LEMMA that explicitly models character regions to produce high-level text-specific guidance for super-resolution.
arXiv Detail & Related papers (2023-07-19T05:08:47Z)
- Manga Rescreening with Interpretable Screentone Representation [21.638561901817866]
The process of adapting or repurposing manga pages is a time-consuming task that requires manga artists to manually work on every single screentone region.
We propose an automatic manga rescreening pipeline that aims to minimize the human effort involved in manga adaptation.
Our pipeline automatically recognizes screentone regions and generates novel screentones with newly specified characteristics.
arXiv Detail & Related papers (2023-06-07T02:55:09Z)
- Any-resolution Training for High-resolution Image Synthesis [55.19874755679901]
Generative models operate at fixed resolution, even though natural images come in a variety of sizes.
We argue that every pixel matters and create datasets with variable-size images, collected at their native resolutions.
We introduce continuous-scale training, a process that samples patches at random scales to train a new generator with variable output resolutions.
arXiv Detail & Related papers (2022-04-14T17:59:31Z)
- Screentone-Preserved Manga Retargeting [27.415654292345355]
We propose a method that synthesizes a rescaled manga image while retaining the screentone in each screened region.
The rescaled manga shares the same region-wise screentone correspondences with the original manga, which enables us to simplify the screentone problem.
arXiv Detail & Related papers (2022-03-07T13:48:15Z)
- Exploiting Aliasing for Manga Restoration [14.978972444431832]
We propose an innovative two-stage method to restore quality bitonal manga from degraded ones.
First, we predict the target resolution from the degraded manga via the Scale Estimation Network (SE-Net).
Then, at the target resolution, we restore the region-wise bitonal screentones via the Manga Restoration Network (MR-Net).
arXiv Detail & Related papers (2021-05-14T13:47:04Z)
- Semantic Layout Manipulation with High-Resolution Sparse Attention [106.59650698907953]
We tackle the problem of semantic image layout manipulation, which aims to manipulate an input image by editing its semantic label map.
A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.
We propose a high-resolution sparse attention module that effectively transfers visual details to new layouts at a resolution up to 512x512.
arXiv Detail & Related papers (2020-12-14T06:50:43Z)
- Multi-Density Sketch-to-Image Translation Network [65.4028451067947]
We propose the first multi-level density sketch-to-image translation framework, which allows the input sketch to cover a wide range from rough object outlines to micro structures.
Our method has been successfully verified on various datasets for different applications including face editing, multi-modal sketch-to-photo translation, and anime colorization.
arXiv Detail & Related papers (2020-06-18T16:21:04Z)
- Unsupervised Real Image Super-Resolution via Generative Variational AutoEncoder [47.53609520395504]
We revisit classic example-based image super-resolution approaches and propose a novel generative model for perceptual image super-resolution.
We propose a joint image denoising and super-resolution model via Variational AutoEncoder.
With the aid of a discriminator, an additional super-resolution subnetwork is attached to super-resolve the denoised image with photo-realistic visual quality.
arXiv Detail & Related papers (2020-04-27T13:49:36Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.