Exploiting Aliasing for Manga Restoration
- URL: http://arxiv.org/abs/2105.06830v1
- Date: Fri, 14 May 2021 13:47:04 GMT
- Title: Exploiting Aliasing for Manga Restoration
- Authors: Minshan Xie, Menghan Xia, Tien-Tsin Wong
- Abstract summary: We propose an innovative two-stage method to restore high-quality bitonal manga from degraded copies.
First, we predict the target resolution from the degraded manga via the Scale Estimation Network (SE-Net).
Then, at the target resolution, we restore the region-wise bitonal screentones via the Manga Restoration Network (MR-Net).
- Score: 14.978972444431832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a popular entertainment art form, manga enriches the details of line
drawings with bitonal screentones. However, manga resources on the Internet usually
show screentone artifacts because of inappropriate scanning/rescaling
resolution. In this paper, we propose an innovative two-stage method to restore
high-quality bitonal manga from degraded copies. Our key observation is that the
aliasing induced by downsampling bitonal screentones can be utilized as an
informative clue to infer the original resolution and screentones. First, we
predict the target resolution from the degraded manga via the Scale Estimation
Network (SE-Net) with a spatial voting scheme. Then, at the target resolution, we
restore the region-wise bitonal screentones via the Manga Restoration Network
(MR-Net) discriminatively, depending on the degree of degradation. Specifically,
the original screentones are directly restored in pattern-identifiable regions,
and visually plausible screentones are synthesized in pattern-agnostic regions.
Quantitative evaluation on synthetic data and visual assessment on real-world
cases illustrate the effectiveness of our method.
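The key observation above can be illustrated with a minimal NumPy sketch (this is not the paper's code; the screentone generator, the box downsampler, and all names are our own illustrative assumptions). A true screentone is strictly bitonal (pure black/white), but downsampling it at a factor misaligned with the pattern period produces intermediate gray values — exactly the aliasing the paper treats as a clue to the original resolution and pattern:

```python
import numpy as np

def make_screentone(size=64, period=4):
    """Synthetic bitonal dot-pattern screentone: strictly 0.0 or 1.0."""
    y, x = np.mgrid[0:size, 0:size]
    return ((x % period < period // 2) ^ (y % period < period // 2)).astype(float)

def box_downsample(img, factor):
    """Naive box-filter downsampling, enough to induce aliasing."""
    h, w = img.shape
    return (img[:h - h % factor, :w - w % factor]
            .reshape(h // factor, factor, w // factor, factor)
            .mean(axis=(1, 3)))

tone = make_screentone()
small = box_downsample(tone, 3)  # factor 3 is misaligned with period 4

# The original is strictly bitonal; the downsampled version is not:
# every 3x3 averaging window straddles a dot boundary here, so the
# output contains gray (aliased) pixels.
assert set(np.unique(tone)) <= {0.0, 1.0}
gray_fraction = np.mean((small > 0.0) & (small < 1.0))
print(f"fraction of aliased (gray) pixels after downsampling: {gray_fraction:.2f}")
```

The statistics of these gray values depend jointly on the screentone period and the downsampling factor, which is why a network such as SE-Net can, in principle, vote on the original scale from local patches.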
Related papers
- Sketch2Manga: Shaded Manga Screening from Sketch with Diffusion Models [26.010509997863196]
We propose a novel sketch-to-manga framework that first generates a color illustration from the sketch and then generates a screentoned manga.
Our method significantly outperforms existing methods in generating high-quality manga with shaded high-frequency screentones.
arXiv Detail & Related papers (2024-03-13T05:33:52Z)
- APISR: Anime Production Inspired Real-World Anime Super-Resolution [15.501488335115269]
We argue that video networks and datasets are not necessary for anime SR due to the repeated use of hand-drawn frames.
Instead, we propose an anime image collection pipeline by choosing the least compressed and the most informative frames from the video sources.
We evaluate our method through extensive experiments on the public benchmark, showing our method outperforms state-of-the-art anime dataset-trained approaches.
arXiv Detail & Related papers (2024-03-03T19:52:43Z)
- Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation [75.91455714614966]
We propose Scenimefy, a novel semi-supervised image-to-image translation framework.
Our approach guides the learning with structure-consistent pseudo paired data.
A patch-wise contrastive style loss is introduced to improve stylization and fine details.
arXiv Detail & Related papers (2023-08-24T17:59:50Z)
- Manga Rescreening with Interpretable Screentone Representation [21.638561901817866]
The process of adapting or repurposing manga pages is a time-consuming task that requires manga artists to manually work on every single screentone region.
We propose an automatic manga rescreening pipeline that aims to minimize the human effort involved in manga adaptation.
Our pipeline automatically recognizes screentone regions and generates novel screentones with newly specified characteristics.
arXiv Detail & Related papers (2023-06-07T02:55:09Z)
- Screentone-Aware Manga Super-Resolution Using DeepLearning [3.0638744222997034]
The large file size of high-quality images can hinder transmission and affect the viewing experience.
Traditional vectorization methods require a significant amount of manual parameter adjustment to process screentone.
Super-resolution can convert low-resolution images to high-resolution images while maintaining low transmission rates and providing high-quality results.
arXiv Detail & Related papers (2023-05-15T03:24:36Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of spatial correlation, which exhibits generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Screentone-Preserved Manga Retargeting [27.415654292345355]
We propose a method that synthesizes a rescaled manga image while retaining the screentone in each screened region.
The rescaled manga shares the same region-wise screentone correspondences with the original manga, which enables us to simplify the screentone problem.
arXiv Detail & Related papers (2022-03-07T13:48:15Z)
- CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z)
- ReGO: Reference-Guided Outpainting for Scenery Image [82.21559299694555]
Generative adversarial learning has advanced image outpainting by producing semantically consistent content for the given image.
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from its neighbors.
To prevent the style of the generated part from being affected by the reference images, a style ranking loss is proposed to augment the ReGO to synthesize style-consistent results.
arXiv Detail & Related papers (2021-06-20T02:34:55Z)
- High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-05-24T13:23:45Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.