Deep Geometrized Cartoon Line Inbetweening
- URL: http://arxiv.org/abs/2309.16643v1
- Date: Thu, 28 Sep 2023 17:50:05 GMT
- Title: Deep Geometrized Cartoon Line Inbetweening
- Authors: Li Siyao, Tianpei Gu, Weiye Xiao, Henghui Ding, Ziwei Liu, Chen Change Loy
- Abstract summary: Inbetweening involves generating intermediate frames between two black-and-white line drawings.
Existing frame interpolation methods that rely on matching and warping whole images are unsuitable for line inbetweening.
We propose AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem.
Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening.
- Score: 98.35956631655357
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We aim to address a significant but understudied problem in the anime
industry, namely the inbetweening of cartoon line drawings. Inbetweening
involves generating intermediate frames between two black-and-white line
drawings and is a time-consuming and expensive process that can benefit from
automation. However, existing frame interpolation methods that rely on matching
and warping whole raster images are unsuitable for line inbetweening and often
produce blurring artifacts that damage the intricate line structures. To
preserve the precision and detail of the line drawings, we propose a new
approach, AnimeInbet, which geometrizes raster line drawings into graphs of
endpoints and reframes the inbetweening task as a graph fusion problem with
vertex repositioning. Our method can effectively capture the sparsity and
unique structure of line drawings while preserving the details during
inbetweening. This is made possible via our novel modules, i.e., vertex
geometric embedding, a vertex correspondence Transformer, an effective
mechanism for vertex repositioning and a visibility predictor. To train our
method, we introduce MixamoLine240, a new dataset of line drawings with ground
truth vectorization and matching labels. Our experiments demonstrate that
AnimeInbet synthesizes high-quality, clean, and complete intermediate line
drawings, outperforming existing methods quantitatively and qualitatively,
especially in cases with large motions. Data and code are available at
https://github.com/lisiyao21/AnimeInbet.
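The abstract's reframing of inbetweening as graph fusion with vertex repositioning can be illustrated with a toy sketch. Here a line drawing is geometrized as a graph whose vertices are stroke endpoints and whose edges are line segments; given a vertex correspondence between the two key frames (supplied by hand below, whereas AnimeInbet learns it with a Transformer) and a visibility prediction, the inbetween frame is produced by linearly repositioning matched vertices. All names and data in this snippet are hypothetical illustrations, not the paper's actual API.

```python
def inbetween(verts0, verts1, edges, matches, visible, t=0.5):
    """Reposition matched, visible vertices; keep edges whose ends survive.

    verts0, verts1: {vertex_id: (x, y)} for the two key frames.
    edges: list of (id, id) strokes in frame 0.
    matches: {id_in_frame0: id_in_frame1} vertex correspondence.
    visible: set of frame-0 vertex ids predicted visible in the inbetween.
    """
    mid = {}
    for v0, v1 in matches.items():
        if v0 not in visible:
            continue  # occluded vertices are dropped from the fused graph
        (x0, y0), (x1, y1) = verts0[v0], verts1[v1]
        mid[v0] = ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
    mid_edges = [(a, b) for a, b in edges if a in mid and b in mid]
    return mid, mid_edges

# A single stroke from (0, 0)-(2, 0) moving to (1, 1)-(3, 1):
verts0 = {"a": (0.0, 0.0), "b": (2.0, 0.0)}
verts1 = {"p": (1.0, 1.0), "q": (3.0, 1.0)}
mid, mid_edges = inbetween(verts0, verts1, edges=[("a", "b")],
                           matches={"a": "p", "b": "q"}, visible={"a", "b"})
print(mid)        # {'a': (0.5, 0.5), 'b': (2.5, 0.5)}
print(mid_edges)  # [('a', 'b')]
```

Because vertices move rather than pixels blend, the interpolated stroke stays sharp at any t, which is the property the paper contrasts with the blurring of raster-warping interpolators.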
Related papers
- Thin-Plate Spline-based Interpolation for Animation Line Inbetweening [54.69811179222127]
Chamfer Distance (CD) is commonly adopted for evaluating inbetweening performance.
We propose a simple yet effective method for animation line inbetweening that adopts thin-plate spline-based transformation.
Our method outperforms existing approaches by delivering high-quality results with enhanced fluidity.
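The Chamfer Distance mentioned above can be sketched in a few lines: it averages, in both directions, the distance from each point of one set to its nearest neighbour in the other. This is a generic pure-Python illustration of the metric, not the evaluation code of any of the listed papers.

```python
import math

def chamfer_distance(pts_a, pts_b):
    """Symmetric Chamfer Distance between two 2D point sets."""
    def one_way(src, dst):
        # mean nearest-neighbour distance from src to dst
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return one_way(pts_a, pts_b) + one_way(pts_b, pts_a)

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0)]
print(chamfer_distance(a, b))  # 2.0: each point is distance 1 from its match
```

For line inbetweening, the point sets would be the black pixels (or sampled stroke points) of the predicted and ground-truth frames; lower is better, and identical drawings score 0.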
arXiv Detail & Related papers (2024-08-17T08:05:31Z)
- Learning Inclusion Matching for Animation Paint Bucket Colorization [76.4507878427755]
We introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments.
Our method features a two-stage pipeline that integrates a coarse color warping module with an inclusion matching module.
To facilitate the training of our network, we also develop a unique dataset, referred to as PaintBucket-Character.
arXiv Detail & Related papers (2024-03-27T08:32:48Z)
- Bridging the Gap: Sketch-Aware Interpolation Network for High-Quality Animation Sketch Inbetweening [58.09847349781176]
We propose a novel deep learning method, the Sketch-Aware Interpolation Network (SAIN).
This approach incorporates multi-level guidance that formulates region-level correspondence, stroke-level correspondence and pixel-level dynamics.
A multi-stream U-Transformer is then devised to characterize sketch inbetweening patterns using these multi-level guides through the integration of self / cross-attention mechanisms.
arXiv Detail & Related papers (2023-08-25T09:51:03Z)
- GlueStick: Robust Image Matching by Sticking Points and Lines Together [64.18659491529382]
This paper introduces a new matching paradigm, where points, lines and descriptors are unified into a single wireframe structure.
We show that our unified strategy outperforms approaches that match points and lines independently.
arXiv Detail & Related papers (2023-04-04T17:58:14Z)
- Linking Sketch Patches by Learning Synonymous Proximity for Graphic Sketch Representation [8.19063619210761]
We propose an order-invariant, semantics-aware method for graphic sketch representations.
The cropped sketch patches are linked according to their global semantics or local geometric shapes, namely the synonymous proximity.
We show that our method significantly improves the performance on both controllable sketch synthesis and sketch healing.
arXiv Detail & Related papers (2022-11-30T09:28:15Z)
- Learning to generate line drawings that convey geometry and semantics [22.932131011984513]
This paper presents an unpaired method for creating line drawings from photographs.
We observe that line drawings are encodings of scene information and seek to convey 3D shape and semantic meaning.
We introduce a geometry loss which predicts depth information from the image features of a line drawing, and a semantic loss which matches the CLIP features of a line drawing with its corresponding photograph.
arXiv Detail & Related papers (2022-03-23T19:27:41Z)
- Deep Animation Video Interpolation in the Wild [115.24454577119432]
In this work, we formally define and study the animation video interpolation problem for the first time.
We propose an effective framework, AnimeInterp, with two dedicated modules in a coarse-to-fine manner.
Notably, AnimeInterp shows favorable perceptual quality and robustness for animation scenarios in the wild.
arXiv Detail & Related papers (2021-04-06T13:26:49Z)
- Stylized Neural Painting [0.0]
This paper proposes an image-to-painting translation method that generates vivid and realistic painting artworks with controllable styles.
Experiments show that the paintings generated by our method have a high degree of fidelity in both global appearance and local textures.
arXiv Detail & Related papers (2020-11-16T17:24:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.