Bridging the Gap: Fine-to-Coarse Sketch Interpolation Network for
High-Quality Animation Sketch Inbetweening
- URL: http://arxiv.org/abs/2308.13273v1
- Date: Fri, 25 Aug 2023 09:51:03 GMT
- Title: Bridging the Gap: Fine-to-Coarse Sketch Interpolation Network for
High-Quality Animation Sketch Inbetweening
- Authors: Jiaming Shen, Kun Hu, Wei Bao, Chang Wen Chen, Zhiyong Wang
- Abstract summary: Fine-to-Coarse Sketch Interpolation Network (FC-SIN) is proposed to overcome key issues in sketch inbetweening.
FC-SIN incorporates multi-level guidance that formulates region-level correspondence, sketch-level correspondence and pixel-level dynamics.
We constructed a large-scale dataset - STD-12K, comprising 30 sketch animation series in diverse artistic styles.
- Score: 62.33071223229861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The 2D animation workflow is typically initiated with the creation of
keyframes using sketch-based drawing. Subsequent inbetweens (i.e., intermediate
sketch frames) are crafted through manual interpolation for smooth animations,
which is a labor-intensive process. Thus, the prospect of automatic animation
sketch interpolation has become highly appealing. However, existing video
interpolation methods are generally hindered by two key issues for sketch
inbetweening: 1) limited texture and colour details in sketches, and 2)
exaggerated alterations between two sketch keyframes. To overcome these issues,
we propose a novel deep learning method, namely Fine-to-Coarse Sketch
Interpolation Network (FC-SIN). This approach incorporates multi-level guidance
that formulates region-level correspondence, sketch-level correspondence and
pixel-level dynamics. A multi-stream U-Transformer is then devised to
characterize sketch inbetweening patterns using these multi-level guides
through the integration of both self-attention and cross-attention mechanisms.
Additionally, to facilitate future research on animation sketch inbetweening,
we constructed a large-scale dataset - STD-12K, comprising 30 sketch animation
series in diverse artistic styles. Comprehensive experiments on this dataset
convincingly show that our proposed FC-SIN surpasses the state-of-the-art
interpolation methods. Our code and dataset will be publicly available.
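To make the architectural idea concrete, below is a minimal PyTorch sketch of a multi-stream block that combines self-attention within each guide stream with cross-attention to shared keyframe sketch features. This is an illustration under stated assumptions: the class names (GuideStreamBlock, MultiStreamFusion), the mean fusion, and the tensor shapes are hypothetical, not FC-SIN's actual U-Transformer design.

```python
# Hypothetical sketch of a multi-stream attention stage in the spirit of FC-SIN.
# Three guide streams (region-, sketch-, pixel-level) each run self-attention,
# then cross-attend to shared keyframe sketch features.
import torch
import torch.nn as nn

class GuideStreamBlock(nn.Module):
    """Self-attention within one guide stream, then cross-attention to the
    shared sketch features (a generic transformer block)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, guide: torch.Tensor, sketch: torch.Tensor) -> torch.Tensor:
        guide = self.norm1(guide + self.self_attn(guide, guide, guide)[0])
        guide = self.norm2(guide + self.cross_attn(guide, sketch, sketch)[0])
        return self.norm3(guide + self.ffn(guide))

class MultiStreamFusion(nn.Module):
    """Three parallel guide streams whose outputs are fused; mean fusion
    here is an illustrative assumption."""
    def __init__(self, dim: int):
        super().__init__()
        self.streams = nn.ModuleList(GuideStreamBlock(dim) for _ in range(3))

    def forward(self, guides, sketch):
        return torch.stack([blk(g, sketch)
                            for blk, g in zip(self.streams, guides)]).mean(dim=0)

# Toy usage: three guide token sets and shared keyframe sketch tokens, width 128.
fusion = MultiStreamFusion(128)
guides = [torch.randn(2, 64, 128) for _ in range(3)]  # region / sketch / pixel
sketch = torch.randn(2, 256, 128)                     # encoded keyframe pair
out = fusion(guides, sketch)                          # -> (2, 64, 128)
```

In the full model, such blocks would presumably be stacked inside a U-shaped encoder-decoder; a single fusion stage is shown here for brevity.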
Related papers
- SketchTriplet: Self-Supervised Scenarized Sketch-Text-Image Triplet Generation [6.39528707908268]
There continues to be a lack of large-scale paired datasets for scene sketches.
We propose a self-supervised method for scene sketch generation that does not rely on any existing scene sketch.
We contribute a large-scale dataset centered around scene sketches, comprising highly semantically consistent "text-sketch-image" triplets.
arXiv Detail & Related papers (2024-05-29T06:43:49Z)
- Sketch Video Synthesis [52.134906766625164]
We propose a novel framework for sketching videos represented by frame-wise Bézier curves (a minimal evaluation sketch of this representation appears after this list).
Our method unlocks applications in sketch-based video editing and video doodling, enabled through video composition.
arXiv Detail & Related papers (2023-11-26T14:14:04Z)
- Deep Geometrized Cartoon Line Inbetweening [98.35956631655357]
Inbetweening involves generating intermediate frames between two black-and-white line drawings.
Existing frame interpolation methods that rely on matching and warping whole raster images are unsuitable for line inbetweening.
We propose AnimeInbet, which geometrizes raster line drawings into graphs of endpoints and reframes the inbetweening task as a graph fusion problem.
Our method can effectively capture the sparsity and unique structure of line drawings while preserving the details during inbetweening.
arXiv Detail & Related papers (2023-09-28T17:50:05Z)
- I Know What You Draw: Learning Grasp Detection Conditioned on a Few Freehand Sketches [74.63313641583602]
We propose a method to generate a potential grasp configuration relevant to the sketch-depicted objects.
Our model is trained and tested in an end-to-end manner, making it easy to implement in real-world applications.
arXiv Detail & Related papers (2022-05-09T04:23:36Z)
- FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context [112.07988211268612]
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO.
Our dataset comprises 10,000 freehand scene vector sketches with per-point space-time information, drawn by 100 non-expert individuals.
We study for the first time the problem of fine-grained image retrieval from freehand scene sketches and sketch captions.
arXiv Detail & Related papers (2022-03-04T03:00:51Z)
- Improving the Perceptual Quality of 2D Animation Interpolation [37.04208600867858]
Traditional 2D animation is labor-intensive, often requiring animators to draw twelve illustrations per second of movement.
Lower framerates result in larger displacements and occlusions, and discrete perceptual elements (e.g., lines and solid-color regions) pose difficulties for texture-oriented convolutional networks.
Previous work attempted to address these issues but relied on unscalable methods and focused on pixel-perfect performance.
We build a scalable system more appropriately centered on perceptual quality for this artistic domain.
arXiv Detail & Related papers (2021-11-24T20:51:29Z)
- Deep Animation Video Interpolation in the Wild [115.24454577119432]
In this work, we formally define and study the animation video interpolation problem for the first time.
We propose an effective framework, AnimeInterp, with two dedicated modules in a coarse-to-fine manner.
Notably, AnimeInterp shows favorable perceptual quality and robustness for animation scenarios in the wild.
arXiv Detail & Related papers (2021-04-06T13:26:49Z)
- Deep Sketch-guided Cartoon Video Inbetweening [24.00033622396297]
We propose a framework to produce cartoon videos by fetching the color information from two inputs while following the animated motion guided by a user sketch.
By explicitly considering the correspondence between frames and the sketch, we can achieve higher quality results than other image synthesis methods.
arXiv Detail & Related papers (2020-08-10T14:22:04Z)
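As referenced in the Sketch Video Synthesis entry above, a frame-wise Bézier representation can be illustrated with a minimal sketch: each frame's stroke is a cubic Bézier curve sampled from four control points. The helper `cubic_bezier`, the hand-picked control points, and the linear blend between keyframes are illustrative assumptions, not that paper's actual optimization procedure.

```python
# Minimal sketch: one cubic Bézier stroke per frame, with control points
# blended between two hypothetical keyframes.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample n points on a cubic Bézier curve (Bernstein form)."""
    t = np.linspace(0.0, 1.0, n)[:, None]              # (n, 1) parameter values
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hand-picked control points for the two keyframes (illustrative only).
key_a = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
key_b = np.array([[0.0, 1.0], [1.0, 3.0], [3.0, 3.0], [4.0, 1.0]])
for alpha in np.linspace(0.0, 1.0, 5):                 # 5 frames, keys included
    ctrl = (1.0 - alpha) * key_a + alpha * key_b       # per-frame control points
    stroke = cubic_bezier(*ctrl)                       # (50, 2) polyline to draw
```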