Sketch Video Synthesis
- URL: http://arxiv.org/abs/2311.15306v1
- Date: Sun, 26 Nov 2023 14:14:04 GMT
- Title: Sketch Video Synthesis
- Authors: Yudian Zheng, Xiaodong Cun, Menghan Xia, Chi-Man Pun
- Abstract summary: We propose a novel framework for sketching videos represented by frame-wise Bézier curves.
Our method unlocks applications in sketch-based video editing and video doodling, enabled through video composition.
- Score: 52.134906766625164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding semantic intricacies and high-level concepts is essential in
image sketch generation, and this challenge becomes even more formidable when
applied to the domain of videos. To address this, we propose a novel
optimization-based framework for sketching videos represented by the frame-wise
Bézier curve. In detail, we first propose a cross-frame stroke initialization
approach to warm up the location and the width of each curve. Then, we optimize
the locations of these curves by utilizing a semantic loss based on CLIP
features and a newly designed consistency loss using the self-decomposed 2D
atlas network. Built upon these design elements, the resulting sketch video
showcases impressive visual abstraction and temporal coherence. Furthermore, by
transforming a video into SVG lines through the sketching process, our method
unlocks applications in sketch-based video editing and video doodling, enabled
through video composition, as exemplified in the teaser.
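The abstract describes strokes as frame-wise cubic Bézier curves that are later exported as SVG lines. As a minimal illustrative sketch (not the authors' code — the control points and the `stroke_to_svg_path` helper are hypothetical), the snippet below evaluates one cubic Bézier stroke and serializes it as an SVG path; in the actual method, the control-point locations would be the variables optimized under the CLIP semantic loss and the atlas-based consistency loss.

```python
# Minimal sketch of the frame-wise cubic Bezier stroke representation.
# Control points and helper names are illustrative assumptions, not the
# paper's implementation.

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
    y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def stroke_to_svg_path(p0, p1, p2, p3, width=2.0):
    """Serialize one stroke as an SVG cubic-Bezier <path> element."""
    return (f'<path d="M {p0[0]} {p0[1]} C {p1[0]} {p1[1]}, '
            f'{p2[0]} {p2[1]}, {p3[0]} {p3[1]}" '
            f'stroke="black" stroke-width="{width}" fill="none"/>')

# One stroke: four control points (x, y); per the paper, both locations
# and widths would be initialized across frames and then optimized.
stroke = ((0, 0), (10, 40), (30, 40), (40, 0))
midpoint = cubic_bezier(*stroke, 0.5)  # -> (20.0, 30.0)
svg_line = stroke_to_svg_path(*stroke)
```

Exporting strokes this way is what makes the result a resolution-independent SVG video that can be composed back onto footage for doodling or editing.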
Related papers
- Sketch3D: Style-Consistent Guidance for Sketch-to-3D Generation [55.73399465968594]
This paper proposes a novel generation paradigm Sketch3D to generate realistic 3D assets with shape aligned with the input sketch and color matching the textual description.
Three strategies are designed to optimize 3D Gaussians, i.e., structural optimization via a distribution transfer mechanism, color optimization with a straightforward MSE loss and sketch similarity optimization with a CLIP-based geometric similarity loss.
arXiv Detail & Related papers (2024-04-02T11:03:24Z) - CustomSketching: Sketch Concept Extraction for Sketch-based Image Synthesis and Editing [21.12815542848095]
Personalization techniques for large text-to-image (T2I) models allow users to incorporate new concepts from reference images.
Existing methods primarily rely on textual descriptions, leading to limited control over customized images.
We identify sketches as an intuitive and versatile representation that can facilitate such control.
arXiv Detail & Related papers (2024-02-27T15:52:59Z) - Doodle Your 3D: From Abstract Freehand Sketches to Precise 3D Shapes [118.406721663244]
We introduce a novel part-level modelling and alignment framework that facilitates abstraction modelling and cross-modal correspondence.
Our approach seamlessly extends to sketch modelling by establishing correspondence between CLIPasso edgemaps and projected 3D part regions.
arXiv Detail & Related papers (2023-12-07T05:04:33Z) - Bridging the Gap: Sketch-Aware Interpolation Network for High-Quality Animation Sketch Inbetweening [58.09847349781176]
We propose a novel deep learning method, the Sketch-Aware Interpolation Network (SAIN).
This approach incorporates multi-level guidance that formulates region-level correspondence, stroke-level correspondence and pixel-level dynamics.
A multi-stream U-Transformer is then devised to characterize sketch inbetweening patterns using these multi-level guides through the integration of self / cross-attention mechanisms.
arXiv Detail & Related papers (2023-08-25T09:51:03Z) - SENS: Part-Aware Sketch-based Implicit Neural Shape Modeling [124.3266213819203]
We present SENS, a novel method for generating and editing 3D models from hand-drawn sketches.
SENS analyzes the sketch and encodes its parts into ViT patch encodings.
SENS supports refinement via part reconstruction, allowing for nuanced adjustments and artifact removal.
arXiv Detail & Related papers (2023-06-09T17:50:53Z) - FS-COCO: Towards Understanding of Freehand Sketches of Common Objects in Context [112.07988211268612]
We advance sketch research to scenes with the first dataset of freehand scene sketches, FS-COCO.
Our dataset comprises 10,000 freehand scene vector sketches with per-point space-time information, drawn by 100 non-expert individuals.
We study for the first time the problem of fine-grained image retrieval from freehand scene sketches and sketch captions.
arXiv Detail & Related papers (2022-03-04T03:00:51Z) - Sketch Me A Video [32.38205496481408]
We introduce a new video synthesis task: creating a realistic portrait video from only two rough, badly-drawn sketches as input.
A two-stage Sketch-to-Video model is proposed, which consists of two key novelties.
arXiv Detail & Related papers (2021-10-10T05:40:11Z) - Deep Sketch-guided Cartoon Video Inbetweening [24.00033622396297]
We propose a framework to produce cartoon videos by fetching the color information from two inputs while following the animated motion guided by a user sketch.
By explicitly considering the correspondence between frames and the sketch, we can achieve higher quality results than other image synthesis methods.
arXiv Detail & Related papers (2020-08-10T14:22:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.