AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies
- URL: http://arxiv.org/abs/2211.05709v1
- Date: Thu, 10 Nov 2022 17:26:21 GMT
- Title: AnimeRun: 2D Animation Visual Correspondence from Open Source 3D Movies
- Authors: Li Siyao, Yuhang Li, Bo Li, Chao Dong, Ziwei Liu, Chen Change Loy
- Abstract summary: Existing datasets for two-dimensional (2D) cartoons suffer from simple frame composition and monotonic movements.
We present a new 2D animation visual correspondence dataset, AnimeRun, by converting open source 3D movies to full scenes in 2D style.
Our analyses show that the proposed dataset not only resembles real anime more in image composition, but also possesses richer and more complex motion patterns compared to existing datasets.
- Score: 98.65469430034246
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Existing correspondence datasets for two-dimensional (2D) cartoons suffer from
simple frame composition and monotonic movements, making them insufficient to
simulate real animations. In this work, we present a new 2D animation visual
correspondence dataset, AnimeRun, by converting open source three-dimensional
(3D) movies to full scenes in 2D style, including simultaneous moving
background and interactions of multiple subjects. Our analyses show that the
proposed dataset not only resembles real anime more in image composition, but
also possesses richer and more complex motion patterns compared to existing
datasets. With this dataset, we establish a comprehensive benchmark by
evaluating several existing optical flow and segment matching methods, and
analyze shortcomings of these methods on animation data. Data, code and other
supplementary materials are available at
https://lisiyao21.github.io/projects/AnimeRun.
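The abstract mentions benchmarking existing optical flow methods on the dataset. The paper's exact evaluation protocol is not reproduced here, but a standard metric for such comparisons is the average endpoint error (EPE) between predicted and ground-truth per-pixel flow. The sketch below is a generic, hypothetical illustration of that metric, not the AnimeRun evaluation code; the function name and array layout are assumptions.

```python
import numpy as np

def endpoint_error(flow_pred: np.ndarray, flow_gt: np.ndarray) -> float:
    """Average endpoint error (EPE) between two dense flow fields.

    Both arrays are shaped (H, W, 2), holding per-pixel (dx, dy)
    displacements. EPE is the Euclidean distance between predicted
    and ground-truth displacement, averaged over all pixels.
    """
    diff = flow_pred - flow_gt
    per_pixel = np.sqrt((diff ** 2).sum(axis=-1))  # (H, W) distances
    return float(per_pixel.mean())

# Example: a uniform (3, 4) prediction against zero ground truth
# yields an EPE of exactly 5.0 at every pixel.
gt = np.zeros((8, 8, 2))
pred = np.zeros((8, 8, 2))
pred[..., 0] = 3.0
pred[..., 1] = 4.0
print(endpoint_error(pred, gt))  # 5.0
```

Benchmarks in this area typically report EPE both over all pixels and restricted to regions of interest (e.g., occluded or large-motion areas); the global average above is the simplest variant.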
Related papers
- MMHead: Towards Fine-grained Multi-modal 3D Facial Animation [68.04052669266174]
We construct a large-scale multi-modal 3D facial animation dataset, MMHead.
MMHead consists of 49 hours of 3D facial motion sequences, speech audios, and rich hierarchical text annotations.
Based on the MMHead dataset, we establish benchmarks for two new tasks: text-induced 3D talking head animation and text-to-3D facial motion generation.
arXiv Detail & Related papers (2024-10-10T09:37:01Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- 3D Cinemagraphy from a Single Image [73.09720823592092]
We present 3D Cinemagraphy, a new technique that marries 2D image animation with 3D photography.
Given a single still image as input, our goal is to generate a video that contains both visual content animation and camera motion.
arXiv Detail & Related papers (2023-03-10T06:08:23Z)
- Unsupervised Volumetric Animation [54.52012366520807]
We propose a novel approach for unsupervised 3D animation of non-rigid deformable objects.
Our method learns the 3D structure and dynamics of objects solely from single-view RGB videos.
We show our model can obtain animatable 3D objects from a single volume or few images.
arXiv Detail & Related papers (2023-01-26T18:58:54Z)
- SketchBetween: Video-to-Video Synthesis for Sprite Animation via Sketches [0.9645196221785693]
2D animation is a common factor in game development, used for characters, effects and background art.
Automated animation approaches exist, but are designed without animators in mind.
We propose a problem formulation that adheres more closely to the standard workflow of animation.
arXiv Detail & Related papers (2022-09-01T02:43:19Z)
- AnimeCeleb: Large-Scale Animation CelebFaces Dataset via Controllable 3D Synthetic Models [19.6347170450874]
We present a large-scale animation celebfaces dataset (AnimeCeleb) via controllable synthetic animation models.
To facilitate the data generation process, we build a semi-automatic pipeline based on an open 3D software.
This leads to constructing a large-scale animation face dataset that includes multi-pose and multi-style animation faces with rich annotations.
arXiv Detail & Related papers (2021-11-15T10:00:06Z)
- Deep Animation Video Interpolation in the Wild [115.24454577119432]
In this work, we formally define and study the animation video interpolation problem for the first time.
We propose an effective framework, AnimeInterp, with two dedicated modules in a coarse-to-fine manner.
Notably, AnimeInterp shows favorable perceptual quality and robustness for animation scenarios in the wild.
arXiv Detail & Related papers (2021-04-06T13:26:49Z)
- Going beyond Free Viewpoint: Creating Animatable Volumetric Video of Human Performances [7.7824496657259665]
We present an end-to-end pipeline for the creation of high-quality animatable volumetric video content of human performances.
Semantic enrichment and geometric animation ability are achieved by establishing temporal consistency in the 3D data.
For pose editing, we exploit the captured data as much as possible and kinematically deform the captured frames to fit a desired pose.
arXiv Detail & Related papers (2020-09-02T09:46:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.