Blowing in the Wind: CycleNet for Human Cinemagraphs from Still Images
- URL: http://arxiv.org/abs/2303.08639v1
- Date: Wed, 15 Mar 2023 14:09:35 GMT
- Title: Blowing in the Wind: CycleNet for Human Cinemagraphs from Still Images
- Authors: Hugo Bertiche, Niloy J. Mitra, Kuldeep Kulkarni, Chun-Hao Paul Huang,
Tuanfeng Y. Wang, Meysam Madadi, Sergio Escalera and Duygu Ceylan
- Abstract summary: We present an automatic method for generating human cinemagraphs from single RGB images.
At the core of our method is a novel cyclic neural network that produces looping cinemagraphs for the target loop duration.
We evaluate our method on both synthetic and real data and demonstrate that it is possible to create compelling and plausible cinemagraphs from single RGB images.
- Score: 58.67263739579952
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cinemagraphs are short looping videos created by adding subtle motions to a
static image. This kind of media is popular and engaging. However, automatic
generation of cinemagraphs is an underexplored area and current solutions
require tedious low-level manual authoring by artists. In this paper, we
present an automatic method for generating human cinemagraphs from single RGB
images. We investigate the problem in the context of dressed humans in the
wind. At the core of our method is a novel cyclic neural network that
produces looping cinemagraphs for the target loop duration. To circumvent the
problem of collecting real data, we demonstrate that it is possible, by working
in the image normal space, to learn garment motion dynamics on synthetic data
and generalize to real data. We evaluate our method on both synthetic and real
data and demonstrate that it is possible to create compelling and plausible
cinemagraphs from single RGB images.
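The abstract gives no implementation details, but the minimal Python sketch below illustrates one way a generator can be made exactly cyclic for a target loop duration: condition a deterministic frame generator on a periodic encoding of the loop phase, so phase 0 and phase T map to identical inputs and therefore identical frames. The encoding scheme and the stand-in generator make_frame are illustrative assumptions, not the authors' architecture.

    import numpy as np

    def phase_encoding(t, loop_len, n_freqs=4):
        # Map frame index t onto a circle so that t = 0 and
        # t = loop_len yield the same encoding.
        phase = 2.0 * np.pi * (t % loop_len) / loop_len
        feats = []
        for k in range(1, n_freqs + 1):
            feats += [np.sin(k * phase), np.cos(k * phase)]
        return np.array(feats)

    def make_frame(image, enc):
        # Hypothetical stand-in for a learned generator: any deterministic
        # function of (image, enc) inherits the encoding's periodicity,
        # so the final frame matches the first and the video loops.
        return image * (1.0 + 0.05 * enc.sum())

    image = np.random.rand(64, 64, 3)
    T = 60  # target loop duration in frames
    frames = [make_frame(image, phase_encoding(t, T)) for t in range(T)]
    assert np.allclose(frames[0], make_frame(image, phase_encoding(T, T)))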
Related papers
- LoopGaussian: Creating 3D Cinemagraph with Multi-view Images via Eulerian Motion Field [13.815932949774858]
A cinemagraph is a form of visual media that combines still photography with subtle motion to create a captivating experience.
We propose LoopGaussian to elevate cinemagraphs from 2D image space to 3D space using 3D Gaussian modeling.
Experimental results validate the effectiveness of our approach, demonstrating high-quality and visually appealing scene generation.
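As a rough illustration of an Eulerian motion field applied in 3D, the sketch below advects a set of 3D Gaussian centers through a static velocity grid: the field stays fixed in space while the points drift through it. The grid resolution, lookup scheme, and step count are assumptions, not details from the paper.

    import numpy as np

    def sample_velocity(field, pts, lo=-1.0, hi=1.0):
        # Nearest-neighbour lookup into a static (Eulerian) velocity grid;
        # field has shape (R, R, R, 3) and covers the cube [lo, hi]^3.
        R = field.shape[0]
        idx = np.clip(((pts - lo) / (hi - lo) * (R - 1)).astype(int), 0, R - 1)
        return field[idx[:, 0], idx[:, 1], idx[:, 2]]

    rng = np.random.default_rng(0)
    centers = rng.uniform(-1, 1, size=(1000, 3))         # 3D Gaussian centers
    field = 0.01 * rng.standard_normal((16, 16, 16, 3))  # fixed velocity grid

    for _ in range(60):  # advect centers through one 60-frame loop
        centers = centers + sample_velocity(field, centers)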
arXiv Detail & Related papers (2024-04-13T11:07:53Z) - Online Detection of AI-Generated Images [17.30253784649635]
We study generalization in this setting, training on N models and testing on the next (N+k) models.
We extend this approach to pixel prediction, demonstrating strong performance using automatically-generated inpainted data.
In addition, for settings where commercial models are not publicly available for automatic data generation, we evaluate if pixel detectors can be trained solely on whole synthetic images.
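The N vs. (N+k) protocol can be summarized in a few lines; the sketch below assumes a hypothetical train_detector callable and labeled data grouped by generator, and is only meant to make the evaluation split concrete.

    def evaluate_generalization(data_by_generator, train_detector, N, k):
        # data_by_generator: generator name -> list of (image, is_fake) pairs.
        # Train on the first N generators, then test on the next k
        # generators the detector has never seen.
        gens = sorted(data_by_generator)  # in practice, ordered by release date
        train = [ex for g in gens[:N] for ex in data_by_generator[g]]
        test = [ex for g in gens[N:N + k] for ex in data_by_generator[g]]
        detector = train_detector(train)              # hypothetical trainer
        hits = [detector(img) == y for img, y in test]
        return sum(hits) / len(hits)                  # accuracy on unseen models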
arXiv Detail & Related papers (2023-10-23T17:53:14Z) - Generative Image Dynamics [80.70729090482575]
We present an approach to modeling an image-space prior on scene motion.
Our prior is learned from a collection of motion trajectories extracted from real video sequences.
arXiv Detail & Related papers (2023-09-14T17:54:01Z) - Text-Guided Synthesis of Eulerian Cinemagraphs [81.20353774053768]
We introduce Text2Cinemagraph, a fully automated method for creating cinemagraphs from text descriptions.
We focus on cinemagraphs of fluid elements, such as flowing rivers and drifting clouds, which exhibit continuous motion and repetitive textures.
arXiv Detail & Related papers (2023-07-06T17:59:31Z) - Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that combines image-based and neural 3D representations to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z) - Endless Loops: Detecting and Animating Periodic Patterns in Still Images [6.589980988982727]
We present an algorithm for producing a seamless animated loop from a single image.
The algorithm detects periodic structures, such as the windows of a building or the steps of a staircase, and generates a non-trivial displacement vector field.
This displacement field is used, together with suitable temporal and spatial smoothing, to warp the image and produce the frames of a continuous animation loop.
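A minimal version of displacement-field looping can be sketched as follows: each frame warps the image by a phase-scaled displacement, and a crossfade with the warp one full period behind hides the seam where the loop restarts. This is a simplified stand-in for the paper's pipeline; the detection of periodic structures and the temporal and spatial smoothing are omitted.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(img, disp, alpha):
        # Backward-warp a grayscale image by alpha times the displacement
        # field disp (shape (2, H, W), in pixels).
        H, W = img.shape
        ys, xs = np.mgrid[0:H, 0:W].astype(float)
        coords = np.stack([ys - alpha * disp[0], xs - alpha * disp[1]])
        return map_coordinates(img, coords, order=1, mode='reflect')

    def loop_frames(img, disp, T):
        frames = []
        for t in range(T):
            a = t / T
            f0 = warp(img, disp, a)        # warp at the current phase
            f1 = warp(img, disp, a - 1.0)  # same phase, one period behind
            frames.append((1 - a) * f0 + a * f1)  # crossfade hides the seam
        return frames

At t = 0 the blend equals the original image, and as t approaches T it converges back to it, so the animation loops without a visible cut.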
arXiv Detail & Related papers (2021-05-19T19:39:58Z) - Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We demonstrate a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
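The forward/backward flow-and-blend idea can be made concrete with a small sketch: pixel positions are Euler-integrated through a static motion field, and frame t mixes a warp integrated forward from the loop start with one integrated backward from the loop end. The integration scheme and blend weights below are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def integrate(field, pts, steps):
        # Euler-integrate 2D points through a static motion field of
        # shape (H, W, 2): the field is fixed in space (Eulerian) while
        # pixels drift through it, so displacement accumulates recursively.
        H, W, _ = field.shape
        for _ in range(steps):
            iy = np.clip(pts[:, 0].astype(int), 0, H - 1)
            ix = np.clip(pts[:, 1].astype(int), 0, W - 1)
            pts = pts + field[iy, ix]
        return pts

    T = 60
    # Frame t blends the forward warp (t steps from the start) with the
    # backward warp (T - t steps from the end); the weights meet at the
    # loop boundary so the video closes seamlessly.
    blend_weights = [((T - t) / T, t / T) for t in range(T)]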
arXiv Detail & Related papers (2020-11-30T18:59:06Z) - Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is feasible.
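The reblur idea admits a compact sketch: a linear motion blur is roughly the temporal average of the sharp frames exposed during the shutter, so re-averaging the predicted sharp frames should reproduce the blurry input, giving a training signal without sharp ground truth. The loss below is an illustrative numpy version under that assumption; in practice it would be written in an autodiff framework, with frame predictions coming from a hypothetical deblurring network.

    import numpy as np

    def reblur_loss(sharp_frames, blurry):
        # sharp_frames: (K, H, W) predicted sharp frames across the exposure.
        # Averaging them re-synthesizes the blur; the L2 gap to the observed
        # blurry image is the self-supervised training loss.
        reblurred = np.mean(sharp_frames, axis=0)
        return np.mean((reblurred - blurry) ** 2)

    blurry = np.random.rand(64, 64)
    sharp_frames = np.stack([blurry] * 5)     # placeholder network output
    loss = reblur_loss(sharp_frames, blurry)  # 0.0 for this trivial case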
arXiv Detail & Related papers (2020-02-10T20:15:21Z)