3D Cinemagraphy from a Single Image
- URL: http://arxiv.org/abs/2303.05724v1
- Date: Fri, 10 Mar 2023 06:08:23 GMT
- Title: 3D Cinemagraphy from a Single Image
- Authors: Xingyi Li, Zhiguo Cao, Huiqiang Sun, Jianming Zhang, Ke Xian, Guosheng Lin
- Abstract summary: We present 3D Cinemagraphy, a new technique that marries 2D image animation with 3D photography.
Given a single still image as input, our goal is to generate a video that contains both visual content animation and camera motion.
- Score: 73.09720823592092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present 3D Cinemagraphy, a new technique that marries 2D image animation
with 3D photography. Given a single still image as input, our goal is to
generate a video that contains both visual content animation and camera motion.
We empirically find that naively combining existing 2D image animation and 3D
photography methods leads to obvious artifacts or inconsistent animation. Our
key insight is that representing and animating the scene in 3D space offers a
natural solution to this task. To this end, we first convert the input image
into feature-based layered depth images using predicted depth values, followed
by unprojecting them to a feature point cloud. To animate the scene, we perform
motion estimation and lift the 2D motion into the 3D scene flow. Finally, to
resolve the problem of hole emergence as points move forward, we propose to
bidirectionally displace the point cloud as per the scene flow and synthesize
novel views by separately projecting them into target image planes and blending
the results. Extensive experiments demonstrate the effectiveness of our method.
A user study is also conducted to validate the compelling rendering results of
our method.
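To make the geometry described in the abstract concrete, here is a minimal NumPy sketch of the unprojection and motion-lifting steps under a standard pinhole-camera model. The function names, the intrinsics matrix K, and the simplification that depth stays constant along the small 2D motion are illustrative assumptions, not the authors' implementation (which also attaches learned features to each point).

```python
import numpy as np

def unproject(depth, K):
    """Lift each pixel (u, v) with depth d to the 3D point d * K^-1 [u, v, 1]^T
    (pinhole model; assumes K has last row [0, 0, 1] so depth is the z value)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T        # each row is K^-1 [u, v, 1]^T
    return rays * depth.reshape(-1, 1)     # (H*W, 3) point cloud

def lift_to_scene_flow(flow_2d, depth, K):
    """Lift a 2D motion field (pixels/frame) to 3D scene flow by unprojecting
    each pixel and its flow-displaced position, assuming the small motion
    leaves the pixel's depth unchanged."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    p0 = unproject(depth, K)
    shifted = np.stack([u + flow_2d[..., 0], v + flow_2d[..., 1],
                        np.ones((h, w))], axis=-1).reshape(-1, 3)
    p1 = (shifted @ np.linalg.inv(K).T) * depth.reshape(-1, 1)
    return p1 - p0                         # (H*W, 3) scene flow per point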
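The bidirectional displacement can be sketched in the same spirit: the cloud is pushed forward from the start of the loop and backward from its end, and the two renderings are blended with time-dependent weights so holes opened by one direction are covered by the other. The linear weighting below is an assumption in the style of standard video-looping techniques, not necessarily the paper's exact scheme.

```python
def bidirectional_displace(points, scene_flow, t, n_frames):
    """Return two displaced copies of the cloud plus blend weights for frame t:
    one moved forward from frame 0, one moved backward from frame n_frames.
    Projecting each into the target view and alpha-blending hides the holes
    that appear behind points moving in a single direction."""
    alpha = t / n_frames
    forward = points + t * scene_flow                # displaced from the loop start
    backward = points - (n_frames - t) * scene_flow  # displaced back from the loop end
    return (forward, 1.0 - alpha), (backward, alpha)
```

At t = 0 the forward copy carries all the weight and at t = n_frames the backward copy does, so the rendered sequence closes into a seamless loop.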
Related papers
- Sketch2Scene: Automatic Generation of Interactive 3D Game Scenes from User's Casual Sketches [50.51643519253066]
3D Content Generation is at the heart of many computer graphics applications, including video gaming, film-making, virtual and augmented reality, etc.
This paper proposes a novel deep-learning based approach for automatically generating interactive and playable 3D game scenes.
arXiv Detail & Related papers (2024-08-08T16:27:37Z)
- LoopGaussian: Creating 3D Cinemagraph with Multi-view Images via Eulerian Motion Field [13.815932949774858]
Cinemagraph is a form of visual media that combines elements of still photography and subtle motion to create a captivating experience.
We propose LoopGaussian to elevate cinemagraph from 2D image space to 3D space using 3D Gaussian modeling.
Experiment results validate the effectiveness of our approach, demonstrating high-quality and visually appealing scene generation.
arXiv Detail & Related papers (2024-04-13T11:07:53Z)
- Synthesizing Moving People with 3D Control [88.68284137105654]
We present a diffusion model-based framework for animating people from a single image for a given target 3D motion sequence.
First, we learn an in-filling diffusion model to hallucinate unseen parts of a person given a single image.
Second, we develop a diffusion-based rendering pipeline, which is controlled by 3D human poses.
arXiv Detail & Related papers (2024-01-19T18:59:11Z)
- Make-It-4D: Synthesizing a Consistent Long-Term Dynamic Scene Video from a Single Image [59.18564636990079]
We study the problem of synthesizing a long-term dynamic video from only a single image.
Existing methods either hallucinate inconsistent perpetual views or struggle with long camera trajectories.
We present Make-It-4D, a novel method that can generate a consistent long-term dynamic video from a single image.
arXiv Detail & Related papers (2023-08-20T12:53:50Z)
- Unsupervised Volumetric Animation [54.52012366520807]
We propose a novel approach for unsupervised 3D animation of non-rigid deformable objects.
Our method learns the 3D structure and dynamics of objects solely from single-view RGB videos.
We show our model can obtain animatable 3D objects from a single volume or few images.
arXiv Detail & Related papers (2023-01-26T18:58:54Z)
- 3D Moments from Near-Duplicate Photos [67.15199743223332]
3D Moments is a new computational photography effect.
We produce a video that smoothly interpolates the scene motion from the first photo to the second.
Our system produces photorealistic space-time videos with motion parallax and scene dynamics.
arXiv Detail & Related papers (2022-05-12T17:56:18Z)
- Unsupervised object-centric video generation and decomposition in 3D [36.08064849807464]
We propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background.
Our model is trained from monocular videos without any supervision, yet learns to generate coherent 3D scenes containing several moving objects.
arXiv Detail & Related papers (2020-07-07T18:01:29Z)