Layered Neural Rendering for Retiming People in Video
- URL: http://arxiv.org/abs/2009.07833v2
- Date: Fri, 1 Oct 2021 01:15:41 GMT
- Title: Layered Neural Rendering for Retiming People in Video
- Authors: Erika Lu, Forrester Cole, Tali Dekel, Weidi Xie, Andrew Zisserman,
David Salesin, William T. Freeman, Michael Rubinstein
- Abstract summary: We present a method for retiming people in an ordinary, natural video.
We can temporally align different motions, change the speed of certain actions, or "erase" selected people from the video altogether.
A key property of our model is that it not only disentangles the direct motions of each person in the input video, but also correlates each person automatically with the scene changes they generate.
- Score: 108.85428504808318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method for retiming people in an ordinary, natural video --
manipulating and editing the time in which different motions of individuals in
the video occur. We can temporally align different motions, change the speed of
certain actions (speeding up/slowing down, or entirely "freezing" people), or
"erase" selected people from the video altogether. We achieve these effects
computationally via a dedicated learning-based layered video representation,
where each frame in the video is decomposed into separate RGBA layers,
representing the appearance of different people in the video. A key property of
our model is that it not only disentangles the direct motions of each person in
the input video, but also correlates each person automatically with the scene
changes they generate -- e.g., shadows, reflections, and motion of loose
clothing. The layers can be individually retimed and recombined into a new
video, allowing us to achieve realistic, high-quality renderings of retiming
effects for real-world videos depicting complex actions and involving multiple
individuals, including dancing, trampoline jumping, or group running.
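The core idea of the abstract can be illustrated with a small sketch: decompose each frame into per-person RGBA layers, give each layer its own time-remapping function (freeze, speed up, align), and recomposite back-to-front. This is only a minimal illustration of the compositing step, not the paper's actual neural pipeline; `retime_composite` and the array layout are assumptions for the example.

```python
import numpy as np

def retime_composite(layers, frame_idx, time_maps):
    """Composite per-person RGBA layers back-to-front, sampling each
    layer at its own (retimed) frame index.

    layers    : list of arrays, each of shape (T, H, W, 4) with RGB in
                [0, 1] and alpha in channel 3 (hypothetical layout).
    frame_idx : output frame index t.
    time_maps : list of callables; time_maps[i](t) returns the source
                frame index for layer i (freeze, speed-up, alignment).
    """
    H, W = layers[0].shape[1:3]
    out = np.zeros((H, W, 3))
    for layer, remap in zip(layers, time_maps):
        frame = layer[remap(frame_idx) % layer.shape[0]]
        rgb, a = frame[..., :3], frame[..., 3:4]
        out = a * rgb + (1.0 - a) * out  # standard "over" compositing
    return out
```

For example, passing `lambda t: 10` as a layer's time map freezes that person at frame 10, while omitting a layer from the list erases that person from the output entirely.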
Related papers
- DeCo: Decoupled Human-Centered Diffusion Video Editing with Motion Consistency [66.49423641279374]
We introduce DeCo, a novel video editing framework specifically designed to treat humans and the background as separate editable targets.
We propose a decoupled dynamic human representation that utilizes a human body prior to generate tailored humans.
We extend the calculation of score distillation sampling into normal space and image space to enhance the texture of humans during the optimization.
arXiv Detail & Related papers (2024-08-14T11:53:40Z) - MotionDirector: Motion Customization of Text-to-Video Diffusion Models [24.282240656366714]
Motion Customization aims to adapt existing text-to-video diffusion models to generate videos with customized motion.
We propose MotionDirector, with a dual-path LoRAs architecture to decouple the learning of appearance and motion.
Our method also supports various downstream applications, such as the mixing of different videos with their appearance and motion respectively, and animating a single image with customized motions.
arXiv Detail & Related papers (2023-10-12T16:26:18Z) - Hashing Neural Video Decomposition with Multiplicative Residuals in
Space-Time [14.015909536844337]
We present a video decomposition method that facilitates layer-based editing of videos with temporally varying lighting effects.
Our method efficiently learns layer-based neural representations of a 1080p video in 25s per frame via coordinate hashing.
We propose to adopt evaluation metrics for objectively assessing the consistency of video editing.
arXiv Detail & Related papers (2023-09-25T10:36:14Z) - Copy Motion From One to Another: Fake Motion Video Generation [53.676020148034034]
A compelling application of artificial intelligence is to generate a video of a target person performing arbitrary desired motion.
Current methods typically employ GANs with an L2 loss to assess the authenticity of the generated videos.
We propose a theoretically motivated Gromov-Wasserstein loss that facilitates learning the mapping from a pose to a foreground image.
Our method is able to generate realistic target person videos, faithfully copying complex motions from a source person.
arXiv Detail & Related papers (2022-05-03T08:45:22Z) - Unsupervised Video Interpolation by Learning Multilayered 2.5D Motion
Fields [75.81417944207806]
This paper presents a self-supervised approach to video frame interpolation that requires only a single video.
We parameterize the video motions by solving an ordinary differential equation (ODE) defined on a time-varying motion field.
This implicit neural representation learns the video as a space-time continuum, allowing frame synthesis at any temporal resolution.
arXiv Detail & Related papers (2022-04-21T06:17:05Z) - Layered Neural Atlases for Consistent Video Editing [37.69447642502351]
We present a method that decomposes, or "unwraps", an input video into a set of layered 2D atlases.
For each pixel in the video, our method estimates its corresponding 2D coordinate in each of the atlases.
We design our atlases to be interpretable and semantic, which facilitates easy and intuitive editing in the atlas domain.
arXiv Detail & Related papers (2021-09-23T14:58:59Z) - Omnimatte: Associating Objects and Their Effects in Video [100.66205249649131]
Scene effects related to objects in video are typically overlooked by computer vision methods.
In this work, we take a step towards solving this novel problem of automatically associating objects with their effects in video.
Our model is trained only on the input video in a self-supervised manner, without any manual labels, and is generic -- it produces omnimattes automatically for arbitrary objects and a variety of effects.
arXiv Detail & Related papers (2021-05-14T17:57:08Z) - First Order Motion Model for Image Animation [90.712718329677]
Image animation consists of generating a video sequence so that an object in a source image is animated according to the motion of a driving video.
Our framework addresses this problem without using any annotation or prior information about the specific object to animate.
arXiv Detail & Related papers (2020-02-29T07:08:56Z)
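A recurring idea in the list above (e.g., Layered Neural Atlases) is reconstructing or editing a video by mapping each pixel to a coordinate in a shared 2D atlas and sampling the (possibly edited) atlas there. A minimal sketch of that sampling step, assuming per-pixel UV coordinates in [0, 1] and nearest-neighbour lookup (the papers use learned, higher-quality sampling; `apply_atlas_edit` is a hypothetical name):

```python
import numpy as np

def apply_atlas_edit(uv, atlas):
    """Reconstruct a video frame by sampling a 2D atlas at per-pixel
    UV coordinates.

    uv    : (H, W, 2) array of atlas coordinates in [0, 1] (assumed range).
    atlas : (Ha, Wa, 3) atlas image, possibly edited by the user.
    """
    Ha, Wa = atlas.shape[:2]
    # Nearest-neighbour sampling for brevity; bilinear in practice.
    ys = np.clip((uv[..., 1] * (Ha - 1)).round().astype(int), 0, Ha - 1)
    xs = np.clip((uv[..., 0] * (Wa - 1)).round().astype(int), 0, Wa - 1)
    return atlas[ys, xs]
```

Because an edit made once in the atlas is re-sampled into every frame through the same UV maps, the edit propagates consistently through the whole video.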
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.