Unsupervised Video Prediction from a Single Frame by Estimating 3D
Dynamic Scene Structure
- URL: http://arxiv.org/abs/2106.09051v1
- Date: Wed, 16 Jun 2021 18:00:12 GMT
- Title: Unsupervised Video Prediction from a Single Frame by Estimating 3D
Dynamic Scene Structure
- Authors: Paul Henderson, Christoph H. Lampert, Bernd Bickel
- Abstract summary: We develop a model that first estimates the latent 3D structure of the scene, including the segmentation of any moving objects.
It then predicts future frames by simulating the object and camera dynamics, and rendering the resulting views.
Experiments on two challenging datasets of natural videos show that our model can estimate 3D structure and motion segmentation from a single frame.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our goal in this work is to generate realistic videos given just one initial
frame as input. Existing unsupervised approaches to this task do not consider
the fact that a video typically shows a 3D environment, and that this should
remain coherent from frame to frame even as the camera and objects move. We
address this by developing a model that first estimates the latent 3D structure
of the scene, including the segmentation of any moving objects. It then
predicts future frames by simulating the object and camera dynamics, and
rendering the resulting views. Importantly, it is trained end-to-end using only
the unsupervised objective of predicting future frames, without any 3D
information or segmentation annotations. Experiments on two challenging
datasets of natural videos show that our model can estimate 3D structure and
motion segmentation from a single frame, and hence generate plausible and
varied predictions.
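The abstract describes a three-stage pipeline: estimate the latent 3D structure and moving-object segmentation from the input frame, simulate the object and camera dynamics, then render the predicted views. Below is a minimal PyTorch sketch of that control flow; every module, dimension, and name (SingleFramePredictor, object_state, and so on) is a hypothetical stand-in for exposition, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SingleFramePredictor(nn.Module):
    """Illustrative three-stage pipeline: (1) encode one frame into a latent
    scene code, (2) roll object states forward with a learned dynamics step,
    (3) decode each state back to an image. All internals are stand-ins."""

    def __init__(self, latent_dim: int = 128, num_objects: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(          # frame -> latent scene code
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        state_dim = num_objects * 6            # hypothetical pose + velocity
        self.object_state = nn.Linear(latent_dim, state_dim)
        self.dynamics = nn.GRUCell(state_dim, state_dim)
        self.decoder = nn.Sequential(          # stand-in for a 3D renderer
            nn.Linear(latent_dim + state_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame: torch.Tensor, horizon: int = 5) -> torch.Tensor:
        scene = self.encoder(frame)            # latent 3D structure
        state = self.object_state(scene)       # initial per-object states
        frames = []
        for _ in range(horizon):
            state = self.dynamics(state, state)            # one dynamics step
            frames.append(self.decoder(torch.cat([scene, state], dim=1)))
        return torch.stack(frames, dim=1)      # (B, T, 3, H, W)

model = SingleFramePredictor()
video = model(torch.rand(2, 3, 32, 32))        # predict 5 future 32x32 frames
loss = nn.functional.mse_loss(video, torch.rand_like(video))
```

The final line stands in for the paper's only training signal: an end-to-end future-frame prediction loss, with no 3D or segmentation supervision.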
Related papers
- Tracking by 3D Model Estimation of Unknown Objects in Videos [122.56499878291916]
We argue that purely 2D representations are limited and instead propose to guide and improve 2D tracking with an explicit 3D object representation.
Our representation tackles a complex long-term dense correspondence problem between all 3D points on the object across all video frames.
The proposed optimization minimizes a novel loss function to estimate the best 3D shape, texture, and 6DoF pose.
arXiv Detail & Related papers (2023-04-13T11:32:36Z)
- Temporal View Synthesis of Dynamic Scenes through 3D Object Motion Estimation with Multi-Plane Images [8.185918509343816]
We study the problem of temporal view synthesis (TVS), where the goal is to predict the next frames of a video.
In this work, we consider the TVS of dynamic scenes in which both the user and objects are moving.
We predict the motion of objects by isolating and estimating the 3D object motion in the past frames and then extrapolating it.
arXiv Detail & Related papers (2022-08-19T17:40:13Z)
- NeuralDiff: Segmenting 3D objects that move in egocentric videos [92.95176458079047]
We study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground.
This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion.
In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them.
arXiv Detail & Related papers (2021-10-19T12:51:35Z)
- Video Autoencoder: self-supervised disentanglement of static 3D structure and motion [60.58836145375273]
A video autoencoder is proposed for learning disentangled representations of 3D structure and camera pose from videos.
The representation can be applied to a range of tasks, including novel view synthesis, camera pose estimation, and video generation by motion following.
arXiv Detail & Related papers (2021-10-06T17:57:42Z)
- Online Adaptation for Consistent Mesh Reconstruction in the Wild [147.22708151409765]
We pose video-based reconstruction as a self-supervised online adaptation problem applied to any incoming test video.
We demonstrate that our algorithm recovers temporally consistent and reliable 3D structures from videos of non-rigid objects including those of animals captured in the wild.
arXiv Detail & Related papers (2020-12-06T07:22:27Z)
- 3D-OES: Viewpoint-Invariant Object-Factorized Environment Simulators [24.181604511269096]
We propose an action-conditioned dynamics model that predicts scene changes caused by object and agent interactions in a viewpoint-invariant 3D neural scene representation space.
In this space, objects do not interfere with one another and their appearance persists over time and across viewpoints.
We show that our model's predictions generalize well across varying numbers and appearances of interacting objects, as well as across camera viewpoints.
arXiv Detail & Related papers (2020-11-12T16:15:52Z)
- Unsupervised object-centric video generation and decomposition in 3D [36.08064849807464]
We propose to model a video as the view seen while moving through a scene with multiple 3D objects and a 3D background.
Our model is trained from monocular videos without any supervision, yet learns to generate coherent 3D scenes containing several moving objects.
arXiv Detail & Related papers (2020-07-07T18:01:29Z)
- Occlusion resistant learning of intuitive physics from videos [52.25308231683798]
A key ability for artificial systems is to understand physical interactions between objects and to predict the future outcomes of a situation.
This ability, often referred to as intuitive physics, has recently received attention, and several methods have been proposed to learn these physical rules from video sequences.
arXiv Detail & Related papers (2020-04-30T19:35:54Z)
- Motion Segmentation using Frequency Domain Transformer Networks [29.998917158604694]
We propose a novel end-to-end learnable architecture that predicts the next frame by modeling foreground and background separately.
Our approach can outperform widely used video prediction methods such as the Video Ladder Network and Predictive Gated Pyramids on synthetic data.
arXiv Detail & Related papers (2020-04-18T15:05:11Z)
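The last entry predicts the next frame by modeling foreground and background separately; the compositing step this implies is a standard soft-mask blend. A minimal NumPy sketch follows, with all shapes and names chosen here for illustration; it does not model the paper's frequency-domain transformer, which would produce these inputs.

```python
import numpy as np

def composite_next_frame(fg_pred: np.ndarray, bg_pred: np.ndarray,
                         mask: np.ndarray) -> np.ndarray:
    """Blend separately predicted foreground and background into one frame.
    `mask` is a soft foreground probability map in [0, 1] of shape (H, W);
    `fg_pred` and `bg_pred` are (H, W, 3) float images."""
    mask = mask[..., None]                     # broadcast over color channels
    return mask * fg_pred + (1.0 - mask) * bg_pred

h, w = 64, 64
fg = np.random.rand(h, w, 3).astype(np.float32)   # predicted moving content
bg = np.random.rand(h, w, 3).astype(np.float32)   # predicted static scene
m = np.random.rand(h, w).astype(np.float32)       # soft motion segmentation
next_frame = composite_next_frame(fg, bg, m)      # (64, 64, 3)
```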