PREF: Predictability Regularized Neural Motion Fields
- URL: http://arxiv.org/abs/2209.10691v2
- Date: Wed, 5 Apr 2023 19:26:09 GMT
- Title: PREF: Predictability Regularized Neural Motion Fields
- Authors: Liangchen Song, Xuan Gong, Benjamin Planche, Meng Zheng, David
Doermann, Junsong Yuan, Terrence Chen, Ziyan Wu
- Abstract summary: Knowing 3D motions in a dynamic scene is essential to many vision applications.
We leverage a neural motion field for estimating the motion of all points in a multiview setting.
We propose to regularize the estimated motion to be predictable.
- Score: 68.60019434498703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowing the 3D motions in a dynamic scene is essential to many vision
applications. Recent progress is mainly focused on estimating the activity of
some specific elements like humans. In this paper, we leverage a neural motion
field for estimating the motion of all points in a multiview setting. Modeling
the motion from a dynamic scene with multiview data is challenging due to the
ambiguities in points of similar color and points with time-varying color. We
propose to regularize the estimated motion to be predictable. If the motion
from previous frames is known, then the motion in the near future should be
predictable. Therefore, we introduce a predictability regularization by first
conditioning the estimated motion on latent embeddings, then by adopting a
predictor network to enforce predictability on the embeddings. The proposed
framework PREF (Predictability REgularized Fields) achieves on par or better
results than state-of-the-art neural motion field-based dynamic scene
representation methods, while requiring no prior knowledge of the scene.
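
The core mechanism described above lends itself to a short illustration. The following is a minimal PyTorch-style sketch, not the authors' implementation: a motion-field MLP is conditioned on per-frame latent embeddings, and a small predictor network turns "predictability" into a differentiable penalty on those embeddings. All module names, dimensions, and architectural choices here are assumptions made for this example.

```python
# Illustrative sketch of the idea in the abstract (NOT the authors' code):
# a motion field conditioned on per-frame latent embeddings, plus a predictor
# network that regularizes those embeddings to be predictable from their
# recent history. All names, sizes, and architecture choices are assumptions.
import torch
import torch.nn as nn

class MotionField(nn.Module):
    """Maps a 3D point and a per-frame latent code to a 3D displacement."""
    def __init__(self, embed_dim=64, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, points, frame_embedding):               # (N, 3), (D,)
        cond = frame_embedding.expand(points.shape[0], -1)    # broadcast latent to all points
        return self.net(torch.cat([points, cond], dim=-1))    # (N, 3) displacements

class LatentPredictor(nn.Module):
    """Predicts the embedding of frame t from the K preceding embeddings."""
    def __init__(self, embed_dim=64, history=3, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history * embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, past):                                  # (K, D)
        return self.net(past.flatten())                       # (D,)

def predictability_loss(embeddings, predictor, history=3):
    """Sum over frames of ||predictor(z_{t-K..t-1}) - z_t||^2."""
    loss = embeddings.new_zeros(())
    for t in range(history, embeddings.shape[0]):
        pred = predictor(embeddings[t - history:t])
        loss = loss + torch.mean((pred - embeddings[t]) ** 2)
    return loss

# Toy usage: 10 frames, 64-D per-frame latents. In training, the
# regularizer would be added (with a weight) to the reconstruction objective.
frame_embeddings = nn.Parameter(torch.randn(10, 64))
field = MotionField()
predictor = LatentPredictor()
points = torch.rand(1024, 3)
flow_at_t5 = field(points, frame_embeddings[5])               # motion at frame 5
reg = predictability_loss(frame_embeddings, predictor)
(flow_at_t5.abs().mean() + 0.1 * reg).backward()
```

In the actual method the per-frame latents would be optimized jointly with the scene representation; the sketch only shows how a predictor network can enforce that each frame's embedding is predictable from its recent history.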
Related papers
- Degrees of Freedom Matter: Inferring Dynamics from Point Trajectories [28.701879490459675]
We aim to learn an implicit motion field parameterized by a neural network to predict the movement of novel points within the same domain.
We exploit the intrinsic regularization provided by SIREN and modify the input layer to produce a temporally smooth motion field.
Our experiments assess the model's performance in predicting unseen point trajectories and its application in temporal mesh alignment with deformation.
arXiv Detail & Related papers (2024-06-05T21:02:10Z)
- DEMOS: Dynamic Environment Motion Synthesis in 3D Scenes via Local Spherical-BEV Perception [54.02566476357383]
We propose the first Dynamic Environment MOtion Synthesis framework (DEMOS) to predict future motion instantly according to the current scene.
We then use it to dynamically update the latent motion for final motion synthesis.
The results show our method significantly outperforms previous works and performs well in dynamic environments.
arXiv Detail & Related papers (2024-03-04T05:38:16Z)
- HumanMAC: Masked Motion Completion for Human Motion Prediction [62.279925754717674]
Human motion prediction is a classical problem in computer vision and computer graphics.
Previous efforts achieve strong empirical performance based on an encoding-decoding style.
In this paper, we propose a novel framework from a new perspective.
arXiv Detail & Related papers (2023-02-07T18:34:59Z)
- Investigating Pose Representations and Motion Contexts Modeling for 3D Motion Prediction [63.62263239934777]
We conduct an in-depth study of various pose representations, focusing on their effects on the motion prediction task.
We propose a novel RNN architecture termed AHMR (Attentive Hierarchical Motion Recurrent network) for motion prediction.
Our approach outperforms state-of-the-art methods in short-term prediction and achieves substantially better long-term prediction.
arXiv Detail & Related papers (2021-12-30T10:45:22Z)
- Learning to Predict Diverse Human Motions from a Single Image via Mixture Density Networks [9.06677862854201]
We propose a novel approach to predicting future human motions from a single image, using mixture density network (MDN) modeling.
In contrast to most existing deep human motion prediction approaches, the multimodal nature of the MDN enables the generation of diverse future motion hypotheses.
Our trained model directly takes an image as input and generates multiple plausible motions that satisfy the given condition.
arXiv Detail & Related papers (2021-09-13T08:49:33Z)
- Generating Smooth Pose Sequences for Diverse Human Motion Prediction [90.45823619796674]
We introduce a unified deep generative network for both diverse and controllable motion prediction.
Our experiments on two standard benchmark datasets, Human3.6M and HumanEva-I, demonstrate that our approach outperforms the state-of-the-art baselines in terms of both sample diversity and accuracy.
arXiv Detail & Related papers (2021-08-19T00:58:00Z)
- Panoptic Segmentation Forecasting [71.75275164959953]
Our goal is to forecast the near future given a set of recent observations.
We think this ability to forecast, i.e., to anticipate, is integral for the success of autonomous agents.
We develop a two-component model: one component learns the dynamics of background "stuff" by anticipating odometry, while the other anticipates the dynamics of detected "things".
arXiv Detail & Related papers (2021-04-08T17:59:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.