PanFlow: Decoupled Motion Control for Panoramic Video Generation
- URL: http://arxiv.org/abs/2512.00832v1
- Date: Sun, 30 Nov 2025 11:03:31 GMT
- Title: PanFlow: Decoupled Motion Control for Panoramic Video Generation
- Authors: Cheng Zhang, Hanwen Liang, Donny Y. Chen, Qianyi Wu, Konstantinos N. Plataniotis, Camilo Cruz Gambardella, Jianfei Cai
- Abstract summary: PanFlow is a novel approach that exploits the spherical nature of panoramas to decouple the highly dynamic camera rotation from the input optical flow condition. To support effective training, we curate a large-scale, motion-rich panoramic video dataset with frame-level pose and flow annotations.
- Score: 52.47902086091194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Panoramic video generation has attracted growing attention due to its applications in virtual reality and immersive media. However, existing methods lack explicit motion control and struggle to generate scenes with large and complex motions. We propose PanFlow, a novel approach that exploits the spherical nature of panoramas to decouple the highly dynamic camera rotation from the input optical flow condition, enabling more precise control over large and dynamic motions. We further introduce a spherical noise warping strategy to promote loop consistency in motion across panorama boundaries. To support effective training, we curate a large-scale, motion-rich panoramic video dataset with frame-level pose and flow annotations. We also showcase the effectiveness of our method in various applications, including motion transfer and video editing. Extensive experiments demonstrate that PanFlow significantly outperforms prior methods in motion fidelity, visual quality, and temporal coherence. Our code, dataset, and models are available at https://github.com/chengzhag/PanFlow.
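The abstract's two key ideas, decoupling camera rotation from the flow condition and keeping noise loop-consistent across panorama boundaries, both rest on the fact that a yaw rotation of a sphere is a pure horizontal shift in equirectangular coordinates. The sketch below illustrates that geometric property on a noise field; it is an illustrative assumption, not PanFlow's actual implementation, and `yaw_rotate_equirect` is a hypothetical helper name.

```python
import numpy as np

def yaw_rotate_equirect(pano: np.ndarray, yaw_rad: float) -> np.ndarray:
    """Rotate an equirectangular panorama about the vertical axis.

    On the sphere, a pure yaw is a horizontal shift of the equirectangular
    grid, so the left/right seam stays loop-consistent by construction.
    (Sketch only; PanFlow's spherical noise warping handles full rotations.)
    """
    h, w = pano.shape[:2]
    shift = int(round(yaw_rad / (2 * np.pi) * w))  # radians -> columns
    return np.roll(pano, shift, axis=1)

# Warping per-frame noise by a known camera yaw keeps the noise field
# attached to the scene rather than the camera -- the intuition behind
# decoupling camera rotation from the optical-flow condition.
noise = np.random.default_rng(0).standard_normal((64, 128))
warped = yaw_rotate_equirect(noise, np.pi / 2)  # quarter turn = 32 columns
```

General rotations (pitch/roll) are not plain shifts and require resampling the sphere, which is where a dedicated spherical warping strategy becomes necessary.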
Related papers
- Wan-Move: Motion-controllable Video Generation via Latent Trajectory Guidance [107.25252623824296]
Wan-Move is a framework that brings motion control to video generative models. Our core idea is to make the original condition features motion-aware for guiding video generation. Through scaled training, Wan-Move generates 5-second, 480p videos whose motion controllability rivals Kling 1.5's commercial Motion Brush.
arXiv Detail & Related papers (2025-12-09T16:13:55Z) - ATI: Any Trajectory Instruction for Controllable Video Generation [25.249489701215467]
We propose a unified framework for motion control in video generation that seamlessly integrates camera movement, object-level translation, and fine-grained local motion. Our approach offers a cohesive solution by projecting user-defined trajectories into the latent space of pre-trained image-to-video generation models.
arXiv Detail & Related papers (2025-05-28T23:49:18Z) - MotionPro: A Precise Motion Controller for Image-to-Video Generation [108.63100943070592]
We present MotionPro, a precise motion controller for image-to-video (I2V) generation. Region-wise trajectories and motion masks are used to regulate fine-grained motion synthesis. Experiments conducted on WebVid-10M and MC-Bench demonstrate the effectiveness of MotionPro.
arXiv Detail & Related papers (2025-05-26T17:59:03Z) - MotionAgent: Fine-grained Controllable Video Generation via Motion Field Agent [55.15697390165972]
We propose MotionAgent, enabling fine-grained motion control for text-guided image-to-video generation. The key technique is a motion field agent that converts motion information in text prompts into explicit motion fields. We construct a subset of VBench to evaluate the alignment between motion information in the text and the generated video; MotionAgent outperforms other advanced models in motion-generation accuracy.
arXiv Detail & Related papers (2025-02-05T14:26:07Z) - MotionFlow: Attention-Driven Motion Transfer in Video Diffusion Models [3.2311303453753033]
We introduce MotionFlow, a novel framework designed for motion transfer in video diffusion models. Our method utilizes cross-attention maps to accurately capture and manipulate spatial and temporal dynamics. Our experiments demonstrate that MotionFlow significantly outperforms existing models in both fidelity and versatility, even during drastic scene alterations.
arXiv Detail & Related papers (2024-12-06T18:59:12Z) - Motion Prompting: Controlling Video Generation with Motion Trajectories [57.049252242807874]
We train a video generation model conditioned on sparse or dense video trajectories. We translate high-level user requests into detailed, semi-dense motion prompts. We demonstrate our approach through various applications, including camera and object motion control, "interacting" with an image, motion transfer, and image editing.
arXiv Detail & Related papers (2024-12-03T18:59:56Z) - AnimateAnything: Consistent and Controllable Animation for Video Generation [24.576022028967195]
We present AnimateAnything, a unified controllable video generation approach.
It facilitates precise and consistent video manipulation across various conditions.
Experiments demonstrate that our method outperforms the state-of-the-art approaches.
arXiv Detail & Related papers (2024-11-16T16:36:49Z) - Puppet-Master: Scaling Interactive Video Generation as a Motion Prior for Part-Level Dynamics [79.4785166021062]
We introduce Puppet-Master, an interactive video generator that captures the internal, part-level motion of objects. We demonstrate that Puppet-Master learns to generate part-level motions, unlike other motion-conditioned video generators. Puppet-Master generalizes well to out-of-domain real images, outperforming existing methods on real-world benchmarks.
arXiv Detail & Related papers (2024-08-08T17:59:38Z) - Animating Pictures with Eulerian Motion Fields [90.30598913855216]
We show a fully automatic method for converting a still image into a realistic animated looping video.
We target scenes with continuous fluid motion, such as flowing water and billowing smoke.
We propose a novel video looping technique that flows features both forward and backward in time and then blends the results.
arXiv Detail & Related papers (2020-11-30T18:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.