MotionEdit: Benchmarking and Learning Motion-Centric Image Editing
- URL: http://arxiv.org/abs/2512.10284v2
- Date: Sun, 14 Dec 2025 01:57:25 GMT
- Title: MotionEdit: Benchmarking and Learning Motion-Centric Image Editing
- Authors: Yixin Wan, Lei Ke, Wenhao Yu, Kai-Wei Chang, Dong Yu
- Abstract summary: We introduce MotionEdit, a novel dataset for motion-centric image editing. MotionEdit provides high-fidelity image pairs depicting realistic motion transformations extracted from continuous videos. We propose MotionNFT to compute motion alignment rewards based on how well the motion flow between input and model-edited images matches the ground-truth motion.
- Score: 81.28392925790568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce MotionEdit, a novel dataset for motion-centric image editing: the task of modifying subject actions and interactions while preserving identity, structure, and physical plausibility. Unlike existing image editing datasets that focus on static appearance changes or contain only sparse, low-quality motion edits, MotionEdit provides high-fidelity image pairs depicting realistic motion transformations extracted and verified from continuous videos. This new task is not only scientifically challenging but also practically significant, powering downstream applications such as frame-controlled video synthesis and animation. To evaluate model performance on the novel task, we introduce MotionEdit-Bench, a benchmark that challenges models on motion-centric edits and measures model performance with generative, discriminative, and preference-based metrics. Benchmark results reveal that motion editing remains highly challenging for existing state-of-the-art diffusion-based editing models. To address this gap, we propose MotionNFT (Motion-guided Negative-aware Fine Tuning), a post-training framework that computes motion alignment rewards based on how well the motion flow between input and model-edited images matches the ground-truth motion, guiding models toward accurate motion transformations. Extensive experiments on FLUX.1 Kontext and Qwen-Image-Edit show that MotionNFT consistently improves editing quality and motion fidelity of both base models on the motion editing task without sacrificing general editing ability, demonstrating its effectiveness. Our code is at https://github.com/elainew728/motion-edit/.
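To make the reward concrete: the sketch below is a minimal, hypothetical rendering of a motion-alignment reward in the spirit of MotionNFT. The `estimate_flow` callable (e.g., a wrapper around an off-the-shelf optical-flow model such as RAFT) and the exponential error-to-reward mapping are assumptions for illustration; the released implementation may define the reward differently.

```python
import torch

def motion_alignment_reward(src, edited, target, estimate_flow, beta=0.1):
    """Reward is high when the src->edited motion matches the src->target motion.

    src, edited, target: [3, H, W] image tensors.
    estimate_flow: hypothetical callable mapping two images to a [2, H, W]
    optical-flow field (e.g., a RAFT wrapper).
    """
    flow_pred = estimate_flow(src, edited)   # motion implied by the model's edit
    flow_gt = estimate_flow(src, target)     # ground-truth motion from the video pair
    # Average endpoint error (EPE) between the two flow fields.
    epe = torch.linalg.norm(flow_pred - flow_gt, dim=0).mean()
    # Map error to a bounded reward in (0, 1]; beta sets the tolerance.
    return torch.exp(-beta * epe)
```

In a negative-aware fine-tuning loop, low-reward edits could serve as negatives while high-reward edits are reinforced; again, this is a reading of the abstract rather than the paper's exact objective.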
Related papers
- MotionV2V: Editing Motion in a Video [53.791975554391534]
We propose modifying video motion by editing sparse trajectories extracted from the input. We term the deviation between the input and output trajectories a "motion edit". Our approach allows for edits that start at any timestamp and propagate naturally (a toy sketch of this trajectory-deviation measure follows this entry).
arXiv Detail & Related papers (2025-11-25T18:57:25Z)
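As a toy illustration of the "motion edit" notion above (the deviation between input and output trajectories), the following sketch scores two sparse trajectory sets by their mean Euclidean deviation. The array shapes and the plain Euclidean metric are assumptions for illustration, not MotionV2V's actual edit representation.

```python
import numpy as np

def motion_edit_magnitude(traj_in, traj_out):
    """Mean Euclidean deviation between matched sparse trajectories.

    traj_in, traj_out: [N, T, 2] arrays holding N point trajectories
    over T frames (x, y per frame); the pairing is assumed known.
    """
    assert traj_in.shape == traj_out.shape
    # Per-point, per-frame displacement between the two trajectory sets.
    deviation = np.linalg.norm(traj_out - traj_in, axis=-1)  # [N, T]
    return deviation.mean()
```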
- SimMotionEdit: Text-Based Human Motion Editing with Motion Similarity Prediction [20.89960239295474]
We introduce a related task, motion similarity prediction, and propose a multi-task training paradigm. We train the model jointly on motion editing and motion similarity prediction to foster the learning of semantically meaningful representations (a minimal sketch of such a joint objective follows this entry). Experiments demonstrate the state-of-the-art performance of our approach in both editing alignment and fidelity.
arXiv Detail & Related papers (2025-03-23T21:29:37Z)
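A multi-task objective of the kind described above can be written in a few lines. The sketch below is a generic weighted sum of an editing loss and a similarity-prediction loss; both terms and the trade-off weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch.nn.functional as F

def simmotionedit_style_loss(pred_motion, target_motion,
                             pred_sim, target_sim, lam=0.5):
    """Joint objective: motion editing plus motion similarity prediction.

    lam weights the auxiliary similarity task (value assumed).
    """
    edit_loss = F.mse_loss(pred_motion, target_motion)  # editing reconstruction term
    sim_loss = F.mse_loss(pred_sim, target_sim)         # auxiliary similarity regression
    return edit_loss + lam * sim_loss
```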
- Edit as You See: Image-guided Video Editing via Masked Motion Modeling [18.89936405508778]
We propose a novel Image-guided Video Editing Diffusion model, termed IVEDiff. IVEDiff is built on top of image editing models and is equipped with learnable motion modules to maintain the temporal consistency of edited videos. Our method generates temporally smooth edited videos while robustly handling a variety of editing targets at high quality.
arXiv Detail & Related papers (2025-01-08T07:52:12Z)
- A Plug-and-Play Physical Motion Restoration Approach for In-the-Wild High-Difficulty Motions [56.709280823844374]
We introduce a mask-based motion correction module (MCM) that leverages motion context and video masks to repair flawed motions. We also propose a physics-based motion transfer module (PTM), which employs a pretrain-and-adapt approach for motion imitation. Our approach is designed as a plug-and-play module that physically refines video motion-capture results, including high-difficulty in-the-wild motions.
arXiv Detail & Related papers (2024-12-23T08:26:00Z)
- MotionFix: Text-Driven 3D Human Motion Editing [52.11745508960547]
Key challenges include the scarcity of training data and the need to design a model that accurately edits the source motion.
We propose a methodology to semi-automatically collect a dataset of triplets comprising (i) a source motion, (ii) a target motion, and (iii) an edit text (sketched as a simple data record after this entry).
Access to this data allows us to train a conditional diffusion model, TMED, that takes both the source motion and the edit text as input.
arXiv Detail & Related papers (2024-08-01T16:58:50Z)
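The triplet structure MotionFix collects maps naturally onto a small data record. The sketch below is a hypothetical container, not the paper's released schema; the field names and motion tensor shapes are assumptions.

```python
from dataclasses import dataclass
import torch

@dataclass
class MotionEditTriplet:
    """One MotionFix-style training example (field names are illustrative)."""
    source_motion: torch.Tensor  # [T, D] pose sequence to be edited
    target_motion: torch.Tensor  # [T, D] pose sequence after the edit
    edit_text: str               # natural-language description of the edit

# A conditional model like TMED would consume (source_motion, edit_text)
# and be supervised toward target_motion.
example = MotionEditTriplet(
    source_motion=torch.zeros(60, 66),
    target_motion=torch.zeros(60, 66),
    edit_text="raise the left arm higher while walking",
)
```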
- MotionFollower: Editing Video Motion via Lightweight Score-Guided Diffusion [94.66090422753126]
MotionFollower is a lightweight score-guided diffusion model for video motion editing.
It delivers superior motion editing performance and is unique in supporting large camera movements and actions.
Compared with MotionEditor, the previous most advanced motion editing model, MotionFollower achieves an approximately 80% reduction in GPU memory usage.
arXiv Detail & Related papers (2024-05-30T17:57:30Z)
- Motion Flow Matching for Human Motion Synthesis and Editing [75.13665467944314]
We propose Motion Flow Matching, a novel generative model for human motion generation featuring efficient sampling and effective motion editing (a few-step sampler sketch follows this entry).
Our method reduces sampling complexity from the thousand steps of previous diffusion models to just ten, while achieving comparable performance on text-to-motion and action-to-motion generation benchmarks.
arXiv Detail & Related papers (2023-12-14T12:57:35Z)
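The ten-step sampling that flow matching enables can be illustrated with a plain Euler integrator. The sketch below assumes a hypothetical velocity-field network `v_theta(x, t)`; the step count, schedule, and network interface are illustrative assumptions, not the paper's implementation.

```python
import torch

@torch.no_grad()
def flow_matching_sample(v_theta, shape, steps=10, device="cpu"):
    """Integrate dx/dt = v_theta(x, t) from t=0 (noise) to t=1 (data)
    with a fixed-step Euler scheme; few steps suffice when the learned
    flow is (approximately) straight."""
    x = torch.randn(shape, device=device)  # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + dt * v_theta(x, t)         # one Euler step along the flow
    return x
```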
- MotionEditor: Editing Video Motion via Content-Aware Diffusion [96.825431998349]
MotionEditor is a diffusion model for video motion editing.
It incorporates a novel content-aware motion adapter into ControlNet to capture temporal motion correspondence.
arXiv Detail & Related papers (2023-11-30T18:59:33Z)
- Motion-Conditioned Image Animation for Video Editing [65.90398261600964]
MoCA is a Motion-Conditioned Image Animation approach for video editing.
We present a comprehensive human evaluation of the latest video editing methods along with MoCA, on our proposed benchmark.
MoCA establishes a new state of the art, demonstrating a higher human-preference win rate and outperforming notable recent approaches.
arXiv Detail & Related papers (2023-11-30T18:59:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.