Neutral Editing Framework for Diffusion-based Video Editing
- URL: http://arxiv.org/abs/2312.06708v1
- Date: Sun, 10 Dec 2023 16:28:32 GMT
- Title: Neutral Editing Framework for Diffusion-based Video Editing
- Authors: Sunjae Yoon, Gwanhyeong Koo, Ji Woo Hong, Chang D. Yoo
- Abstract summary: This paper proposes the Neutral Editing (NeuEdit) framework to enable complex non-rigid editing.
NeuEdit introduces a concept of `neutralization' that enhances the tuning-editing process of diffusion-based editing systems.
Experiments on numerous videos demonstrate the adaptability and effectiveness of the NeuEdit framework.
- Score: 24.370584544151424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-conditioned image editing has succeeded in various types of editing
based on a diffusion framework. Unfortunately, this success did not carry over
to video, which remains challenging. Existing video editing systems are still
limited to rigid-type editing such as style transfer and object overlay. To
this end, this paper proposes the Neutral Editing (NeuEdit) framework to enable
complex non-rigid editing, such as changing the motion of a person or object in
a video, which has not been attempted before. NeuEdit introduces a concept of
`neutralization' that enhances the tuning-editing process of diffusion-based
editing systems in a model-agnostic manner by leveraging the input video and
text without any other auxiliary aids (e.g., visual masks, video captions).
Extensive experiments on numerous videos demonstrate the adaptability and
effectiveness of the NeuEdit framework. The project website is available at:
https://neuedit.github.io
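The abstract frames NeuEdit as a model-agnostic enhancement of the tuning-editing process used by diffusion-based editors. As background for that process, here is a minimal, self-contained sketch of the one-shot tune-then-edit loop; the toy denoiser, tensor shapes, and additive noise schedule are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the one-shot tune-then-edit paradigm that NeuEdit
# enhances. Everything here (TinyDenoiser, shapes, the additive noise
# "schedule") is an illustrative assumption, not the paper's code; a real
# system would fine-tune a pretrained text-to-image/video diffusion model.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for a text-conditioned video diffusion UNet."""
    def __init__(self, channels: int = 3, text_dim: int = 16):
        super().__init__()
        self.proj_text = nn.Linear(text_dim, channels)
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x, t, text_emb):
        # x: (B, C, F, H, W) noisy video; t is ignored in this toy model.
        cond = self.proj_text(text_emb)[:, :, None, None, None]
        return self.net(x + cond)  # predicts the injected noise

def tune_on_source(model, video, src_emb, steps=100, lr=1e-4):
    """Stage 1: one-shot fine-tuning on the single source video."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        t = torch.randint(0, 1000, (video.shape[0],))
        noise = torch.randn_like(video)
        loss = ((model(video + noise, t, src_emb) - noise) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()

@torch.no_grad()
def edit(model, video, tgt_emb, steps=50):
    """Stage 2: push the source video toward noise, then denoise it
    under the *target* prompt embedding to realize the edit."""
    x = video + torch.randn_like(video)  # toy stand-in for DDIM inversion
    for i in reversed(range(steps)):
        t = torch.full((video.shape[0],), i)
        x = x - model(x, t, tgt_emb) / steps  # crude denoising step
    return x

video = torch.randn(1, 3, 8, 32, 32)              # (B, C, frames, H, W)
src_emb, tgt_emb = torch.randn(1, 16), torch.randn(1, 16)
model = TinyDenoiser()
tune_on_source(model, video, src_emb)             # tune on source video+text
edited = edit(model, video, tgt_emb)              # edit with target text
```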
Related papers
- GenVideo: One-shot Target-image and Shape Aware Video Editing using T2I Diffusion Models [2.362412515574206]
We propose "GenVideo" for editing videos leveraging target-image aware T2I models.
Our approach handles edits with target objects of varying shapes and sizes while maintaining the temporal consistency of the edit.
arXiv Detail & Related papers (2024-04-18T23:25:27Z)
- UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing [28.140945021777878]
We present UniEdit, a tuning-free framework that supports both video motion and appearance editing.
To realize motion editing while preserving source video content, we introduce auxiliary motion-reference and reconstruction branches.
The obtained features are then injected into the main editing path via temporal and spatial self-attention layers.
arXiv Detail & Related papers (2024-02-20T17:52:12Z)
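UniEdit's summary above hinges on injecting auxiliary-branch features through self-attention. A minimal sketch of that injection pattern, assuming a PyTorch-style attention module: queries stay on the editing path while keys and values are computed from the auxiliary branch. The module and tensor names here are hypothetical, not the paper's code.

```python
# Toy sketch of attention-based feature injection: the main editing path
# keeps its queries, while keys/values come from the auxiliary branch
# (reconstruction or motion reference). Names and shapes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InjectedSelfAttention(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, main_feats, ref_feats=None):
        # main_feats: (B, N, D) tokens of the editing path;
        # ref_feats:  (B, N, D) tokens from the auxiliary branch.
        source = main_feats if ref_feats is None else ref_feats
        q = self.q(main_feats)                 # queries: editing path
        k, v = self.k(source), self.v(source)  # keys/values: injected
        return F.scaled_dot_product_attention(q, k, v)

attn = InjectedSelfAttention()
edit_tokens = torch.randn(1, 128, 64)    # main editing path features
recon_tokens = torch.randn(1, 128, 64)   # reconstruction-branch features
out = attn(edit_tokens, ref_feats=recon_tokens)  # (1, 128, 64)
```

- DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing [66.43179841884098]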
Large-scale Text-to-Image (T2I) diffusion models have revolutionized image generation over the last few years.
We propose DiffEditor to rectify two weaknesses in existing diffusion-based image editing.
Our method can efficiently achieve state-of-the-art performance on various fine-grained image editing tasks.
arXiv Detail & Related papers (2024-02-04T18:50:29Z)
- DiffusionAtlas: High-Fidelity Consistent Diffusion Video Editing [27.014978053413788]
We present a diffusion-based video editing framework, DiffusionAtlas, which can achieve both frame consistency and high fidelity in object appearance.
Our method leverages a visual-temporal diffusion model to edit objects directly on the diffusion atlases, ensuring coherent object identity across frames.
arXiv Detail & Related papers (2023-12-05T23:40:30Z)
- MotionEditor: Editing Video Motion via Content-Aware Diffusion [96.825431998349]
MotionEditor is a diffusion model for video motion editing.
It incorporates a novel content-aware motion adapter into ControlNet to capture temporal motion correspondence.
arXiv Detail & Related papers (2023-11-30T18:59:33Z)
- Editing 3D Scenes via Text Prompts without Retraining [80.57814031701744]
DN2N is a text-driven editing method that directly produces a NeRF model with universal editing capabilities.
Our method employs off-the-shelf text-based editing models of 2D images to modify the 3D scene images.
Our method achieves multiple editing types, including but not limited to appearance editing, weather transition, material changing, and style transfer.
arXiv Detail & Related papers (2023-09-10T02:31:50Z)
- MagicProp: Diffusion-based Video Editing via Motion-aware Appearance Propagation [74.32046206403177]
MagicProp disentangles the video editing process into two stages: appearance editing and motion-aware appearance propagation.
In the first stage, MagicProp selects a single frame from the input video and applies image-editing techniques to modify the content and/or style of the frame.
In the second stage, MagicProp employs the edited frame as an appearance reference and generates the remaining frames using an autoregressive rendering approach.
arXiv Detail & Related papers (2023-09-02T11:13:29Z)
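The two-stage MagicProp pipeline above lends itself to a short sketch: edit one reference frame, then propagate its appearance frame by frame. Both helpers below are hypothetical placeholders standing in for the image-editing model and the autoregressive renderer the abstract mentions.

```python
# Hedged sketch of a two-stage edit-then-propagate pipeline. Both helpers
# are hypothetical placeholders: a real system would call a text-guided
# image-editing model and a conditional, motion-aware renderer.
import torch

def edit_frame(frame: torch.Tensor, prompt: str) -> torch.Tensor:
    """Stage 1 placeholder: edit one reference frame (any image editor)."""
    return frame + 0.1 * torch.randn_like(frame)   # toy 'edit'

def propagate(prev_edited: torch.Tensor, next_src: torch.Tensor) -> torch.Tensor:
    """Stage 2 placeholder: render the next frame from the previously
    edited frame (appearance) and the source frame (motion/structure)."""
    return 0.5 * prev_edited + 0.5 * next_src      # toy 'renderer'

video = torch.randn(16, 3, 64, 64)                 # (frames, C, H, W)
edited = [edit_frame(video[0], "a red car")]       # edit the reference frame
for src_frame in video[1:]:                        # autoregressive pass
    edited.append(propagate(edited[-1], src_frame))
edited_video = torch.stack(edited)                 # (16, 3, 64, 64)
```

- MagicEdit: High-Fidelity and Temporally Coherent Video Editing [70.55750617502696]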
We present MagicEdit, a surprisingly simple yet effective solution to the text-guided video editing task.
We found that high-fidelity and temporally coherent video-to-video translation can be achieved by explicitly disentangling the learning of content, structure and motion signals during training.
arXiv Detail & Related papers (2023-08-28T17:56:22Z)
- StableVideo: Text-driven Consistency-aware Diffusion Video Editing [24.50933856309234]
Diffusion-based methods can generate realistic images and videos, but they struggle to edit existing objects in a video while preserving their appearance over time.
This paper introduces temporal dependency to existing text-driven diffusion models, which allows them to generate consistent appearance for the edited objects.
Building on this mechanism, we develop a text-driven video editing framework, StableVideo, that achieves consistency-aware video editing.
arXiv Detail & Related papers (2023-08-18T14:39:16Z)
- InFusion: Inject and Attention Fusion for Multi Concept Zero-Shot Text-based Video Editing [27.661609140918916]
InFusion is a framework for zero-shot text-based video editing.
It supports editing of multiple concepts, with pixel-level control over each concept mentioned in the editing prompt.
Our framework is a low-cost alternative to one-shot tuned models for editing since it does not require training.
arXiv Detail & Related papers (2023-07-22T17:05:47Z)
- Edit-A-Video: Single Video Editing with Object-Aware Consistency [49.43316939996227]
We propose a video editing framework given only a pretrained TTI model and a single <text, video> pair, which we term Edit-A-Video.
The framework consists of two stages: (1) inflating the 2D model into a 3D model by appending temporal modules and tuning it on the source video, and (2) inverting the source video into noise and editing it with the target text prompt and attention map injection.
We present extensive experimental results over various types of text and videos, and demonstrate the superiority of the proposed method compared to baselines in terms of background consistency, text alignment, and video editing quality.
arXiv Detail & Related papers (2023-03-14T14:35:59Z)
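Stage (1) of Edit-A-Video, inflating a 2D model with temporal modules, can be pictured with the toy block below; the layer choices, tensor shapes, and freezing policy are assumptions made for illustration, not the authors' architecture.

```python
# Hedged sketch of "2D -> 3D inflation": a pretrained 2D block is kept
# frozen and a new temporal module is appended so the model can attend
# across frames. Entirely illustrative; not the paper's code.
import torch
import torch.nn as nn

class InflatedBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.spatial = nn.Conv2d(dim, dim, 3, padding=1)   # "pretrained" 2D part
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial.requires_grad_(False)                 # freeze 2D weights

    def forward(self, x):
        # x: (B, F, C, H, W) video features
        b, f, c, h, w = x.shape
        x = self.spatial(x.reshape(b * f, c, h, w)).reshape(b, f, c, h, w)
        # attend over the frame axis at every spatial location
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        tokens, _ = self.temporal(tokens, tokens, tokens)
        return tokens.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)

block = InflatedBlock()
video_feats = torch.randn(1, 8, 64, 16, 16)  # (B, frames, C, H, W)
out = block(video_feats)                     # same shape, now frame-aware
```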
This list is automatically generated from the titles and abstracts of the papers on this site.