InstructVideo: Instructing Video Diffusion Models with Human Feedback
- URL: http://arxiv.org/abs/2312.12490v1
- Date: Tue, 19 Dec 2023 17:55:16 GMT
- Title: InstructVideo: Instructing Video Diffusion Models with Human Feedback
- Authors: Hangjie Yuan, Shiwei Zhang, Xiang Wang, Yujie Wei, Tao Feng, Yining
Pan, Yingya Zhang, Ziwei Liu, Samuel Albanie, Dong Ni
- Abstract summary: We propose InstructVideo to instruct text-to-video diffusion models with human feedback by reward fine-tuning.
InstructVideo has two key ingredients: 1) To ameliorate the cost of reward fine-tuning induced by generating through the full DDIM sampling chain, we recast reward fine-tuning as editing.
- Score: 65.9590462317474
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have emerged as the de facto paradigm for video generation.
However, their reliance on web-scale data of varied quality often yields
results that are visually unappealing and misaligned with the textual prompts.
To tackle this problem, we propose InstructVideo to instruct text-to-video
diffusion models with human feedback by reward fine-tuning. InstructVideo has
two key ingredients: 1) To ameliorate the cost of reward fine-tuning induced by
generating through the full DDIM sampling chain, we recast reward fine-tuning
as editing. By leveraging the diffusion process to corrupt a sampled video,
InstructVideo requires only partial inference of the DDIM sampling chain,
reducing fine-tuning cost while improving fine-tuning efficiency. 2) To
mitigate the absence of a dedicated video reward model for human preferences,
we repurpose established image reward models, e.g., HPSv2. To this end, we
propose Segmental Video Reward, a mechanism to provide reward signals based on
segmental sparse sampling, and Temporally Attenuated Reward, a method that
mitigates temporal modeling degradation during fine-tuning. Extensive
experiments, both qualitative and quantitative, validate the practicality and
efficacy of using image reward models in InstructVideo, significantly enhancing
the visual quality of generated videos without compromising generalization
capabilities. Code and models will be made publicly available.
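The abstract describes the two ingredients only at a high level. As a rough illustration, here is a minimal PyTorch-style sketch of how reward fine-tuning as editing, Segmental Video Reward, and Temporally Attenuated Reward could fit together; the `diffusion.q_sample` / `ddim_step` / `decode` helpers, the HPSv2-like `image_reward_model` callable, the frame-sampling scheme, and the attenuation schedule are all assumptions made for illustration, not the paper's released implementation.

```python
import torch

def temporally_attenuated_weights(num_frames: int, decay: float = 0.5) -> torch.Tensor:
    # Assumed form of Temporally Attenuated Reward: frames farther from the
    # central frame contribute less to the reward (exact schedule is a guess).
    center = (num_frames - 1) / 2.0
    dist = torch.abs(torch.arange(num_frames, dtype=torch.float32) - center)
    weights = torch.exp(-decay * dist)
    return weights / weights.sum()

def segmental_video_reward(video, prompt, image_reward_model, num_segments: int = 4):
    # Segmental sparse sampling: score only a few evenly spaced frames with an
    # image reward model (HPSv2-like interface assumed: (frames, prompt) -> (B,)).
    num_frames = video.shape[1]                      # video: (B, T, C, H, W)
    idx = torch.linspace(0, num_frames - 1, num_segments).long()
    weights = temporally_attenuated_weights(num_segments).to(video.device)
    rewards = torch.stack(
        [image_reward_model(video[:, i], prompt) for i in idx.tolist()], dim=1
    )                                                # (B, num_segments)
    return (rewards * weights).sum(dim=1)            # (B,)

def reward_finetune_step(unet, sampled_latent, prompt_emb, prompt_text,
                         diffusion, image_reward_model, optimizer,
                         edit_strength: float = 0.6, ddim_steps: int = 10):
    # Reward fine-tuning recast as editing: corrupt an already-sampled video to
    # an intermediate noise level, denoise only the tail of the DDIM chain, and
    # maximize the reward of the decoded result.
    t_edit = int(edit_strength * diffusion.num_timesteps)
    latent = diffusion.q_sample(sampled_latent, t_edit)              # corrupt
    for t in reversed(torch.linspace(0, t_edit, ddim_steps).long().tolist()):
        latent = diffusion.ddim_step(unet, latent, t, prompt_emb)    # partial DDIM chain
    video = diffusion.decode(latent)                                 # (B, T, C, H, W)
    loss = -segmental_video_reward(video, prompt_text, image_reward_model).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.detach()
```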
Related papers
- Your Image is Secretly the Last Frame of a Pseudo Video [20.161039114393148]
We investigate the possibility of improving other types of generative models with pseudo videos.
Specifically, we first extend a given image generative model to its video generative model counterpart, and then train the video generative model on pseudo videos constructed by applying data augmentation to the original images.
arXiv Detail & Related papers (2024-10-26T12:15:25Z)
- VideoGuide: Improving Video Diffusion Models without Training Through a Teacher's Guide [48.22321420680046]
VideoGuide is a novel framework that enhances the temporal consistency of pretrained text-to-video (T2V) models.
It improves temporal quality by interpolating the guiding model's denoised samples into the sampling model's denoising process.
The proposed method brings about significant improvement in temporal consistency and image fidelity.
arXiv Detail & Related papers (2024-10-06T05:46:17Z)
- SF-V: Single Forward Video Generation Model [57.292575082410785]
We propose a novel approach to obtain single-step video generation models by leveraging adversarial training to fine-tune pre-trained models.
Experiments demonstrate that our method achieves competitive generation quality for synthesized videos with significantly reduced computational overhead.
arXiv Detail & Related papers (2024-06-06T17:58:27Z)
- Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition [124.41196697408627]
We propose content-motion latent diffusion model (CMD), a novel efficient extension of pretrained image diffusion models for video generation.
CMD encodes a video as a combination of a content frame (like an image) and a low-dimensional motion latent representation.
We generate the content frame by fine-tuning a pretrained image diffusion model, and we generate the motion latent representation by training a new lightweight diffusion model.
arXiv Detail & Related papers (2024-03-21T05:48:48Z)
- Fine-Tuning of Continuous-Time Diffusion Models as Entropy-Regularized Control [54.132297393662654]
Diffusion models excel at capturing complex data distributions, such as those of natural images and proteins.
While diffusion models are trained to represent the distribution of the training dataset, we are often more concerned with other properties, such as the aesthetic quality of the generated images.
We present theoretical and empirical evidence that demonstrates our framework is capable of efficiently generating diverse samples with high genuine rewards.
arXiv Detail & Related papers (2024-02-23T08:54:42Z)
- BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models [40.73982918337828]
We propose a training-free general-purpose video synthesis framework, coined BIVDiff, which bridges specific image diffusion models and general text-to-video foundation diffusion models.
Specifically, we first use a specific image diffusion model (e.g., ControlNet and Instruct Pix2Pix) for frame-wise video generation, then perform Mixed Inversion on the generated video, and finally input the inverted latents into the video diffusion models.
arXiv Detail & Related papers (2023-12-05T14:56:55Z)
- Conditional Modeling Based Automatic Video Summarization [70.96973928590958]
The aim of video summarization is to shorten videos automatically while retaining the key information necessary to convey the overall story.
Existing video summarization methods rely on visual factors, such as visual consecutiveness and diversity, which may not be sufficient to fully capture the content of a video.
A new approach to video summarization is proposed based on insights gained from how humans create ground truth video summaries.
arXiv Detail & Related papers (2023-11-20T20:24:45Z)
- Unsupervised Video Anomaly Detection with Diffusion Models Conditioned on Compact Motion Representations [17.816344808780965]
The unsupervised video anomaly detection (VAD) problem involves classifying each frame in a video as normal or abnormal, without any access to labels.
To accomplish this, the proposed method employs conditional diffusion models, where the input data consists of features extracted from a pre-trained network.
Our method utilizes a data-driven threshold and treats a high reconstruction error as an indicator of anomalous events (a minimal sketch of this thresholding idea follows the list).
arXiv Detail & Related papers (2023-07-04T07:36:48Z)
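The data-driven thresholding idea in the last entry above is simple enough to show directly. This is a minimal NumPy sketch under the assumption that the threshold is mean + k·std of the per-frame reconstruction errors, which is one plausible data-driven rule rather than that paper's exact procedure.

```python
import numpy as np

def anomaly_flags(recon_errors: np.ndarray, k: float = 2.0) -> np.ndarray:
    # recon_errors: per-frame reconstruction errors from the conditional
    # diffusion model. Frames whose error exceeds a threshold derived from
    # the error distribution itself (assumed here: mean + k * std) are
    # flagged as anomalous.
    threshold = recon_errors.mean() + k * recon_errors.std()
    return recon_errors > threshold
```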