HaineiFRDM: Explore Diffusion to Restore Defects in Fast-Movement Films
- URL: http://arxiv.org/abs/2512.24946v1
- Date: Wed, 31 Dec 2025 16:18:07 GMT
- Title: HaineiFRDM: Explore Diffusion to Restore Defects in Fast-Movement Films
- Authors: Rongji Xun, Junjie Yuan, Zhongjie Wang
- Abstract summary: Existing open-source film restoration methods show limited performance compared to commercial methods. We propose HaineiFRDM, a film restoration framework, to explore the diffusion model's powerful content-understanding ability.
- Score: 2.3374825727995328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing open-source film restoration methods show limited performance compared to commercial methods, because they are trained with low-quality synthetic data and rely on noisy optical flows. In addition, high-resolution films have not been explored by the open-source methods. We propose HaineiFRDM (Film Restoration Diffusion Model), a film restoration framework that exploits the diffusion model's powerful content-understanding ability to help human experts better restore indistinguishable film defects. Specifically, we employ a patch-wise training and testing strategy to make it possible to restore high-resolution films on a single 24 GB-VRAM GPU, and design a position-aware Global Prompt and Frame Fusion Modules. We also introduce a global-local frequency module to reconstruct consistent textures across different patches. In addition, we first restore a low-resolution result and use it as a global residual to mitigate the blocky artifacts caused by the patching process. Furthermore, we construct a film restoration dataset that contains restored real-degraded films and realistic synthetic data. Comprehensive experimental results demonstrate the superiority of our model in defect restoration over existing open-source methods. Code and the dataset will be released.
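The abstract's patch-wise strategy with a low-resolution global residual can be sketched as follows. This is a minimal illustration, not the paper's implementation: `restore_patch` and `restore_lowres` stand in for hypothetical model calls, the nearest-neighbour upsampling and the 0.5/0.5 blend weights are assumptions, and overlapping tiles are simply averaged.

```python
import numpy as np

def tile_starts(size, patch, step):
    """Start offsets covering [0, size), clamping the last tile to the edge."""
    starts = list(range(0, max(size - patch, 0) + 1, step))
    if starts[-1] + patch < size:
        starts.append(size - patch)
    return starts

def restore_patchwise(frame, restore_patch, restore_lowres,
                      patch=512, overlap=64, scale=4):
    """Patch-wise restoration blended with a low-resolution global residual.

    `restore_patch` and `restore_lowres` are placeholder callables for the
    (unreleased) restoration models; `frame` is an (H, W, C) array.
    """
    h, w, _ = frame.shape
    # Restore a downscaled copy of the whole frame, then upsample it
    # (nearest neighbour via np.kron) to serve as a global residual.
    low = restore_lowres(frame[::scale, ::scale])
    residual = np.kron(low, np.ones((scale, scale, 1)))[:h, :w]

    out = np.zeros_like(frame, dtype=np.float64)
    weight = np.zeros((h, w, 1))
    step = patch - overlap
    for y in tile_starts(h, patch, step):
        for x in tile_starts(w, patch, step):
            out[y:y + patch, x:x + patch] += restore_patch(frame[y:y + patch, x:x + patch])
            weight[y:y + patch, x:x + patch] += 1.0
    out /= np.maximum(weight, 1.0)
    # Averaging overlapping tiles and blending in the low-resolution result
    # suppresses blocky seams between independently restored patches.
    return 0.5 * out + 0.5 * residual
```

With identity restorers the function returns the input unchanged, which makes the tiling and blending easy to sanity-check before plugging in real models.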
Related papers
- MoA-VR: A Mixture-of-Agents System Towards All-in-One Video Restoration [62.929029990341796]
Real-world videos often suffer from complex degradations, such as noise, compression artifacts, and low-light distortions. We propose MoA-VR, which mimics the reasoning and processing procedures of human professionals through three coordinated agents. Specifically, we construct a large-scale and high-resolution video degradation recognition benchmark and build a vision-language model (VLM) driven degradation identifier.
arXiv Detail & Related papers (2025-10-09T17:42:51Z)
- Temporal-Consistent Video Restoration with Pre-trained Diffusion Models [51.47188802535954]
Video restoration (VR) aims to recover high-quality videos from degraded ones. Recent zero-shot VR methods using pre-trained diffusion models (DMs) suffer from approximation errors during reverse diffusion and insufficient temporal consistency. We present a novel Maximum a Posteriori (MAP) framework that directly parameterizes video frames in the seed space of DMs, eliminating approximation errors.
arXiv Detail & Related papers (2025-03-19T03:41:56Z)
- TDM: Temporally-Consistent Diffusion Model for All-in-One Real-World Video Restoration [13.49297560533422]
Our method can restore various types of video degradation with a single unified model. Our method advances the video restoration task by providing a unified solution that enhances video quality across multiple applications.
arXiv Detail & Related papers (2025-01-04T12:15:37Z)
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize the pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- DiffIR2VR-Zero: Zero-Shot Video Restoration with Diffusion-based Image Restoration Models [9.145545884814327]
We present DiffIR2VR-Zero, a zero-shot framework that enables any pre-trained image restoration model to perform high-quality video restoration without additional training. Our framework works with any image restoration diffusion model, providing a versatile solution for video enhancement without task-specific training or modifications.
arXiv Detail & Related papers (2024-07-01T17:59:12Z)
- Cross-Consistent Deep Unfolding Network for Adaptive All-In-One Video Restoration [78.14941737723501]
We propose a Cross-consistent Deep Unfolding Network (CDUN) for All-In-One VR.
By orchestrating two cascading procedures, CDUN achieves adaptive processing for diverse degradations.
In addition, we introduce a window-based inter-frame fusion strategy to utilize information from more adjacent frames.
arXiv Detail & Related papers (2023-09-04T14:18:00Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation to cover real-world cases in the training data.
We propose Robust Degradation Remover (DR2) to first transform the degraded image to a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- Restoration of User Videos Shared on Social Media [27.16457737969977]
User videos shared on social media platforms usually suffer from degradations caused by unknown proprietary processing procedures.
This paper presents a new general video restoration framework for the restoration of user videos shared on social media platforms.
In contrast to most deep learning-based video restoration methods that perform end-to-end mapping, our new method, Video restOration through adapTive dEgradation Sensing (VOTES), introduces the concept of a degradation feature map (DFM) to explicitly guide the video restoration process.
arXiv Detail & Related papers (2022-08-18T02:28:43Z)
- Bringing Old Films Back to Life [33.78936333249432]
We present a learning-based framework, recurrent transformer network (RTN), to restore heavily degraded old films.
Our method is based on the hidden knowledge learned from adjacent frames that contain abundant information about the occlusion.
arXiv Detail & Related papers (2022-03-31T17:59:59Z)
- Investigating Tradeoffs in Real-World Video Super-Resolution [90.81396836308085]
Real-world video super-resolution (VSR) models are often trained with diverse degradations to improve generalizability.
To alleviate the first tradeoff, we propose a degradation scheme that reduces up to 40% of training time without sacrificing performance.
To facilitate fair comparisons, we propose the new VideoLQ dataset, which contains a large variety of real-world low-quality video sequences.
arXiv Detail & Related papers (2021-11-24T18:58:21Z)
- Attention Based Real Image Restoration [48.933507352496726]
Deep convolutional neural networks perform better on images containing synthetic degradations.
This paper proposes a novel single-stage blind real image restoration network (R$2$Net).
arXiv Detail & Related papers (2020-04-26T04:21:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.