Domain-adaptive Video Deblurring via Test-time Blurring
- URL: http://arxiv.org/abs/2407.09059v1
- Date: Fri, 12 Jul 2024 07:28:01 GMT
- Title: Domain-adaptive Video Deblurring via Test-time Blurring
- Authors: Jin-Ting He, Fu-Jen Tsai, Jia-Hao Wu, Yan-Tsung Peng, Chung-Chi Tsai, Chia-Wen Lin, Yen-Yu Lin,
- Abstract summary: We propose a domain adaptation scheme based on a blurring model to achieve test-time fine-tuning for deblurring models in unseen domains.
Since blurred and sharp pairs are unavailable for fine-tuning during inference, our scheme can generate domain-adaptive training pairs to calibrate a deblurring model for the target domain.
Our approach can significantly improve state-of-the-art video deblurring methods, providing performance gains of up to 7.54dB on various real-world video deblurring datasets.
- Score: 43.40607572991409
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic scene video deblurring aims to remove undesirable blurry artifacts captured during the exposure process. Although previous video deblurring methods have achieved impressive results, they suffer from significant performance drops due to the domain gap between training and testing videos, especially for those captured in real-world scenarios. To address this issue, we propose a domain adaptation scheme based on a blurring model to achieve test-time fine-tuning for deblurring models in unseen domains. Since blurred and sharp pairs are unavailable for fine-tuning during inference, our scheme can generate domain-adaptive training pairs to calibrate a deblurring model for the target domain. First, a Relative Sharpness Detection Module is proposed to identify relatively sharp regions from the blurry input images and regard them as pseudo-sharp images. Next, we utilize a blurring model to produce blurred images based on the pseudo-sharp images extracted during testing. To synthesize blurred images in compliance with the target data distribution, we propose a Domain-adaptive Blur Condition Generation Module to create domain-specific blur conditions for the blurring model. Finally, the generated pseudo-sharp and blurred pairs are used to fine-tune a deblurring model for better performance. Extensive experimental results demonstrate that our approach can significantly improve state-of-the-art video deblurring methods, providing performance gains of up to 7.54dB on various real-world video deblurring datasets. The source code is available at https://github.com/Jin-Ting-He/DADeblur.
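The pipeline described above lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of the test-time loop: score patches of the blurry input by sharpness, keep the sharpest as pseudo-sharp images, reblur them under domain-specific blur conditions, and fine-tune the deblurring network on the resulting pairs. All module names (`deblur_net`, `blur_net`, `cond_net`), the Laplacian-variance sharpness proxy, and every hyperparameter are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def laplacian_sharpness(patches: torch.Tensor) -> torch.Tensor:
    """Variance of the Laplacian response: a crude proxy for relative sharpness."""
    kernel = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                          device=patches.device).view(1, 1, 3, 3)
    gray = patches.mean(dim=1, keepdim=True)            # (M, 1, p, p)
    return F.conv2d(gray, kernel, padding=1).var(dim=(1, 2, 3))

def extract_patches(frames: torch.Tensor, p: int) -> torch.Tensor:
    """Split (N, C, H, W) frames into non-overlapping (M, C, p, p) patches."""
    n, c, h, w = frames.shape
    frames = frames[:, :, :h - h % p, :w - w % p]
    patches = frames.unfold(2, p, p).unfold(3, p, p)    # (N, C, H//p, W//p, p, p)
    return patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c, p, p)

def adapt_at_test_time(deblur_net, blur_net, cond_net, blurry_frames,
                       p=128, top_k=32, steps=50, lr=1e-5):
    # 1) Relative sharpness detection: sharpest patches become pseudo-sharp images.
    patches = extract_patches(blurry_frames, p)
    pseudo_sharp = patches[laplacian_sharpness(patches).topk(top_k).indices]

    # 2) Blur conditions; in the paper these are generated to match the
    #    target domain's blur distribution.
    with torch.no_grad():
        conditions = cond_net(pseudo_sharp)

    # 3) Reblur the pseudo-sharp patches and fine-tune on the synthetic pairs.
    opt = torch.optim.Adam(deblur_net.parameters(), lr=lr)
    for _ in range(steps):
        with torch.no_grad():
            pseudo_blur = blur_net(pseudo_sharp, conditions)
        loss = F.l1_loss(deblur_net(pseudo_blur), pseudo_sharp)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return deblur_net
```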
Related papers
- DaBiT: Depth and Blur informed Transformer for Joint Refocusing and Super-Resolution [4.332534893042983]
In many real-world scenarios, recorded videos suffer from accidental focus blur.
This paper introduces a framework optimised for focal deblurring (refocusing) and video super-resolution (VSR).
We achieve state-of-the-art results, with an average PSNR over 1.9dB higher than comparable existing video restoration methods.
arXiv Detail & Related papers (2024-07-01T12:22:16Z)
- Real-World Efficient Blind Motion Deblurring via Blur Pixel Discretization [45.20189929583484]
We decompose the deblurring (regression) task into blur pixel discretization and discrete-to-continuous conversion tasks.
Specifically, we generate the discretized image residual errors by identifying the blur pixels and then transform them to a continuous form.
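As a rough illustration of that decomposition (the bin count and value range below are invented for illustration, not the paper's settings), the residual between the blurry image and a sharp estimate can be quantized into discrete "blur pixels" and later mapped back to continuous values:

```python
import torch

def discretize_residual(blurry, sharp_est, n_bins=16, vmin=-0.5, vmax=0.5):
    """Quantize the blur residual into n_bins discrete levels ("blur pixels")."""
    residual = (blurry - sharp_est).clamp(vmin, vmax)
    return ((residual - vmin) / (vmax - vmin) * (n_bins - 1)).round().long()

def to_continuous(indices, n_bins=16, vmin=-0.5, vmax=0.5):
    """Discrete-to-continuous conversion; the paper learns this step,
    nearest bin centers are shown here only for intuition."""
    return vmin + indices.float() / (n_bins - 1) * (vmax - vmin)
```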
arXiv Detail & Related papers (2024-04-18T13:22:56Z)
- ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation [45.582704677784825]
Implicit Diffusion-based reBLurring AUgmentation (ID-Blau) is proposed to generate diverse blurred images by simulating motion trajectories in a continuous space.
By sampling diverse blur conditions, ID-Blau can generate various blurred images unseen in the training set.
Results demonstrate that ID-Blau can produce realistic blurred images for training and thus significantly improve performance for state-of-the-art deblurring models.
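In rough pseudocode (the three-channel orientation/magnitude encoding and the `reblur_model` interface are guesses for illustration, not ID-Blau's actual design), sampling a continuous blur condition might look like:

```python
import math
import torch

def sample_blur_condition(h, w, max_mag=1.0):
    """Draw a pixel-wise blur condition (orientation + magnitude) in a continuous space."""
    theta = torch.rand(1, 1, h, w) * math.pi
    mag = torch.rand(1, 1, h, w) * max_mag
    return torch.cat([mag * torch.cos(theta), mag * torch.sin(theta), mag], dim=1)

def augment_pair(sharp, reblur_model):
    """Generate a new (sharp, blurred) training pair from a sampled condition."""
    cond = sample_blur_condition(*sharp.shape[-2:])
    return sharp, reblur_model(sharp, cond)
```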
arXiv Detail & Related papers (2023-12-18T07:47:43Z)
- ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task: joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
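A generic supervised contrastive loss of the kind this builds on is sketched below, with clips grouped by a hypothetical exposure-time label; the temperature and batching are illustrative assumptions, not the paper's exact variant:

```python
import torch
import torch.nn.functional as F

def supcon_loss(embeddings, exposure_ids, temperature=0.1):
    """Pull together clips sharing an exposure label; push the rest apart."""
    z = F.normalize(embeddings, dim=1)                  # (N, D)
    sim = z @ z.t() / temperature                       # (N, N)
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))     # drop self-similarity
    pos = (exposure_ids.unsqueeze(0) == exposure_ids.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)     # avoid -inf * 0
    per_sample = -(log_prob * pos.float()).sum(1) / pos.sum(1).clamp(min=1)
    return per_sample.mean()
```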
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Meta Transferring for Deblurring [43.86235102507237]
We propose a reblur-deblur meta-transferring scheme to realize test-time adaptation without using ground truth for dynamic scene deblurring.
We leverage the blurred input video to find and use relatively sharp patches as the pseudo ground truth.
Our reblur-deblur meta-learning scheme can improve state-of-the-art deblurring models on the DVD, REDS, and RealBlur benchmark datasets.
arXiv Detail & Related papers (2022-10-14T18:06:33Z)
- ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blurs from camera shakes and object motions.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z)
- Single Image Non-uniform Blur Kernel Estimation via Adaptive Basis Decomposition [1.854931308524932]
We propose a general, non-parametric model for dense non-uniform motion blur estimation.
We show that our method overcomes the limitations of existing non-uniform motion blur estimation methods.
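The basis idea can be sketched as follows, shown here from the forward (blurring) side for intuition; in the paper both the basis kernels and the per-pixel mixing coefficients would be network predictions, whereas here they are plain inputs:

```python
import torch
import torch.nn.functional as F

def apply_nonuniform_blur(image, bases, coeffs):
    """Per-pixel blur as a mixture of K shared basis kernels.

    image:  (B, C, H, W)
    bases:  (K, k, k) shared basis kernels
    coeffs: (B, K, H, W) per-pixel mixing weights (e.g. softmaxed over K)
    """
    c, k = image.shape[1], bases.shape[-1]
    out = torch.zeros_like(image)
    for i in range(bases.shape[0]):
        kernel = bases[i].expand(c, 1, k, k)            # depthwise kernel
        filtered = F.conv2d(image, kernel, padding=k // 2, groups=c)
        out = out + coeffs[:, i:i + 1] * filtered
    return out
```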
arXiv Detail & Related papers (2021-02-01T18:02:31Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
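A toy version of the aggregation step is shown below; the offset/weight predictor is omitted, and the shapes and grid_sample usage are assumptions about one plausible realization:

```python
import torch
import torch.nn.functional as F

def aggregate_pixels(frame, offsets, weights):
    """Weighted average of K sampled neighbors per pixel.

    frame:   (B, C, H, W)
    offsets: (B, K, 2, H, W) predicted offsets in normalized [-1, 1] coords
    weights: (B, K, H, W) predicted weights, summing to 1 over K
    """
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)  # identity grid
    out = torch.zeros_like(frame)
    for k in range(offsets.shape[1]):
        grid = base + offsets[:, k].permute(0, 2, 3, 1)      # (B, H, W, 2)
        out = out + weights[:, k:k + 1] * F.grid_sample(
            frame, grid, align_corners=True)
    return out
```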
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method that combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and a learning-to-DeBlur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
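A condensed training step in this two-GAN spirit might look like the sketch below; the losses, the noise input, and the network interfaces are simplified placeholders rather than the paper's exact design:

```python
import torch
import torch.nn.functional as F

def train_step(bgan_g, bgan_d, dbgan_g, sharp, opt_bgan, opt_dbgan):
    """One step: BGAN learns realistic blurring; DBGAN learns to undo it."""
    # 1) BGAN generator: blur sharp images so the discriminator (trained
    #    separately on unpaired real blurry images) believes they are real.
    noise = torch.randn_like(sharp[:, :1])
    fake_blur = bgan_g(sharp, noise)
    pred = bgan_d(fake_blur)
    loss_bgan = F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))
    opt_bgan.zero_grad()
    loss_bgan.backward()
    opt_bgan.step()

    # 2) DBGAN: deblur the realistic fake blur back toward the sharp source.
    loss_dbgan = F.l1_loss(dbgan_g(fake_blur.detach()), sharp)
    opt_dbgan.zero_grad()
    loss_dbgan.backward()
    opt_dbgan.step()
    return loss_bgan.item(), loss_dbgan.item()
```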
arXiv Detail & Related papers (2020-04-04T05:25:15Z)