Meta Transferring for Deblurring
- URL: http://arxiv.org/abs/2210.08036v1
- Date: Fri, 14 Oct 2022 18:06:33 GMT
- Title: Meta Transferring for Deblurring
- Authors: Po-Sheng Liu, Fu-Jen Tsai, Yan-Tsung Peng, Chung-Chi Tsai, Chia-Wen
Lin, Yen-Yu Lin
- Abstract summary: We propose a reblur-deblur meta-transferring scheme to realize test-time adaptation without using ground truth for dynamic scene deblurring.
We leverage the blurred input video to find and use relatively sharp patches as the pseudo ground truth.
Our reblur-deblur meta-learning scheme can improve state-of-the-art deblurring models on the DVD, REDS, and RealBlur benchmark datasets.
- Score: 43.86235102507237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most previous deblurring methods were built with a generic model trained on
blurred images and their sharp counterparts. However, these approaches might
have sub-optimal deblurring results due to the domain gap between the training
and test sets. This paper proposes a reblur-deblur meta-transferring scheme to
realize test-time adaptation without using ground truth for dynamic scene
deblurring. Since the ground truth is usually unavailable at inference time in
a real-world scenario, we leverage the blurred input video to find and use
relatively sharp patches as the pseudo ground truth. Furthermore, we propose a
reblurring model to extract the homogeneous blur from the blurred input and
transfer it to the pseudo-sharps to obtain the corresponding pseudo-blurred
patches for meta-learning and test-time adaptation with only a few gradient
updates. Extensive experimental results show that our reblur-deblur
meta-learning scheme can improve state-of-the-art deblurring models on the DVD,
REDS, and RealBlur benchmark datasets.
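To make the scheme concrete, here is a minimal PyTorch-style sketch of the inference-time loop, written as an illustration of the general idea rather than the authors' implementation: `deblur_net`, `reblur_net`, their call signatures, the Laplacian-variance patch selection, and the hyperparameters are all assumptions, and the meta-training stage that makes few-step adaptation effective is omitted.
```python
# Hedged sketch of reblur-deblur test-time adaptation (illustrative, not the paper's code).
import torch
import torch.nn.functional as F


def find_sharp_patches(frames, patch=128, top_k=8):
    """Illustrative stand-in for pseudo-ground-truth selection: crop
    non-overlapping patches from the blurred frames, score them by the
    variance of a Laplacian response, and keep the sharpest ones."""
    lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]],
                       device=frames.device).view(1, 1, 3, 3)
    crops, scores = [], []
    _, _, h, w = frames.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            crop = frames[:, :, y:y + patch, x:x + patch]      # (N, C, p, p)
            gray = crop.mean(dim=1, keepdim=True)
            crops.append(crop)
            scores.append(F.conv2d(gray, lap, padding=1).var(dim=(1, 2, 3)))
    crops, scores = torch.cat(crops), torch.cat(scores)
    return crops[scores.topk(top_k).indices]                   # (top_k, C, p, p)


def test_time_adapt(deblur_net, reblur_net, blurred_frames, num_steps=3, lr=1e-5):
    """Adapt a pretrained deblurring model to one blurred input video."""
    optimizer = torch.optim.Adam(deblur_net.parameters(), lr=lr)

    # 1. Relatively sharp patches from the blurred video act as pseudo ground truth.
    pseudo_sharp = find_sharp_patches(blurred_frames)

    for _ in range(num_steps):
        # 2. Assumed reblurring interface: transfer the blur of the input
        #    onto the pseudo-sharp patches to form pseudo-blurred counterparts.
        with torch.no_grad():
            pseudo_blurred = reblur_net(pseudo_sharp, blurred_frames)

        # 3. A few gradient updates on the self-generated pairs.
        loss = F.l1_loss(deblur_net(pseudo_blurred), pseudo_sharp)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # 4. Run the adapted model on the actual blurred frames.
    with torch.no_grad():
        return deblur_net(blurred_frames)
```
Because the deblurring model is meta-trained to adapt quickly, only a few gradient updates are assumed to be needed before the final pass over the input.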
Related papers
- Appearance Blur-driven AutoEncoder and Motion-guided Memory Module for Video Anomaly Detection [14.315287192621662]
Video anomaly detection (VAD) often learns the distribution of normal samples and detects the anomaly through measuring significant deviations.
Most VAD methods cannot cope with cross-dataset validation for new target domains.
We propose a novel VAD method with a motion-guided memory module to achieve cross-dataset validation in a zero-shot manner.
arXiv Detail & Related papers (2024-09-26T07:48:20Z) - Domain-adaptive Video Deblurring via Test-time Blurring [43.40607572991409]
We propose a domain adaptation scheme based on a blurring model to achieve test-time fine-tuning for deblurring models in unseen domains.
Since blurred and sharp pairs are unavailable for fine-tuning during inference, our scheme can generate domain-adaptive training pairs to calibrate a deblurring model for the target domain.
Our approach can significantly improve state-of-the-art video deblurring methods, providing performance gains of up to 7.54dB on various real-world video deblurring datasets.
arXiv Detail & Related papers (2024-07-12T07:28:01Z) - Towards Seamless Adaptation of Pre-trained Models for Visual Place Recognition [72.35438297011176]
We propose a novel method to realize seamless adaptation of pre-trained models for visual place recognition (VPR).
Specifically, to obtain both global and local features that focus on salient landmarks for discriminating places, we design a hybrid adaptation method.
Experimental results show that our method outperforms the state-of-the-art methods with less training data and training time.
arXiv Detail & Related papers (2024-02-22T12:55:01Z) - Rethinking Motion Deblurring Training: A Segmentation-Based Method for
Simulating Non-Uniform Motion Blurred Images [0.0]
We propose an efficient procedural methodology to generate sharp/blurred image pairs.
This allows generating virtually unlimited realistic and diverse training pairs.
We observed superior generalization performance for the ultimate task of deblurring real motion-blurred photos.
arXiv Detail & Related papers (2022-09-26T13:20:35Z) - Human and Scene Motion Deblurring using Pseudo-blur Synthesizer [17.36135319921425]
Present-day deep learning-based motion deblurring methods use pairs of synthetic blurred and sharp data to train any particular framework.
We provide an on-the-fly blurry data augmenter that can be run during training and test stages.
The proposed module is also equipped with a hand-crafted prior extracted using a state-of-the-art human body statistical model.
arXiv Detail & Related papers (2021-11-25T04:56:13Z) - Breaking Shortcut: Exploring Fully Convolutional Cycle-Consistency for
Video Correspondence Learning [78.43196840793489]
We present a fully convolutional method, which is simpler and more consistent with the inference process.
We study the underlying reason behind this collapse phenomenon, showing that the absolute positions of pixels provide a shortcut to easily accomplish cycle-consistency.
arXiv Detail & Related papers (2021-05-12T17:52:45Z) - A Hierarchical Transformation-Discriminating Generative Model for Few
Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z) - Domain-invariant Similarity Activation Map Contrastive Learning for
Retrieval-based Long-term Visual Localization [30.203072945001136]
In this work, a general architecture is first formulated probabilistically to extract domain-invariant features through multi-domain image translation.
And then a novel gradient-weighted similarity activation mapping loss (Grad-SAM) is incorporated for finer localization with high accuracy.
Extensive experiments have been conducted to validate the effectiveness of the proposed approach on the CMUSeasons dataset.
Our method performs on par with or even outperforms the state-of-the-art image-based localization baselines at medium or high precision.
arXiv Detail & Related papers (2020-09-16T14:43:22Z) - Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method which combines two GAN models, i.e., a learning-to-blur GAN (BGAN) and a learning-to-deblur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z) - Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is indeed feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
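The last entry's self-supervised idea can be illustrated with a reblur-consistency objective: re-blurring the predicted sharp image with a differentiable blur model should reproduce the observed blurry input, so no sharp ground truth is needed. The sketch below is a loose illustration under stated assumptions (a `deblur_net` that predicts both a sharp image and a per-pixel linear motion field, and a warping-based reblur operator), not the paper's exact formulation.
```python
# Hedged sketch of a self-supervised reblur-consistency loss (illustrative only).
import torch
import torch.nn.functional as F


def linear_motion_reblur(sharp, flow, num_samples=9):
    """Approximate linear motion blur by averaging the sharp image warped
    along fractions of an assumed per-pixel motion field `flow` (B, 2, H, W),
    channel 0 = x displacement, channel 1 = y displacement, in pixels."""
    b, _, h, w = sharp.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=sharp.device),
                            torch.arange(w, device=sharp.device), indexing="ij")
    base = torch.stack((xs, ys)).float()                       # (2, H, W), x first
    acc = torch.zeros_like(sharp)
    for t in torch.linspace(-0.5, 0.5, num_samples):
        coords = base.unsqueeze(0) + t * flow                  # (B, 2, H, W)
        gx = 2.0 * coords[:, 0] / (w - 1) - 1.0                # normalize to [-1, 1]
        gy = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)                   # (B, H, W, 2)
        acc = acc + F.grid_sample(sharp, grid, align_corners=True)
    return acc / num_samples


def reblur_consistency_loss(deblur_net, blurred):
    """Self-supervised objective: re-blurring the prediction should match the input."""
    sharp_pred, flow_pred = deblur_net(blurred)   # assumed two-headed network
    return F.l1_loss(linear_motion_reblur(sharp_pred, flow_pred), blurred)
```
Minimizing this loss requires only the blurry input itself, which is what makes self-supervised single-image deblurring feasible in principle.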
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.