Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers
- URL: http://arxiv.org/abs/2409.11256v1
- Date: Tue, 17 Sep 2024 15:05:33 GMT
- Title: Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers
- Authors: Zixuan Fu, Lanqing Guo, Chong Wang, Yufei Wang, Zhihao Li, Bihan Wen
- Abstract summary: In this paper, we propose a novel unsupervised video denoising framework, named ``Temporal As a Plugin'' (TAP).
By incorporating temporal modules, our method can harness temporal information across noisy frames, complementing its power of spatial denoising.
Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
- Score: 30.965705043127144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in deep learning have shown impressive results in image and video denoising, leveraging extensive pairs of noisy and noise-free data for supervision. However, the challenge of acquiring paired videos for dynamic scenes hampers the practical deployment of deep video denoising techniques. In contrast, this obstacle is less pronounced in image denoising, where paired data is more readily available. Thus, a well-trained image denoiser could serve as a reliable spatial prior for video denoising. In this paper, we propose a novel unsupervised video denoising framework, named ``Temporal As a Plugin'' (TAP), which integrates tunable temporal modules into a pre-trained image denoiser. By incorporating temporal modules, our method can harness temporal information across noisy frames, complementing its power of spatial denoising. Furthermore, we introduce a progressive fine-tuning strategy that refines each temporal module using the generated pseudo clean video frames, progressively enhancing the network's denoising performance. Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
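The plugin idea in the abstract, a spatial prior from an image denoiser with temporal modules layered on top, can be illustrated with a toy NumPy sketch. Here a 3x3 box filter stands in for the pre-trained image denoiser and plain neighbor-frame averaging stands in for the learned temporal modules; both are illustrative assumptions, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_denoise(frame):
    """Stand-in for a pre-trained image denoiser: a 3x3 box filter."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + frame.shape[0],
                          1 + dx:1 + dx + frame.shape[1]]
    return out / 9.0

def temporal_plugin(frames):
    """Stand-in temporal module: average each spatially denoised frame
    with its neighbours (valid only for this static toy scene)."""
    den = [spatial_denoise(f) for f in frames]
    out = []
    for t in range(len(den)):
        lo, hi = max(0, t - 1), min(len(den), t + 2)
        out.append(np.mean(den[lo:hi], axis=0))
    return out

# Static, smooth ground-truth scene observed under i.i.d. Gaussian noise.
ys, xs = np.mgrid[0:32, 0:32] / 32.0
clean = 0.5 + 0.25 * np.sin(2 * np.pi * xs) * np.cos(2 * np.pi * ys)
noisy = [clean + 0.1 * rng.standard_normal(clean.shape) for _ in range(5)]

mse = lambda a, b: float(np.mean((a - b) ** 2))
spatial_only = [spatial_denoise(f) for f in noisy]
with_temporal = temporal_plugin(noisy)

err_noisy = np.mean([mse(f, clean) for f in noisy])
err_spatial = np.mean([mse(f, clean) for f in spatial_only])
err_temporal = np.mean([mse(f, clean) for f in with_temporal])
```

On this toy static scene the temporal averaging strictly reduces the residual noise variance left by the spatial stand-in, mirroring the claim that temporal information complements spatial denoising.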
Related papers
- Unsupervised Coordinate-Based Video Denoising [2.867801048665443]
We introduce a novel unsupervised video denoising deep learning approach that can help to mitigate data scarcity issues.
Our method comprises three modules: a feature generator creating feature maps, a Denoise-Net generating denoised but slightly blurry reference frames, and a Refine-Net re-introducing high-frequency details.
arXiv Detail & Related papers (2023-07-01T00:11:40Z) - Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for estimating the noise level in low light images in a quick and accurate way.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z) - Low Latency Video Denoising for Online Conferencing Using CNN Architectures [4.7805617044617446]
We propose a pipeline for real-time video denoising with low runtime cost and high perceptual quality.
A custom noise detector analyzer provides real-time feedback to adapt the weights and improve the models' output.
arXiv Detail & Related papers (2023-02-17T00:55:54Z) - Learning Task-Oriented Flows to Mutually Guide Feature Alignment in Synthesized and Real Video Denoising [137.5080784570804]
Video denoising aims at removing noise from videos to recover clean ones.
Some existing works show that optical flow can help the denoising by exploiting the additional spatial-temporal clues from nearby frames.
We propose a new multi-scale refined optical flow-guided video denoising method, which is more robust to different noise levels.
arXiv Detail & Related papers (2022-08-25T00:09:18Z) - IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z) - Synergy Between Semantic Segmentation and Image Denoising via Alternate Boosting [102.19116213923614]
We propose a boosting network to perform denoising and segmentation alternately.
We observe that not only does denoising help combat the drop in segmentation accuracy caused by noise, but pixel-wise semantic information also boosts the capability of denoising.
Experimental results show that the denoised image quality is improved substantially and the segmentation accuracy is improved to a level close to that of clean images.
arXiv Detail & Related papers (2021-02-24T06:48:45Z) - Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z) - Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
Specifically, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
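Neighbor2Neighbor's pair-generation step can be sketched in NumPy: each 2x2 cell of the noisy image contributes one pixel to the input and a different, neighboring pixel to the target. This is a simplified version of the paper's random neighbor sub-sampler; the proposed regularized loss and the network itself are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def neighbor_subsample(noisy, rng):
    """Split one noisy image into two half-resolution images whose
    corresponding pixels are distinct spatial neighbours (one 2x2 cell each)."""
    h, w = noisy.shape
    h2, w2 = h // 2, w // 2
    # Reshape into (h2, w2, 4): the four pixels of every 2x2 cell.
    cells = (noisy[:h2 * 2, :w2 * 2]
             .reshape(h2, 2, w2, 2)
             .transpose(0, 2, 1, 3)
             .reshape(h2, w2, 4))
    # Pick two *different* positions per cell: idx_b is idx_a shifted by 1..3.
    idx_a = rng.integers(0, 4, size=(h2, w2))
    idx_b = (idx_a + rng.integers(1, 4, size=(h2, w2))) % 4
    rows = np.arange(h2)[:, None]
    cols = np.arange(w2)[None, :]
    return cells[rows, cols, idx_a], cells[rows, cols, idx_b]

noisy = rng.standard_normal((64, 64))
inp, tgt = neighbor_subsample(noisy, rng)
```

A denoising network would then be trained with `inp` as input and `tgt` as target; because the two sub-images carry independent noise realizations of near-identical content, the noisy target acts like a clean one in expectation.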
arXiv Detail & Related papers (2021-01-08T02:03:25Z) - Restore from Restored: Video Restoration with Pseudo Clean Video [28.057705167363327]
We propose a self-supervised video denoising method called "restore-from-restored".
This method fine-tunes a pre-trained network by using a pseudo clean video during the test phase.
We analyze the restoration performance of the fine-tuned video denoising networks with the proposed self-supervision-based learning algorithm.
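The test-time scheme can be sketched with a deliberately tiny stand-in: a one-parameter blend of a frame with its blur plays the role of the pre-trained denoiser, a temporal average of its outputs plays the role of the pseudo clean video, and gradient descent fine-tunes the parameter against those pseudo targets. The "network" and all names here are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def box3(frame):
    """3x3 box blur used inside the toy denoiser."""
    padded = np.pad(frame, 1, mode="edge")
    out = np.zeros_like(frame)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + frame.shape[0],
                          1 + dx:1 + dx + frame.shape[1]]
    return out / 9.0

def denoise(frame, w):
    """Toy one-parameter 'network': blend the frame with its blur."""
    return (1.0 - w) * frame + w * box3(frame)

# Noisy test video of a static, smooth scene.
ys, xs = np.mgrid[0:32, 0:32] / 32.0
clean = 0.5 + 0.25 * np.sin(2 * np.pi * xs)
frames = [clean + 0.1 * rng.standard_normal(clean.shape) for _ in range(5)]

# Step 1: build a pseudo clean target from the pre-trained parameter.
w = 0.5
pseudo = np.mean([denoise(f, w) for f in frames], axis=0)

# Step 2: fine-tune w at test time against the pseudo clean target.
loss = lambda w: float(np.mean([(denoise(f, w) - pseudo) ** 2 for f in frames]))
loss_before = loss(w)
for _ in range(50):
    # d/dw denoise(f, w) = box3(f) - f, so the loss gradient is:
    grad = np.mean([2 * (denoise(f, w) - pseudo) * (box3(f) - f) for f in frames])
    w -= 5.0 * grad
loss_after = loss(w)
```

The fitting loss against the pseudo clean target decreases during fine-tuning; in the actual method the same idea is applied to a full video denoising network, exploiting self-similarity across frames at test time.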
arXiv Detail & Related papers (2020-03-09T17:37:28Z) - Restore from Restored: Single Image Denoising with Pseudo Clean Image [28.38369890008251]
We propose a simple and effective fine-tuning algorithm called "restore-from-restored".
Our method can be easily employed on top of the state-of-the-art denoising networks.
arXiv Detail & Related papers (2020-03-09T17:35:31Z) - First image then video: A two-stage network for spatiotemporal video denoising [19.842488445174524]
Video denoising aims to remove noise from noise-corrupted data, thereby recovering the true motion signals.
Existing approaches for video denoising tend to suffer from blur artifacts; that is, the boundary of a moving object tends to appear blurry.
This paper introduces a first-image-then-video two-stage denoising neural network, consisting of an image denoising module and a regular intratemporal video denoising module.
It yields state-of-the-art performances on the video denoising Vimeo90K dataset in terms of both denoising quality and computation.
arXiv Detail & Related papers (2020-01-02T07:21:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.