Restore from Restored: Video Restoration with Pseudo Clean Video
- URL: http://arxiv.org/abs/2003.04279v3
- Date: Mon, 15 Mar 2021 04:46:32 GMT
- Title: Restore from Restored: Video Restoration with Pseudo Clean Video
- Authors: Seunghwan Lee, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
- Abstract summary: We propose a self-supervised video denoising method called "restore-from-restored."
This method fine-tunes a pre-trained network by using a pseudo clean video during the test phase.
We analyze the restoration performance of the fine-tuned video denoising networks with the proposed self-supervision-based learning algorithm.
- Score: 28.057705167363327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we propose a self-supervised video denoising method called
"restore-from-restored." This method fine-tunes a pre-trained network by using
a pseudo clean video during the test phase. The pseudo clean video is obtained
by applying a noisy video to the baseline network. By adopting a fully
convolutional neural network (FCN) as the baseline, we can improve video
denoising performance without accurate optical flow estimation and registration
steps, in contrast to many conventional video restoration methods, due to the
translation equivariant property of the FCN. Specifically, the proposed method
can take advantage of plentiful similar patches existing across multiple
consecutive frames (i.e., patch-recurrence); these patches can boost the
performance of the baseline network by a large margin. We analyze the
restoration performance of the fine-tuned video denoising networks with the
proposed self-supervision-based learning algorithm, and demonstrate that the
FCN can utilize recurring patches without requiring accurate registration among
adjacent frames. In our experiments, we apply the proposed method to
state-of-the-art denoisers and show that our fine-tuned networks achieve a
considerable improvement in denoising performance.
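To make the procedure concrete, below is a minimal PyTorch sketch of the test-time fine-tuning loop described in the abstract. `net` is a placeholder for any pre-trained fully convolutional denoiser; the frame pairing and the L1 loss are illustrative choices, not necessarily the paper's exact recipe.

```python
# Minimal sketch of "restore-from-restored" test-time fine-tuning.
# `net` is an assumed pre-trained fully convolutional denoiser mapping
# (1, C, H, W) -> (1, C, H, W); loss and frame pairing are illustrative.
import torch
import torch.nn.functional as F

def finetune_restore_from_restored(net, noisy_video, steps=20, lr=1e-5):
    """noisy_video: (T, C, H, W) tensor of noisy frames."""
    # 1) Generate the pseudo clean video with the frozen baseline.
    with torch.no_grad():
        pseudo_clean = torch.stack(
            [net(f.unsqueeze(0)).squeeze(0) for f in noisy_video])
    # 2) Fine-tune on (noisy frame, pseudo clean frame) pairs; translation
    #    equivariance of the FCN lets recurring patches across frames act
    #    as extra supervision without explicit alignment.
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        t = torch.randint(len(noisy_video), (1,)).item()
        pred = net(noisy_video[t].unsqueeze(0))
        loss = F.l1_loss(pred, pseudo_clean[t].unsqueeze(0))
        opt.zero_grad()
        loss.backward()
        opt.step()
    # 3) Restore the video with the adapted network.
    with torch.no_grad():
        return torch.stack(
            [net(f.unsqueeze(0)).squeeze(0) for f in noisy_video])
```

No clean ground truth is used anywhere: the only supervision comes from the baseline's own outputs and the patch recurrence across frames.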
Related papers
- Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers [30.965705043127144]
In this paper, we propose a novel unsupervised video denoising framework, named 'Temporal As a Plugin' (TAP).
By incorporating temporal modules, our method can harness temporal information across noisy frames, complementing the spatial denoising power of the pre-trained image denoiser.
Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
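A hedged sketch of the plugin idea: keep a frozen pre-trained image denoiser and add a small trainable temporal module on top of its per-frame outputs. The module below is illustrative; the paper's actual temporal architecture may differ.

```python
# Illustrative "temporal as a plugin" wrapper: a frozen image denoiser
# handles spatial denoising, and a trainable 3D convolution fuses
# information across frames. Module layout is an assumption.
import torch
import torch.nn as nn

class TemporalPlugin(nn.Module):
    def __init__(self, image_denoiser, channels=3):
        super().__init__()
        self.image_denoiser = image_denoiser   # frozen spatial denoiser
        for p in self.image_denoiser.parameters():
            p.requires_grad = False
        # Trainable temporal fusion over the frame axis.
        self.temporal = nn.Conv3d(channels, channels,
                                  kernel_size=(3, 3, 3), padding=1)

    def forward(self, video):                  # video: (B, C, T, H, W)
        B, C, T, H, W = video.shape
        frames = video.permute(0, 2, 1, 3, 4).reshape(B * T, C, H, W)
        spatial = self.image_denoiser(frames)  # per-frame denoising
        spatial = spatial.reshape(B, T, C, H, W).permute(0, 2, 1, 3, 4)
        return spatial + self.temporal(spatial)  # residual temporal fusion
```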
arXiv Detail & Related papers (2024-09-17T15:05:33Z)
- Video Restoration with a Deep Plug-and-Play Prior [3.058685580689605]
This paper presents a novel method for restoring digital videos via a Deep Plug-and-Play (PnP) approach.
Under a Bayesian formalism, the method consists in using a deep convolutional denoising network in place of the proximal operator of the prior.
Our experiments in video deblurring, super-resolution, and interpolation of random missing pixels show a clear benefit to using a network specifically designed for video denoising.
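The plug-and-play scheme is easy to sketch: alternate a data-fidelity step with a prior step in which the proximal operator is replaced by the denoiser. Here `A`, `At` (the forward operator and its adjoint), and `denoiser` are hypothetical callables, and the step size is illustrative.

```python
# Plug-and-play restoration sketch: half-quadratic-splitting-style
# alternation where the prior's proximal step is replaced by a denoiser.
# `A`, `At`, and `denoiser` are hypothetical callables on numpy arrays.
import numpy as np

def pnp_restore(y, A, At, denoiser, iters=30, rho=1.0, step=0.1):
    """Restore x from observations y = A(x) + noise."""
    x = np.asarray(At(y))   # crude initialization from the adjoint
    v = x.copy()
    for _ in range(iters):
        # Data-fidelity step: gradient descent on
        # ||A(x) - y||^2 / 2 + rho * ||x - v||^2 / 2.
        x = x - step * (At(A(x) - y) + rho * (x - v))
        # Prior step: the denoiser stands in for the proximal operator.
        v = denoiser(x)
    return v
```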
arXiv Detail & Related papers (2022-09-06T23:31:20Z)
- Learning Task-Oriented Flows to Mutually Guide Feature Alignment in Synthesized and Real Video Denoising [137.5080784570804]
Video denoising aims at removing noise from videos to recover clean ones.
Some existing works show that optical flow can help the denoising by exploiting the additional spatial-temporal clues from nearby frames.
We propose a new multi-scale refined optical flow-guided video denoising method, which is more robust to different noise levels.
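Flow-guided methods typically warp neighboring frames toward the reference before denoising. A minimal PyTorch warping routine is sketched below, assuming the flow has already been estimated; the paper's multi-scale refinement and task-oriented flow learning are outside this sketch.

```python
# Backward warping of a neighboring frame with a precomputed optical flow,
# the alignment primitive that flow-guided denoisers rely on.
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """frame: (B, C, H, W); flow: (B, 2, H, W) in pixels, channels (x, y)."""
    B, _, H, W = frame.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(frame.device)  # (2, H, W)
    coords = grid.unsqueeze(0) + flow        # shifted sampling locations
    # Normalize coordinates to [-1, 1] as expected by grid_sample.
    coords[:, 0] = 2 * coords[:, 0] / (W - 1) - 1
    coords[:, 1] = 2 * coords[:, 1] / (H - 1) - 1
    return F.grid_sample(frame, coords.permute(0, 2, 3, 1),
                         align_corners=True)
```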
arXiv Detail & Related papers (2022-08-25T00:09:18Z)
- Real-time Streaming Video Denoising with Bidirectional Buffers [48.57108807146537]
Real-time denoising algorithms are typically adopted on the user device to remove the noise introduced during the shooting and transmission of video streams.
Recent multi-output inference works propagate the bidirectional temporal feature with a parallel or recurrent framework.
We propose a Bidirectional Streaming Video Denoising framework, to achieve high-fidelity real-time denoising for streaming videos with both past and future temporal receptive fields.
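The bidirectional-buffer idea can be sketched as a sliding window that delays output by a few frames, so the center frame sees both past and future context. `fuse` below is a placeholder for the actual denoising network; buffer sizes are illustrative.

```python
# Streaming denoising with a bidirectional buffer: output is delayed by
# `future` frames so each emitted frame had full past and future context.
from collections import deque

class StreamingDenoiser:
    def __init__(self, fuse, past=2, future=2):
        self.fuse = fuse                       # fuse(list_of_frames) -> frame
        self.buf = deque(maxlen=past + 1 + future)

    def push(self, frame):
        """Feed one incoming frame; returns a denoised frame once enough
        future context has arrived, else None (warm-up latency)."""
        self.buf.append(frame)
        if len(self.buf) == self.buf.maxlen:
            return self.fuse(list(self.buf))   # center frame is denoised
        return None
```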
arXiv Detail & Related papers (2022-07-14T14:01:03Z)
- Dynamic Slimmable Denoising Network [64.77565006158895]
Dynamic slimmable denoising network (DDS-Net) is a general method for achieving good denoising quality at lower computational complexity.
DDS-Net is empowered with the ability of dynamic inference by a dynamic gate.
Our experiments demonstrate that DDS-Net consistently outperforms state-of-the-art individually trained static denoising networks.
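A toy version of dynamic inference with a gate: a tiny gating head picks, per input, between a slim and a wide branch at inference time. Branch sizes and the routing policy are illustrative, not DDS-Net's; training such a gate would typically require a differentiable relaxation of the argmax.

```python
# Toy gated denoiser: a lightweight gate routes each sample to a cheap or
# an expensive branch at inference time (argmax routing, inference only).
import torch
import torch.nn as nn

class GatedDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(3, 2))        # 2 capacity choices
        self.slim = nn.Conv2d(3, 3, 3, padding=1)         # cheap branch
        self.wide = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(64, 3, 3, padding=1))

    def forward(self, x):                                 # x: (B, 3, H, W)
        choice = self.gate(x).argmax(dim=1)               # per-sample decision
        out = torch.empty_like(x)
        for i, c in enumerate(choice):                    # route each sample
            branch = self.slim if c == 0 else self.wide
            out[i] = branch(x[i:i + 1])[0]
        return out
```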
arXiv Detail & Related papers (2021-10-17T22:45:33Z)
- Diffusion-Based Representation Learning [65.55681678004038]
We augment the denoising score matching framework to enable representation learning without any supervised signal.
The introduced diffusion-based representation learning relies on a new formulation of the denoising score matching objective.
Using the same approach, we propose to learn an infinite-dimensional latent code that achieves improvements over state-of-the-art models on semi-supervised image classification.
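The denoising score matching objective at the core of this line of work fits in a few lines: perturb the input with Gaussian noise and regress the score of the perturbation kernel. The encoder and latent conditioning that turn it into a representation learner are omitted in this sketch.

```python
# Standard denoising score matching loss: the network is trained to
# predict the score of the Gaussian-perturbed data distribution.
import torch

def dsm_loss(score_net, x, sigma=0.1):
    noise = torch.randn_like(x) * sigma
    x_tilde = x + noise
    # Score of the perturbation kernel N(x, sigma^2 I): -(x_tilde - x)/sigma^2.
    target = -noise / (sigma ** 2)
    return ((score_net(x_tilde) - target) ** 2).mean()
```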
arXiv Detail & Related papers (2021-05-29T09:26:02Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
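A minimal single-frame version of learned pixel aggregation: a small head predicts per-pixel weights over a k x k neighborhood, and the output is the weighted average of those neighbors. The paper's spatio-temporal variant extends the sampling to nearby frames; layer sizes here are assumptions.

```python
# Learned pixel aggregation over a k x k neighborhood: predict softmax
# weights per pixel, then average the corresponding neighbors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAggregation(nn.Module):
    def __init__(self, channels=3, k=3):
        super().__init__()
        self.k = k
        self.weights = nn.Conv2d(channels, k * k, 3, padding=1)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        w = F.softmax(self.weights(x), dim=1)   # (B, k*k, H, W)
        patches = F.unfold(x, self.k, padding=self.k // 2)  # (B, C*k*k, H*W)
        patches = patches.view(B, C, self.k * self.k, H, W)
        return (patches * w.unsqueeze(1)).sum(dim=2)        # weighted average
```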
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- Unsupervised Deep Video Denoising [26.864842892466136]
Deep convolutional neural networks (CNNs) currently achieve state-of-the-art performance in denoising videos.
We build on recent advances in unsupervised still image denoising to develop an Unsupervised Deep Video Denoiser (UDVD).
UDVD is shown to perform competitively with current state-of-the-art supervised methods on benchmark datasets.
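Unsupervised denoisers in this family build on the blind-spot principle: the loss at a pixel never sees that pixel's own noisy value, so the identity mapping is not a valid solution. The masking-based sketch below illustrates the principle; UDVD itself enforces the blind spot architecturally rather than by masking.

```python
# Masking-style blind-spot loss: predictions are penalized only at pixels
# whose original noisy values were hidden from the network.
import torch

def masked_self_supervised_loss(net, noisy, mask_frac=0.05):
    mask = (torch.rand_like(noisy[:, :1]) < mask_frac).float()
    # Replace masked pixels with shifted neighbors (a simple corruption).
    corrupted = noisy * (1 - mask) + torch.roll(noisy, 1, dims=-1) * mask
    pred = net(corrupted)
    # Penalize the prediction only at the masked (hidden) pixels.
    return (((pred - noisy) ** 2) * mask).sum() / mask.sum().clamp(min=1)
```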
arXiv Detail & Related papers (2020-11-30T17:45:08Z)
- A Deep-Unfolded Reference-Based RPCA Network For Video Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
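Deep unfolding turns each iteration of an optimization algorithm into a network layer with learnable parameters. Below is a toy unrolled RPCA with learnable singular-value and sparsity thresholds; the paper's reference-based temporal coupling between consecutive frames is not modeled here.

```python
# Toy unrolled RPCA: each "layer" is one low-rank-plus-sparse splitting
# step with learnable thresholds, trained end to end.
import torch
import torch.nn as nn

def soft(x, tau):
    return torch.sign(x) * torch.clamp(x.abs() - tau, min=0)

class UnrolledRPCA(nn.Module):
    def __init__(self, layers=5):
        super().__init__()
        self.tau_l = nn.Parameter(torch.full((layers,), 0.1))  # SV thresholds
        self.tau_s = nn.Parameter(torch.full((layers,), 0.1))  # sparsity thresholds

    def forward(self, M):            # M: (P, T), vectorized frames as columns
        L = torch.zeros_like(M)
        S = torch.zeros_like(M)
        for k in range(len(self.tau_l)):
            # Low-rank update: singular-value soft-thresholding of M - S.
            U, sv, Vh = torch.linalg.svd(M - S, full_matrices=False)
            L = U @ torch.diag(soft(sv, self.tau_l[k])) @ Vh
            # Sparse (foreground) update: soft-thresholding of M - L.
            S = soft(M - L, self.tau_s[k])
        return L, S
```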
arXiv Detail & Related papers (2020-10-02T11:40:09Z)
- Self-Supervised training for blind multi-frame video denoising [15.078027648304115]
We propose a self-supervised approach for training multi-frame video denoising networks.
Our approach benefits from the video temporal consistency by penalizing a loss between the predicted frame t and a neighboring target frame.
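The self-supervision can be sketched in the spirit of noise2noise: denoise a stack centered on frame t and compare the prediction to a neighboring noisy frame, whose noise is independent of frame t's. Motion compensation, which the full method would need for large displacements, is omitted; `net` is a placeholder.

```python
# Frame-to-frame self-supervision sketch: the target is a *neighboring
# noisy frame*, never the input frame itself, so clean data is unneeded.
import torch
import torch.nn.functional as F

def self_supervised_video_loss(net, noisy_video, t, radius=2):
    """noisy_video: (T, C, H, W); requires radius <= t < T - radius."""
    stack = noisy_video[t - radius:t + radius + 1]   # (2r+1, C, H, W)
    pred = net(stack.unsqueeze(0))                   # predicted clean frame t
    target = noisy_video[t + 1].unsqueeze(0)         # independent-noise target
    return F.l1_loss(pred, target)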
arXiv Detail & Related papers (2020-04-15T09:08:09Z)