Unsupervised Deep Video Denoising
- URL: http://arxiv.org/abs/2011.15045v2
- Date: Tue, 1 Dec 2020 04:25:50 GMT
- Title: Unsupervised Deep Video Denoising
- Authors: Dev Yashpal Sheth, Sreyas Mohan, Joshua L. Vincent, Ramon Manzorro,
Peter A. Crozier, Mitesh M. Khapra, Eero P. Simoncelli, Carlos
Fernandez-Granda
- Abstract summary: Deep convolutional neural networks (CNNs) currently achieve state-of-the-art performance in denoising videos.
We build on recent advances in unsupervised still image denoising to develop an Unsupervised Deep Video Denoiser (UDVD).
UDVD is shown to perform competitively with current state-of-the-art supervised methods on benchmark datasets.
- Score: 26.864842892466136
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks (CNNs) currently achieve state-of-the-art
performance in denoising videos. They are typically trained with supervision,
minimizing the error between the network output and ground-truth clean videos.
However, in many applications, such as microscopy, noiseless videos are not
available. To address these cases, we build on recent advances in unsupervised
still image denoising to develop an Unsupervised Deep Video Denoiser (UDVD).
UDVD is shown to perform competitively with current state-of-the-art supervised
methods on benchmark datasets, even when trained only on a single short noisy
video sequence. Experiments on fluorescence-microscopy and electron-microscopy
data illustrate the promise of our approach for imaging modalities where
ground-truth clean data is generally not available. In addition, we study the
mechanisms used by trained CNNs to perform video denoising. An analysis of the
gradient of the network output with respect to its input reveals that these
networks perform spatio-temporal filtering that is adapted to the particular
spatial structures and motion of the underlying content. We interpret this as
an implicit and highly effective form of motion compensation, a widely used
paradigm in traditional video denoising, compression, and analysis. Code and
IPython notebooks for our analysis are available at
https://sreyas-mohan.github.io/udvd/ .
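The gradient analysis described above (differentiating the network output with respect to its input to reveal an adaptive spatio-temporal filter) can be illustrated with a minimal sketch. The code below is not from the paper: `denoise` is a toy linear stand-in for a trained video denoiser, and the gradient is taken by finite differences rather than autograd, but the resulting array plays the same role as the paper's "equivalent filter" at a chosen output pixel.

```python
import numpy as np

def denoise(frames):
    # Toy stand-in for a trained video denoiser: a spatio-temporal
    # box average over a (T, H, W) clip (temporal window of +/-1 frame,
    # 3x3 spatial window with edge padding).
    T, H, W = frames.shape
    out = np.zeros_like(frames)
    for t in range(T):
        t0, t1 = max(0, t - 1), min(T, t + 2)
        clip = frames[t0:t1]
        padded = np.pad(clip, ((0, 0), (1, 1), (1, 1)), mode="edge")
        acc = np.zeros((H, W))
        for dt in range(clip.shape[0]):
            for dy in range(3):
                for dx in range(3):
                    acc += padded[dt, dy:dy + H, dx:dx + W]
        out[t] = acc / (clip.shape[0] * 9)
    return out

def equivalent_filter(frames, t, y, x, eps=1e-4):
    """Finite-difference gradient of output pixel (t, y, x) with
    respect to every input pixel: the denoiser's equivalent
    spatio-temporal filter at that location."""
    base = denoise(frames)[t, y, x]
    grad = np.zeros_like(frames)
    for idx in np.ndindex(*frames.shape):
        bumped = frames.copy()
        bumped[idx] += eps
        grad[idx] = (denoise(bumped)[t, y, x] - base) / eps
    return grad

frames = np.random.default_rng(0).normal(size=(3, 8, 8))
g = equivalent_filter(frames, t=1, y=4, x=4)
# For an averaging denoiser the filter weights sum to ~1.
print(round(g.sum(), 3))
```

For a real CNN denoiser the same quantity would be obtained with automatic differentiation, and its weights would vary with local structure and motion instead of being a fixed box filter.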
Related papers
- Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers [30.965705043127144]
In this paper, we propose a novel unsupervised video denoising framework, named 'Temporal As a Plugin' (TAP).
By incorporating temporal modules, our method harnesses temporal information across noisy frames, complementing the spatial denoising power of the pre-trained image denoiser.
Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
arXiv Detail & Related papers (2024-09-17T15:05:33Z)
- SF-V: Single Forward Video Generation Model [57.292575082410785]
We propose a novel approach to obtain single-step video generation models by leveraging adversarial training to fine-tune pre-trained models.
Experiments demonstrate that our method achieves competitive generation quality of synthesized videos with significantly reduced computational overhead.
arXiv Detail & Related papers (2024-06-06T17:58:27Z)
- Unsupervised Microscopy Video Denoising [40.12041881289585]
We introduce a novel unsupervised network to denoise microscopy videos, i.e., image sequences captured by a fixed-location microscopy camera.
We propose a Deep Temporal Interpolation method, leveraging a temporal signal filter integrated into the bottom CNN layers, to restore microscopy videos corrupted by unknown noise types.
Our unsupervised denoising architecture is distinguished by its ability to adapt to multiple noise conditions without the need for pre-existing noise distribution knowledge.
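Denoising without pre-existing knowledge of the noise distribution, as claimed above, is commonly achieved with a blind-spot (Noise2Self-style) loss: the network is scored only on pixels it was never shown, so copying the input noise cannot reduce the loss. The sketch below illustrates that principle; `box_blur` and `masked_loss` are illustrative names for this toy example, not components of the paper's architecture.

```python
import numpy as np

def box_blur(img):
    # 3x3 box average with edge padding; used both to hide masked
    # pixels and as a toy "denoiser" in the demo below.
    padded = np.pad(img, 1, mode="edge")
    H, W = img.shape
    acc = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            acc += padded[dy:dy + H, dx:dx + W]
    return acc / 9.0

def masked_loss(denoiser, noisy, mask):
    """Blind-spot self-supervised loss: replace the masked pixels
    before denoising so the model never sees the values it must
    predict, then score the prediction only at those pixels."""
    corrupted = noisy.copy()
    corrupted[mask] = box_blur(noisy)[mask]  # hide masked pixels
    pred = denoiser(corrupted)
    return float(np.mean((pred[mask] - noisy[mask]) ** 2))

rng = np.random.default_rng(0)
clean = np.zeros((16, 16))
clean[4:12, 4:12] = 1.0
noisy = clean + 0.1 * rng.normal(size=clean.shape)
mask = rng.random(noisy.shape) < 0.1  # ~10% of pixels hidden
loss = masked_loss(box_blur, noisy, mask)
```

In practice the `denoiser` argument would be a trainable CNN and this loss would be minimized by gradient descent over many noisy frames; no clean target or noise model is required.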
arXiv Detail & Related papers (2024-04-17T17:38:54Z)
- Unsupervised Coordinate-Based Video Denoising [2.867801048665443]
We introduce a novel unsupervised video denoising deep learning approach that can help to mitigate data scarcity issues.
Our method comprises three modules: a feature generator that creates feature maps, a Denoise-Net that generates denoised but slightly blurry reference frames, and a Refine-Net that re-introduces high-frequency details.
arXiv Detail & Related papers (2023-07-01T00:11:40Z)
- Real-time Streaming Video Denoising with Bidirectional Buffers [48.57108807146537]
Real-time denoising algorithms are typically adopted on the user device to remove the noise involved during the shooting and transmission of video streams.
Recent multi-output inference works propagate bidirectional temporal features with a parallel or recurrent framework.
We propose a Bidirectional Streaming Video Denoising framework, to achieve high-fidelity real-time denoising for streaming videos with both past and future temporal receptive fields.
arXiv Detail & Related papers (2022-07-14T14:01:03Z)
- A Coding Framework and Benchmark towards Low-Bitrate Video Understanding [63.05385140193666]
We propose a traditional-neural mixed coding framework that takes advantage of both traditional codecs and neural networks (NNs).
The framework is optimized by ensuring that a transportation-efficient semantic representation of the video is preserved.
We build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach.
arXiv Detail & Related papers (2022-02-06T16:29:15Z)
- Deep Video Prior for Video Consistency and Propagation [58.250209011891904]
We present a novel and general approach for blind video temporal consistency.
Our method is only trained on a pair of original and processed videos directly instead of a large dataset.
We show that temporal consistency can be achieved by training a convolutional neural network on a video with Deep Video Prior.
arXiv Detail & Related papers (2022-01-27T16:38:52Z)
- Blind Video Temporal Consistency via Deep Video Prior [61.062900556483164]
We present a novel and general approach for blind video temporal consistency.
Our method is only trained on a pair of original and processed videos directly.
We show that temporal consistency can be achieved by training a convolutional network on a video with the Deep Video Prior.
arXiv Detail & Related papers (2020-10-22T16:19:20Z)
- Multiple Instance-Based Video Anomaly Detection using Deep Temporal Encoding-Decoding [5.255783459833821]
We propose a weakly supervised deep temporal encoding-decoding solution for anomaly detection in surveillance videos.
The proposed approach uses both abnormal and normal video clips during the training phase.
The results show that the proposed method performs similar to or better than the state-of-the-art solutions for anomaly detection in video surveillance applications.
arXiv Detail & Related papers (2020-07-03T08:22:42Z)
- Non-Adversarial Video Synthesis with Learned Priors [53.26777815740381]
We focus on the problem of generating videos from latent noise vectors, without any reference input frames.
We develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning.
Our approach generates superior quality videos compared to the existing state-of-the-art methods.
arXiv Detail & Related papers (2020-03-21T02:57:33Z)
- Restore from Restored: Video Restoration with Pseudo Clean Video [28.057705167363327]
We propose a self-supervised video denoising method called "restore-from-restored".
This method fine-tunes a pre-trained network by using a pseudo clean video during the test phase.
We analyze the restoration performance of the fine-tuned video denoising networks with the proposed self-supervision-based learning algorithm.
arXiv Detail & Related papers (2020-03-09T17:37:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.