PVDD: A Practical Video Denoising Dataset with Real-World Dynamic Scenes
- URL: http://arxiv.org/abs/2207.01356v1
- Date: Mon, 4 Jul 2022 12:30:22 GMT
- Title: PVDD: A Practical Video Denoising Dataset with Real-World Dynamic Scenes
- Authors: Xiaogang Xu, Yitong Yu, Nianjuan Jiang, Jiangbo Lu, Bei Yu, Jiaya Jia
- Abstract summary: "Practical Video Denoising dataset" (PVDD) contains 200 noisy-clean dynamic video pairs in both sRGB and RAW format.
Compared with existing datasets consisting of limited motion information, PVDD covers dynamic scenes with varying and natural motion.
- Score: 56.4361151691284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To facilitate video denoising research, we construct a compelling dataset,
namely, "Practical Video Denoising Dataset" (PVDD), containing 200 noisy-clean
dynamic video pairs in both sRGB and RAW format. Compared with existing
datasets consisting of limited motion information, PVDD covers dynamic scenes
with varying and natural motion. Different from datasets using primary Gaussian
or Poisson distributions to synthesize noise in the sRGB domain, PVDD
synthesizes realistic noise from the RAW domain with a physically meaningful
sensor noise model followed by ISP processing. Moreover, based on this dataset,
we propose a shuffle-based practical degradation model to enhance the
performance of video denoising networks on real-world sRGB videos. Extensive
experiments demonstrate that models trained on PVDD achieve better denoising
performance on many challenging real-world videos than models trained on
other existing datasets.
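The abstract states that PVDD synthesizes noise in the RAW domain with a physically meaningful sensor noise model rather than adding Gaussian or Poisson noise in sRGB. The exact model and parameters are not given in the abstract; the sketch below illustrates a common heteroscedastic Gaussian approximation of Poisson-Gaussian sensor noise, where variance grows linearly with the signal. The gain `k` and read-noise level `sigma_r` are hypothetical values, not PVDD's.

```python
import numpy as np

def add_sensor_noise(raw, k=0.01, sigma_r=0.002, rng=None):
    """Add signal-dependent shot noise plus read noise to a linear RAW image.

    raw      -- linear RAW intensities in [0, 1]
    k        -- shot-noise gain (Poisson component; hypothetical value)
    sigma_r  -- read-noise standard deviation (Gaussian component; hypothetical value)
    """
    rng = np.random.default_rng() if rng is None else rng
    # Heteroscedastic Gaussian approximation of Poisson-Gaussian noise:
    # per-pixel variance is linear in the clean signal.
    variance = k * raw + sigma_r ** 2
    noisy = raw + rng.normal(0.0, np.sqrt(variance))
    return np.clip(noisy, 0.0, 1.0)

# Example: corrupt a uniform mid-gray RAW patch.
clean = np.full((4, 4), 0.5)
noisy = add_sensor_noise(clean, rng=np.random.default_rng(0))
```

In a pipeline like the one the abstract describes, such noisy RAW frames would then be passed through ISP processing (demosaicing, white balance, gamma) to obtain realistic noisy sRGB video.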
Related papers
- BVI-RLV: A Fully Registered Dataset and Benchmarks for Low-Light Video Enhancement [56.97766265018334]
This paper introduces a low-light video dataset, consisting of 40 scenes with various motion scenarios under two distinct low-lighting conditions.
We provide fully registered ground truth data captured in normal light using a programmable motorized dolly and refine it via an image-based approach for pixel-wise frame alignment across different light levels.
Our experimental results demonstrate the significance of fully registered video pairs for low-light video enhancement (LLVE) and the comprehensive evaluation shows that the models trained with our dataset outperform those trained with the existing datasets.
arXiv Detail & Related papers (2024-07-03T22:41:49Z)
- NegVSR: Augmenting Negatives for Generalized Noise Modeling in Real-World Video Super-Resolution [12.103035018456566]
Video super-resolution (VSR) to synthesize high-resolution (HR) video from ideal datasets has been demonstrated in many works.
Applying VSR model to real-world video with unknown and complex degradation remains a challenging task.
We propose a Negatives augmentation strategy for generalized noise modeling in Video Super-Resolution (NegVSR) task.
arXiv Detail & Related papers (2023-05-24T03:23:35Z)
- Realistic Noise Synthesis with Diffusion Models [68.48859665320828]
Deep image denoising models often rely on a large amount of training data for high-quality performance.
We propose a novel method that synthesizes realistic noise using diffusion models, namely Realistic Noise Synthesize Diffusor (RNSD).
RNSD can incorporate guided multiscale content, so that more realistic noise with spatial correlations can be generated at multiple frequencies.
arXiv Detail & Related papers (2023-05-23T12:56:01Z)
- RViDeformer: Efficient Raw Video Denoising Transformer with a Larger Benchmark Dataset [16.131438855407175]
There is no large dataset with realistic motions for supervised raw video denoising.
We construct a video denoising dataset (named ReCRVD) with 120 groups of noisy-clean videos.
We propose an efficient raw video denoising transformer network (RViDeformer) that explores both short and long-distance correlations.
arXiv Detail & Related papers (2023-05-01T11:06:58Z)
- Towards Real-World Video Deblurring by Exploring Blur Formation Process [53.91239555063343]
In recent years, deep learning-based approaches have achieved promising success on the video deblurring task.
However, models trained on existing synthetic datasets still suffer from generalization problems in real-world blurry scenarios.
We propose a novel realistic blur synthesis pipeline termed RAW-Blur by leveraging blur formation cues.
arXiv Detail & Related papers (2022-08-28T09:24:52Z)
- Learning Dynamic View Synthesis With Few RGBD Cameras [60.36357774688289]
We propose to utilize RGBD cameras to synthesize free-viewpoint videos of dynamic indoor scenes.
We generate point clouds from RGBD frames and then render them into free-viewpoint videos via a neural renderer.
We introduce a simple Regional Depth-Inpainting module that adaptively inpaints missing depth values to render complete novel views.
arXiv Detail & Related papers (2022-04-22T03:17:35Z)
- Supervised Raw Video Denoising with a Benchmark Dataset on Dynamic Scenes [16.97140774983356]
We create motions for controllable objects, such as toys, and capture each static moment multiple times to generate clean video frames.
To our knowledge, this is the first dynamic video dataset with noisy-clean pairs.
We propose a raw video denoising network (RViDeNet) by exploring the temporal, spatial, and channel correlations of video frames.
arXiv Detail & Related papers (2020-03-31T08:08:59Z)
- CycleISP: Real Image Restoration via Improved Data Synthesis [166.17296369600774]
We present a framework that models the camera imaging pipeline in both forward and reverse directions.
By training a new image denoising network on realistic synthetic data, we achieve the state-of-the-art performance on real camera benchmark datasets.
arXiv Detail & Related papers (2020-03-17T15:20:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.