Learning Task-Oriented Flows to Mutually Guide Feature Alignment in
Synthesized and Real Video Denoising
- URL: http://arxiv.org/abs/2208.11803v3
- Date: Sat, 25 Mar 2023 09:19:34 GMT
- Title: Learning Task-Oriented Flows to Mutually Guide Feature Alignment in
Synthesized and Real Video Denoising
- Authors: Jiezhang Cao, Qin Wang, Jingyun Liang, Yulun Zhang, Kai Zhang, Radu
Timofte, Luc Van Gool
- Abstract summary: Video denoising aims at removing noise from videos to recover clean ones.
Some existing works show that optical flow can help denoising by exploiting additional spatial-temporal clues from nearby frames.
We propose a new multi-scale refined optical flow-guided video denoising method, which is more robust to different noise levels.
- Score: 137.5080784570804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video denoising aims at removing noise from videos to recover clean ones.
Some existing works show that optical flow can help denoising by exploiting
additional spatial-temporal clues from nearby frames. However, the flow
estimation itself is also sensitive to noise, and can be unusable under large
noise levels. To this end, we propose a new multi-scale refined optical
flow-guided video denoising method, which is more robust to different noise
levels. Our method mainly consists of a denoising-oriented flow refinement
(DFR) module and a flow-guided mutual denoising propagation (FMDP) module.
Unlike previous works that directly use off-the-shelf flow solutions, DFR first
learns robust multi-scale optical flows, and FMDP makes use of the flow
guidance by progressively introducing and refining more flow information from
low resolution to high resolution. Together with real noise degradation
synthesis, the proposed multi-scale flow-guided denoising network achieves
state-of-the-art performance on both synthetic Gaussian denoising and real
video denoising. The code will be made publicly available.
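The abstract only names the two modules, so the sketch below is a minimal PyTorch illustration of how denoising-oriented flow refinement and low-to-high-resolution flow-guided fusion could be wired together. All names (flow_warp, FlowRefiner, MultiScaleFlowGuidedFusion), channel sizes, and the fusion scheme are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of coarse-to-fine, flow-guided feature alignment in the spirit of
# the DFR/FMDP description above. Everything here is an assumed stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Backward-warp features (B, C, H, W) with a flow field (B, 2, H, W)."""
    _, _, h, w = feat.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=feat.device, dtype=feat.dtype),
        torch.arange(w, device=feat.device, dtype=feat.dtype),
        indexing="ij")
    grid = torch.stack([(xs + flow[:, 0]) / max(w - 1, 1) * 2 - 1,
                        (ys + flow[:, 1]) / max(h - 1, 1) * 2 - 1], dim=-1)
    return F.grid_sample(feat, grid, align_corners=True)

class FlowRefiner(nn.Module):
    """Refine a coarse flow from noisy features (a stand-in for the DFR idea)."""
    def __init__(self, ch):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2 * ch + 2, ch, 3, padding=1),
                                 nn.ReLU(inplace=True),
                                 nn.Conv2d(ch, 2, 3, padding=1))

    def forward(self, feat_cur, feat_nbr, flow):
        return flow + self.net(torch.cat([feat_cur, feat_nbr, flow], dim=1))

class MultiScaleFlowGuidedFusion(nn.Module):
    """Introduce and refine flow guidance from low to high resolution
    (a rough stand-in for the FMDP idea)."""
    def __init__(self, ch=32, scales=3):
        super().__init__()
        self.scales = scales
        self.refiners = nn.ModuleList(FlowRefiner(ch) for _ in range(scales))
        self.fusers = nn.ModuleList(nn.Conv2d(2 * ch, ch, 3, padding=1)
                                    for _ in range(scales))

    def forward(self, feat_cur, feat_nbr, coarse_flow):
        prev = None
        for s in reversed(range(self.scales)):            # e.g. 1/4 -> 1/2 -> 1/1
            k = 1.0 / (2 ** s)
            fc = F.interpolate(feat_cur, scale_factor=k, mode="bilinear", align_corners=False) if s else feat_cur
            fn = F.interpolate(feat_nbr, scale_factor=k, mode="bilinear", align_corners=False) if s else feat_nbr
            fl = F.interpolate(coarse_flow, scale_factor=k, mode="bilinear", align_corners=False) * k if s else coarse_flow
            fl = self.refiners[s](fc, fn, fl)             # task-oriented flow refinement
            fused = self.fusers[s](torch.cat([fc, flow_warp(fn, fl)], dim=1))
            if prev is not None:                           # carry coarse-scale guidance upward
                fused = fused + F.interpolate(prev, size=fused.shape[-2:], mode="bilinear", align_corners=False)
            prev = fused
        return prev
```

Conditioning the flow refinement on the denoising features, rather than consuming an off-the-shelf flow as-is, is the idea the abstract attributes to DFR; the coarse-to-fine accumulation mirrors the FMDP description.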
Related papers
- Denoising Reuse: Exploiting Inter-frame Motion Consistency for Efficient Video Latent Generation [36.098738197088124]
This work presents a Diffusion Reuse MOtion network to accelerate latent video generation.
Coarse-grained noise in earlier denoising steps shows high motion consistency across consecutive video frames.
Dr. Mo propagates this coarse-grained noise to the next frame by incorporating carefully designed, lightweight inter-frame motions.
arXiv Detail & Related papers (2024-09-19T07:50:34Z)
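The summary above only states that coarse-step noise is propagated to the next frame via lightweight motion. The snippet below is a speculative sketch of that reuse pattern; `denoiser_step`, the motion field, and the step threshold `switch_t` are hypothetical placeholders, not the Dr. Mo design.

```python
# Illustrative only: reuse a previous frame's coarse denoising latent by warping it
# with a lightweight motion field, then run only the remaining (fine) steps.
import torch
import torch.nn.functional as F

def flow_warp(x, flow):
    _, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=x.device, dtype=x.dtype),
                            torch.arange(w, device=x.device, dtype=x.dtype),
                            indexing="ij")
    grid = torch.stack([(xs + flow[:, 0]) / (w - 1) * 2 - 1,
                        (ys + flow[:, 1]) / (h - 1) * 2 - 1], dim=-1)
    return F.grid_sample(x, grid, align_corners=True)

def generate_next_frame(latent_prev_coarse, motion, denoiser_step, timesteps, switch_t):
    """latent_prev_coarse: previous frame's latent at an early (coarse) step.
    motion: (B, 2, H, W) lightweight inter-frame flow.
    Skip the early steps for the new frame by warping the reused coarse latent,
    then only run the remaining denoising steps."""
    latent = flow_warp(latent_prev_coarse, motion)
    for t in timesteps:              # timesteps ordered from high noise to low noise
        if t > switch_t:             # early, coarse steps are reused, not recomputed
            continue
        latent = denoiser_step(latent, t)
    return latent
```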
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
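The summary does not describe the estimator itself. As a generic illustration of quick noise-level estimation (not the paper's method), the snippet below applies Donoho's median-absolute-deviation rule to the finest diagonal Haar subband, which is dominated by noise for most natural images.

```python
# Generic quick noise-level estimate (MAD rule on the finest diagonal Haar subband).
# Shown only as an illustration of fast noise estimation, not the paper's estimator.
import numpy as np

def estimate_noise_sigma(img):
    """img: 2-D grayscale array on a linear intensity scale.
    Returns an estimate of the additive Gaussian noise standard deviation."""
    img = np.asarray(img, dtype=np.float64)
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    # Finest-scale diagonal Haar coefficients: HH = (a - b - c + d) / 2,
    # which has the same std as the noise for i.i.d. Gaussian noise.
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    hh = (a - b - c + d) / 2.0
    # MAD rule: sigma ~ median(|HH|) / 0.6745
    return np.median(np.abs(hh)) / 0.6745
```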
- Multi-stage image denoising with the wavelet transform [125.2251438120701]
Deep convolutional neural networks (CNNs) are used for image denoising via automatically mining accurate structure information.
We propose a multi-stage image denoising CNN with the wavelet transform (MWDCNN) built from three stages: a dynamic convolutional block (DCB), two cascaded wavelet transform and enhancement blocks (WEBs), and a residual block (RB).
arXiv Detail & Related papers (2022-09-26T03:28:23Z)
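As a rough skeleton of the three named stages, the sketch below wires a placeholder dynamic-convolution stage, two wavelet-enhancement blocks, and a residual reconstruction stage in sequence; the block internals are simplified stand-ins, not the MWDCNN design.

```python
# Rough three-stage skeleton (DCB -> WEBs -> RB) following only the stage names
# in the summary above; the internals are simplified placeholders.
import torch
import torch.nn as nn

def haar_dwt(x):
    """Single-level 2-D Haar transform; returns the four subbands (LL, LH, HL, HH)."""
    a, b = x[:, :, 0::2, 0::2], x[:, :, 0::2, 1::2]
    c, d = x[:, :, 1::2, 0::2], x[:, :, 1::2, 1::2]
    return (a + b + c + d) / 2, (a + b - c - d) / 2, (a - b + c - d) / 2, (a - b - c + d) / 2

class WaveletEnhanceBlock(nn.Module):
    """Process subbands jointly, then upsample back (a stand-in for a WEB)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(4 * ch, 4 * ch, 3, padding=1), nn.ReLU(True),
                                  nn.Conv2d(4 * ch, 4 * ch, 3, padding=1))
        self.up = nn.ConvTranspose2d(4 * ch, ch, 2, stride=2)

    def forward(self, x):
        subbands = torch.cat(haar_dwt(x), dim=1)
        return x + self.up(self.body(subbands))

class ThreeStageDenoiser(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        # Placeholder for the dynamic convolutional block.
        self.dcb = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(True))
        self.webs = nn.Sequential(WaveletEnhanceBlock(ch), WaveletEnhanceBlock(ch))
        self.rb = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True),
                                nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, noisy):
        # Predict a noise residual and subtract it (DnCNN-style residual learning).
        return noisy - self.rb(self.webs(self.dcb(noisy)))
```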
- MANet: Improving Video Denoising with a Multi-Alignment Network [72.93429911044903]
We present a multi-alignment network, which generates multiple flow proposals followed by attention-based averaging.
Experiments on a large-scale video dataset demonstrate that our method improves the denoising baseline model by 0.2 dB.
arXiv Detail & Related papers (2022-02-20T00:52:07Z)
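A minimal sketch of attention-based averaging over several flow proposals follows; the weighting network, tensor shapes, and the choice to average the flows themselves (rather than the features aligned by them) are assumptions, not the MANet design.

```python
# Fuse N flow proposals with per-pixel attention weights (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowProposalFusion(nn.Module):
    def __init__(self, feat_ch, num_proposals):
        super().__init__()
        # Predict per-pixel attention logits, one per flow proposal.
        self.score = nn.Conv2d(feat_ch + 2 * num_proposals, num_proposals, 3, padding=1)

    def forward(self, feat, flows):
        # feat: (B, C, H, W); flows: (B, N, 2, H, W)
        logits = self.score(torch.cat([feat, flows.flatten(1, 2)], dim=1))  # (B, N, H, W)
        attn = F.softmax(logits, dim=1).unsqueeze(2)                        # (B, N, 1, H, W)
        return (attn * flows).sum(dim=1)                                    # attention-averaged flow

# Usage sketch:
# fuse = FlowProposalFusion(feat_ch=64, num_proposals=4)
# fused_flow = fuse(features, flow_proposals)   # (B, 2, H, W)
```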
- FINO: Flow-based Joint Image and Noise Model [23.9749061109964]
We propose a novel Flow-based joint Image and NOise model (FINO) that distinctly decouples the image and noise in the latent space and losslessly reconstructs them via a series of invertible transformations.
arXiv Detail & Related papers (2021-11-11T02:51:54Z)
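As a toy illustration of why a stack of invertible transformations can decouple components in a latent space and still reconstruct the input losslessly, the sketch below implements a single additive coupling layer; it demonstrates only the invertibility idea, not the FINO architecture.

```python
# Additive coupling: y1 = x1, y2 = x2 + t(x1); exactly invertible via x2 = y2 - t(y1).
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.t = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(True),
                               nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.t(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.t(y1)], dim=1)

# Stacking such layers gives a bijection to a latent space in which one half of the
# channels can be trained to carry the clean image and the other half the noise;
# running `inverse` reconstructs the input.
x = torch.randn(1, 8, 16, 16)
layer = AdditiveCoupling(4)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-5)
```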
- Learning Model-Blind Temporal Denoisers without Ground Truths [46.778450578529814]
Denoisers trained with synthetic data often fail to cope with the diversity of unknown noises.
Previous image-based methods lead to noise overfitting if directly applied to video denoisers.
We propose a general framework for video denoising networks that successfully addresses these challenges.
arXiv Detail & Related papers (2020-07-07T07:19:48Z)
- Flexible Image Denoising with Multi-layer Conditional Feature Modulation [56.018132592622706]
We present a novel flexible image denoising network (CFMNet) by equipping a U-Net backbone with conditional feature modulation (CFM) modules.
In comparison to channel-wise shifting only in the first layer, CFMNet can make better use of noise level information by deploying multiple layers of CFM.
Our CFMNet is effective in exploiting noise level information for flexible non-blind denoising, and performs favorably against the existing deep image denoising methods in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2020-06-24T06:00:00Z)
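A hedged sketch of a conditional feature modulation block follows: the feature maps are scaled and shifted using the noise level as the condition (FiLM-style). The layer sizes and the use of a spatial noise-level map are assumptions rather than the CFMNet configuration.

```python
# Noise-level-conditioned feature modulation block (illustrative sketch).
import torch
import torch.nn as nn

class CFMBlock(nn.Module):
    def __init__(self, feat_ch, cond_ch=1, hidden=64):
        super().__init__()
        # Map the noise-level map to per-channel scale (gamma) and shift (beta).
        self.cond = nn.Sequential(
            nn.Conv2d(cond_ch, hidden, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 2 * feat_ch, 3, padding=1),
        )

    def forward(self, feat, noise_level_map):
        gamma, beta = self.cond(noise_level_map).chunk(2, dim=1)
        return feat * (1 + gamma) + beta

# Deploying such blocks at multiple layers of a U-Net lets every scale see the noise
# level, instead of only shifting channels in the first layer.
feat = torch.randn(1, 32, 64, 64)
sigma_map = torch.full((1, 1, 64, 64), 25 / 255)   # assumed noise-level map
out = CFMBlock(feat_ch=32)(feat, sigma_map)
```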
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.