Unsupervised Coordinate-Based Video Denoising
- URL: http://arxiv.org/abs/2307.00179v1
- Date: Sat, 1 Jul 2023 00:11:40 GMT
- Title: Unsupervised Coordinate-Based Video Denoising
- Authors: Mary Damilola Aiyetigbo, Dineshchandar Ravichandran, Reda Chalhoub,
Peter Kalivas, Nianyi Li
- Abstract summary: We introduce a novel unsupervised video denoising deep learning approach that can help to mitigate data scarcity issues.
Our method comprises three modules: a Feature generator creating feature maps, a Denoise-Net generating denoised but slightly blurry reference frames, and a Refine-Net re-introducing high-frequency details.
- Score: 2.867801048665443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a novel unsupervised video denoising deep
learning approach that can help to mitigate data scarcity issues and shows
robustness against different noise patterns, enhancing its broad applicability.
Our method comprises three modules: a Feature generator creating feature maps,
a Denoise-Net generating denoised but slightly blurry reference frames, and a
Refine-Net re-introducing high-frequency details. By leveraging the
coordinate-based network, we can greatly simplify the network structure while
preserving high-frequency details in the denoised video frames. Extensive
experiments on both simulated and real-captured data demonstrate that our method can
effectively denoise real-world calcium imaging video sequences without prior
knowledge of noise models or data augmentation during training.
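To make the three-module pipeline concrete, the following is a minimal PyTorch sketch of a coordinate-based denoising setup. The positional encoding, layer sizes, and module interfaces are illustrative assumptions, not the authors' implementation; the Feature generator is folded into the positional encoding for brevity.

```python
# A minimal PyTorch sketch of a coordinate-based denoising pipeline in the spirit
# of the abstract above. The positional encoding, layer sizes, and module
# interfaces are illustrative assumptions, not the authors' implementation.
import math
import torch
import torch.nn as nn


def positional_encoding(coords, num_freqs=6):
    """Map (x, y, t) coordinates to sin/cos features (a common INR choice, assumed here)."""
    feats = [coords]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * math.pi * coords))
        feats.append(torch.cos((2.0 ** k) * math.pi * coords))
    return torch.cat(feats, dim=-1)


class DenoiseNet(nn.Module):
    """Coordinate MLP mapping encoded (x, y, t) to a coarse (slightly blurry) intensity."""
    def __init__(self, in_dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, enc_coords):
        return self.mlp(enc_coords)


class RefineNet(nn.Module):
    """Small CNN that re-introduces high-frequency detail using the noisy frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, coarse, noisy):
        # Predict a residual on top of the coarse reconstruction.
        return coarse + self.net(torch.cat([coarse, noisy], dim=1))


# Toy usage on a single 64x64 frame (temporal coordinate fixed to 0).
H = W = 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys, torch.zeros_like(xs)], dim=-1).reshape(-1, 3)  # (H*W, 3)
enc = positional_encoding(coords)                                            # (H*W, 39)

noisy_frame = torch.rand(1, 1, H, W)            # stand-in for a noisy video frame
denoise_net = DenoiseNet(in_dim=enc.shape[-1])
refine_net = RefineNet()

coarse = denoise_net(enc).reshape(1, 1, H, W)   # blurry reference frame
refined = refine_net(coarse, noisy_frame)       # high-frequency details restored
print(coarse.shape, refined.shape)
```

The residual connection in the sketch mirrors the abstract's division of labor: the coordinate MLP produces a smooth reference frame, and the refinement CNN adds back high-frequency content from the noisy observation.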
Related papers
- Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers [30.965705043127144]
In this paper, we propose a novel unsupervised video denoising framework, named 'Temporal As a Plugin' (TAP).
By incorporating temporal modules, our method can harness temporal information across noisy frames, complementing its power of spatial denoising.
Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
arXiv Detail & Related papers (2024-09-17T15:05:33Z) - Unsupervised Microscopy Video Denoising [40.12041881289585]
We introduce a novel unsupervised network to denoise microscopy videos featuring image sequences captured by a fixed-location microscopy camera.
We propose a DeepTemporal Interpolation method, leveraging a temporal signal filter integrated into the bottom CNN layers, to restore microscopy videos corrupted by unknown noise types.
Our unsupervised denoising architecture is distinguished by its ability to adapt to multiple noise conditions without the need for pre-existing noise distribution knowledge.
arXiv Detail & Related papers (2024-04-17T17:38:54Z) - Reconstruct-and-Generate Diffusion Model for Detail-Preserving Image
Denoising [16.43285056788183]
We propose a novel approach called the Reconstruct-and-Generate Diffusion Model (RnG).
Our method leverages a reconstructive denoising network to recover the majority of the underlying clean signal.
It employs a diffusion algorithm to generate residual high-frequency details, thereby enhancing visual quality.
arXiv Detail & Related papers (2023-09-19T16:01:20Z) - Zero-shot Blind Image Denoising via Implicit Neural Representations [77.79032012459243]
We propose an alternative denoising strategy that leverages the architectural inductive bias of implicit neural representations (INRs).
We show that our method outperforms existing zero-shot denoising methods under an extensive set of low-noise or real-noise scenarios.
arXiv Detail & Related papers (2022-04-05T12:46:36Z) - Practical Blind Image Denoising via Swin-Conv-UNet and Data Synthesis [148.16279746287452]
We propose a swin-conv block to incorporate the local modeling ability of the residual convolutional layer and the non-local modeling ability of the Swin Transformer block.
For the training data synthesis, we design a practical noise degradation model which takes into consideration different kinds of noise.
Experiments on AWGN removal and real image denoising demonstrate that the new network architecture design achieves state-of-the-art performance.
arXiv Detail & Related papers (2022-03-24T18:11:31Z) - Dynamic Slimmable Denoising Network [64.77565006158895]
Dynamic slimmable denoising network (DDS-Net) is a general method for achieving good denoising quality with less computational complexity.
Our DDS-Net is empowered with the ability of dynamic inference by a dynamic gate.
Our experiments demonstrate that DDS-Net consistently outperforms the state-of-the-art individually trained static denoising networks.
arXiv Detail & Related papers (2021-10-17T22:45:33Z) - Learning a Model-Driven Variational Network for Deformable Image
Registration [89.9830129923847]
VR-Net is a novel cascaded variational network for unsupervised deformable image registration.
It outperforms state-of-the-art deep learning methods on registration accuracy.
It maintains the fast inference speed of deep learning and the data-efficiency of variational models.
arXiv Detail & Related papers (2021-05-25T21:37:37Z) - Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and
Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z) - Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on the sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance (a minimal sketch of this sub-sampling scheme appears after this list).
arXiv Detail & Related papers (2021-01-08T02:03:25Z) - Spatial-Adaptive Network for Single Image Denoising [14.643663950015334]
We propose a novel spatial-adaptive denoising network (SADNet) for efficient single image blind noise removal.
Our method can surpass the state-of-the-art denoising methods both quantitatively and visually.
arXiv Detail & Related papers (2020-01-28T12:24:17Z)
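For the Neighbor2Neighbor entry above, the following is a minimal PyTorch sketch of the sub-sampling scheme it describes: two sub-images drawn from the same noisy image serve as training input and target, plus a simplified version of the regularizer. The sub-sampler layout, the stand-in network, and the loss weight are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal PyTorch sketch of the neighbor sub-sampling idea behind Neighbor2Neighbor.
# The sub-sampler, the stand-in network, and the regularizer weight are simplifying
# assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_cell_indices(n, h, w):
    """Pick two *different* pixels in each 2x2 cell; the pattern is reused for paired tensors."""
    idx1 = torch.randint(0, 4, (n, 1, h // 2, w // 2))
    idx2 = (idx1 + torch.randint(1, 4, idx1.shape)) % 4
    return idx1, idx2


def subsample(img, idx):
    """Gather one pixel per 2x2 cell of img (N, C, H, W) according to idx."""
    n, c, h, w = img.shape
    cells = img.reshape(n, c, h // 2, 2, w // 2, 2).permute(0, 1, 2, 4, 3, 5)
    cells = cells.reshape(n, c, h // 2, w // 2, 4)
    idx = idx.unsqueeze(-1).expand(-1, c, -1, -1, 1)
    return torch.gather(cells, 4, idx).squeeze(-1)


denoiser = nn.Sequential(   # stand-in denoising network (the paper uses a U-Net)
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.Conv2d(32, 1, 3, padding=1)
)

noisy = torch.rand(2, 1, 64, 64)                 # a batch of single noisy images
idx1, idx2 = make_cell_indices(2, 64, 64)
g1, g2 = subsample(noisy, idx1), subsample(noisy, idx2)

# Reconstruction term: denoise one sub-image toward its neighboring sub-image.
out = denoiser(g1)
rec = F.mse_loss(out, g2)

# Regularization term (simplified): compare against the sub-image gap of the
# denoised full image, using the same sampling pattern, without back-propagating
# through the full-image pass.
with torch.no_grad():
    d = denoiser(noisy)
reg = F.mse_loss(out - g2, subsample(d, idx1) - subsample(d, idx2))

loss = rec + 2.0 * reg   # the trade-off weight 2.0 is an assumption
loss.backward()
print(float(loss))
```

Note that the same sampling pattern is reused when sub-sampling the denoised full image, which is why the indices are generated once and passed to both calls.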
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.