Multi-Scale Memory-Based Video Deblurring
- URL: http://arxiv.org/abs/2204.02977v1
- Date: Wed, 6 Apr 2022 08:48:56 GMT
- Title: Multi-Scale Memory-Based Video Deblurring
- Authors: Bo Ji and Angela Yao
- Abstract summary: We design a memory branch to memorize the blurry-sharp feature pairs in the memory bank.
To enrich the memory of our memory bank, we also design a bidirectional recurrency and multi-scale strategy.
Experimental results demonstrate that our model outperforms other state-of-the-art methods.
- Score: 34.488707652997704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video deblurring has achieved remarkable progress thanks to the success of
deep neural networks. Most methods solve the deblurring problem end-to-end with
limited information propagation from the video sequence. However, different
frame regions exhibit different characteristics and should be provided with
corresponding relevant information. To achieve fine-grained deblurring, we
design a memory branch to memorize the blurry-sharp feature pairs in the
memory bank, thus providing useful information for the blurry query input. To
enrich the memory of our memory bank, we further design a bidirectional
recurrency and multi-scale strategy based on the memory bank. Experimental
results demonstrate that our model outperforms other state-of-the-art methods
while keeping the model complexity and inference time low. The code is
available at https://github.com/jibo27/MemDeblur.
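The mechanism described above amounts to an attention-style read from a bank of blurry-sharp feature pairs: a blurry query is matched against the stored blurry keys, and the paired sharp values are blended into a readout. Below is a minimal NumPy sketch of such a memory read; the function names, shapes, and scaled dot-product similarity are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def memory_read(query, mem_keys, mem_values):
    """Attention-style read from a blurry-sharp memory bank (illustrative).

    query:      (N, C) blurry features of the current frame
    mem_keys:   (M, C) stored blurry features
    mem_values: (M, C) stored sharp features paired with the keys
    Returns a (N, C) readout: a similarity-weighted blend of sharp features.
    """
    scores = query @ mem_keys.T / np.sqrt(query.shape[1])  # (N, M) similarities
    weights = softmax(scores, axis=1)                      # normalize per query
    return weights @ mem_values                            # (N, C) readout

# Toy usage: 4 query positions, 8 memory slots, 16 channels
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 16))
k = rng.standard_normal((8, 16))
v = rng.standard_normal((8, 16))
out = memory_read(q, k, v)
```

In the paper's multi-scale, bidirectionally recurrent setting, such a read would be repeated per scale and per direction to enrich the bank; this sketch only shows the single query-key-value lookup at the core.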
Related papers
- MatAnyone: Stable Video Matting with Consistent Memory Propagation [55.93983057352684]
MatAnyone is a robust framework tailored for target-assigned video matting.
We introduce a consistent memory propagation module via region-adaptive memory fusion.
For robust training, we present a larger, high-quality, and diverse dataset for video matting.
arXiv Detail & Related papers (2025-01-24T17:56:24Z)
- MAMBA: Multi-level Aggregation via Memory Bank for Video Object Detection [35.16197118579414]
We propose a multi-level aggregation architecture via memory bank called MAMBA.
Specifically, our memory bank employs two novel operations to eliminate the disadvantages of existing methods.
Compared with existing state-of-the-art methods, our method achieves superior performance in terms of both speed and accuracy.
arXiv Detail & Related papers (2023-09-26T21:22:03Z)
- Memory-Efficient Continual Learning Object Segmentation for Long Video [7.9190306016374485]
We propose two novel techniques to reduce the memory requirement of Online VOS methods while improving modeling accuracy and generalization on long videos.
Motivated by the success of continual learning techniques in preserving previously-learned knowledge, we propose Gated-Regularizer Continual Learning (GRCL) and Reconstruction-based Memory Selection Continual Learning (RMSCL).
Experimental results show that the proposed methods are able to improve the performance of Online VOS models by more than 8%, with improved robustness on long-video datasets.
arXiv Detail & Related papers (2023-05-28T19:14:25Z)
- Just a Glimpse: Rethinking Temporal Information for Video Continual Learning [58.7097258722291]
We propose a novel replay mechanism for effective video continual learning based on individual/single frames.
Under extreme memory constraints, video diversity plays a more significant role than temporal information.
Our method achieves state-of-the-art performance, outperforming the previous state-of-the-art by up to 21.49%.
arXiv Detail & Related papers (2023-05-28T19:14:25Z)
- Memory Efficient Temporal & Visual Graph Model for Unsupervised Video Domain Adaptation [50.158454960223274]
Existing video domain adaptation (DA) methods need to store all temporal combinations of video frames or pair the source and target videos.
We propose a memory-efficient graph-based video DA approach.
arXiv Detail & Related papers (2022-08-13T02:56:10Z)
- Per-Clip Video Object Segmentation [110.08925274049409]
Recently, memory-based approaches have shown promising results on semi-supervised video object segmentation.
We treat video object segmentation as clip-wise mask propagation.
We propose a new method tailored for per-clip inference.
arXiv Detail & Related papers (2022-08-03T09:02:29Z)
- Recurrent Dynamic Embedding for Video Object Segmentation [54.52527157232795]
We propose a Recurrent Dynamic Embedding (RDE) to build a memory bank of constant size.
We propose an unbiased guidance loss during the training stage, which makes SAM more robust in long videos.
We also design a novel self-correction strategy so that the network can repair the embeddings of masks with different qualities in the memory bank.
arXiv Detail & Related papers (2022-05-08T02:24:43Z)
- Memformer: A Memory-Augmented Transformer for Sequence Modeling [55.780849185884996]
We present Memformer, an efficient neural network for sequence modeling.
Our model achieves linear time complexity and constant memory space complexity when processing long sequences.
arXiv Detail & Related papers (2020-10-14T09:03:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.