Improving Image De-raining Using Reference-Guided Transformers
- URL: http://arxiv.org/abs/2408.00258v1
- Date: Thu, 1 Aug 2024 03:31:45 GMT
- Title: Improving Image De-raining Using Reference-Guided Transformers
- Authors: Zihao Ye, Jaehoon Cho, Changjae Oh
- Abstract summary: We present a reference-guided de-raining filter, a transformer network that enhances de-raining results using a reference clean image as guidance.
We validate our method on three datasets and show that our module can improve the performance of existing prior-based, CNN-based, and transformer-based approaches.
- Score: 9.867364371892693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image de-raining is a critical task in computer vision to improve visibility and enhance the robustness of outdoor vision systems. While recent advances in de-raining methods have achieved remarkable performance, the challenge remains to produce high-quality and visually pleasing de-rained results. In this paper, we present a reference-guided de-raining filter, a transformer network that enhances de-raining results using a reference clean image as guidance. We leverage the capabilities of the proposed module to further refine the images de-rained by existing methods. We validate our method on three datasets and show that our module can improve the performance of existing prior-based, CNN-based, and transformer-based approaches.
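The paper's code is not reproduced here; the sketch below is a minimal guess at how a reference-guided refinement module could look, with features of a coarsely de-rained image attending to features of a clean reference image via cross-attention and predicting a residual correction. The module name, layer sizes, and residual fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of reference-guided refinement via cross-attention (PyTorch).
# An illustrative guess at the idea, not the paper's actual network.
import torch
import torch.nn as nn

class ReferenceGuidedFilter(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.embed_derained = nn.Conv2d(3, dim, 3, padding=1)   # features of de-rained input
        self.embed_reference = nn.Conv2d(3, dim, 3, padding=1)  # features of clean reference
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_residual = nn.Conv2d(dim, 3, 3, padding=1)      # predict a correction

    def forward(self, derained, reference):
        q = self.embed_derained(derained)        # (B, C, H, W)
        kv = self.embed_reference(reference)
        B, C, H, W = q.shape
        q_seq = q.flatten(2).transpose(1, 2)     # (B, H*W, C): queries from de-rained image
        kv_seq = kv.flatten(2).transpose(1, 2)   # keys/values from the reference
        attended, _ = self.cross_attn(q_seq, kv_seq, kv_seq)
        attended = attended.transpose(1, 2).reshape(B, C, H, W)
        return derained + self.to_residual(attended)  # residual refinement

# Usage: refine the output of any existing de-raining method.
module = ReferenceGuidedFilter()
coarse = torch.rand(1, 3, 64, 64)     # de-rained image from an existing method
reference = torch.rand(1, 3, 64, 64)  # clean reference image
refined = module(coarse, reference)
print(refined.shape)  # torch.Size([1, 3, 64, 64])
```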
Related papers
- Adaptive Frequency Enhancement Network for Single Image Deraining [10.64622976628013]
We introduce a novel end-to-end Adaptive Frequency Enhancement Network (AFENet) specifically for single image deraining.
We employ convolutions of different scales to adaptively decompose image frequency bands, introduce a feature enhancement module, and present a novel interaction module.
This approach empowers the deraining network to eliminate diverse and complex rainy patterns and to reconstruct image details accurately.
arXiv Detail & Related papers (2024-07-19T13:24:05Z)
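AFENet's architecture is only summarized above. As a rough illustration of frequency-band decomposition with convolutions of different scales, the sketch below splits an image into low, mid, and high frequency bands using box blurs of increasing size; the kernel sizes and the decomposition itself are assumptions, not AFENet's actual design.

```python
# Illustrative frequency-band decomposition with multi-scale blurs (PyTorch).
import torch
import torch.nn.functional as F

def frequency_bands(img, kernel_sizes=(3, 9)):
    """Split an image into low/mid/high frequency bands using box blurs
    of increasing size (a crude Laplacian-pyramid-style decomposition)."""
    blurs = []
    for k in kernel_sizes:
        w = torch.ones(img.shape[1], 1, k, k) / (k * k)  # depthwise box kernel
        blurs.append(F.conv2d(img, w, padding=k // 2, groups=img.shape[1]))
    high = img - blurs[0]        # fine detail, where rain streaks mostly live
    mid = blurs[0] - blurs[1]    # medium-scale structure
    low = blurs[1]               # coarse scene layout
    return low, mid, high

img = torch.rand(1, 3, 64, 64)
low, mid, high = frequency_bands(img)
assert torch.allclose(low + mid + high, img, atol=1e-5)  # bands sum back to the image
```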
- RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering [50.14860376758962]
We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images.
Based on the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation.
We jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss.
arXiv Detail & Related papers (2024-04-17T14:07:22Z)
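The exact form of RainyScape's adaptive direction-sensitive gradient-based reconstruction loss is not given above. The sketch below shows a generic direction-weighted gradient loss merely to convey the idea; the weighting scheme (down-weighting the vertical direction, where near-vertical rain streaks dominate) is a hypothetical choice, not the paper's loss.

```python
# Generic sketch of a direction-sensitive gradient reconstruction loss.
import torch

def directional_gradient_loss(pred, target, w_x=1.0, w_y=0.5):
    """L1 loss on horizontal/vertical image gradients, with per-direction
    weights as one plausible 'direction-sensitive' choice."""
    dx = lambda t: t[..., :, 1:] - t[..., :, :-1]  # horizontal gradient
    dy = lambda t: t[..., 1:, :] - t[..., :-1, :]  # vertical gradient
    loss_x = (dx(pred) - dx(target)).abs().mean()
    loss_y = (dy(pred) - dy(target)).abs().mean()
    return w_x * loss_x + w_y * loss_y

pred = torch.rand(1, 3, 64, 64, requires_grad=True)
target = torch.rand(1, 3, 64, 64)
loss = directional_gradient_loss(pred, target)
loss.backward()  # usable as a training objective alongside a photometric term
```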
- Gabor-guided transformer for single image deraining [2.330361251490783]
We propose a Gabor-guided transformer (Gabformer) for single image deraining.
The focus on local texture features is enhanced by incorporating the information processed by the Gabor filter into the query vector.
Our method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-12T07:41:51Z)
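To make the Gabor-guided query concrete, here is a hedged sketch: a Gabor filter response is computed over the feature map and added to the attention queries. The kernel parameters and the additive fusion are assumptions; only the overall idea follows the summary above.

```python
# Sketch of injecting Gabor-filtered features into attention queries.
import math
import torch
import torch.nn.functional as F

def gabor_kernel(ksize=7, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a Gaussian envelope times a sinusoid,
    oriented at angle theta. Emphasizes local texture at one orientation."""
    half = ksize // 2
    y, x = torch.meshgrid(torch.arange(-half, half + 1, dtype=torch.float32),
                          torch.arange(-half, half + 1, dtype=torch.float32),
                          indexing="ij")
    x_r = x * math.cos(theta) + y * math.sin(theta)
    y_r = -x * math.sin(theta) + y * math.cos(theta)
    g = torch.exp(-(x_r**2 + (gamma * y_r)**2) / (2 * sigma**2))
    return g * torch.cos(2 * math.pi * x_r / lambd + psi)

# Enrich the attention query with Gabor texture responses.
feat = torch.rand(1, 16, 32, 32)                    # token feature map (B, C, H, W)
k = gabor_kernel().repeat(16, 1, 1, 1)              # one kernel per channel (depthwise)
texture = F.conv2d(feat, k, padding=3, groups=16)   # local texture response
query = (feat + texture).flatten(2).transpose(1, 2) # (B, H*W, C) queries for attention
print(query.shape)  # torch.Size([1, 1024, 16])
```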
- Contrastive Learning Based Recursive Dynamic Multi-Scale Network for Image Deraining [47.764883957379745]
Rain streaks significantly decrease the visibility of captured images.
Existing deep learning-based image deraining methods employ manually crafted networks and learn a straightforward projection from rainy images to clear images.
We propose a contrastive learning-based image deraining method that investigates the correlation between rainy and clear images.
arXiv Detail & Related papers (2023-05-29T13:51:41Z)
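As a generic illustration of exploiting the correlation between rainy and clear images contrastively, the sketch below pulls the restored image toward the clear ground truth and pushes it away from the rainy input in a learned feature space. The stand-in encoder and the single-negative InfoNCE form are standard choices, not necessarily the paper's.

```python
# Generic sketch of a contrastive objective for deraining.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(  # stand-in feature extractor (a real model might use VGG)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def contrastive_loss(derained, clear, rainy, tau=0.1):
    z = F.normalize(encoder(derained), dim=1)    # anchor: restored image
    z_pos = F.normalize(encoder(clear), dim=1)   # positive: ground-truth clear image
    z_neg = F.normalize(encoder(rainy), dim=1)   # negative: rainy input
    pos = (z * z_pos).sum(1) / tau               # cosine similarities
    neg = (z * z_neg).sum(1) / tau
    # InfoNCE with a single negative: -log( e^pos / (e^pos + e^neg) )
    return -(pos - torch.logsumexp(torch.stack([pos, neg]), dim=0)).mean()

derained = torch.rand(2, 3, 64, 64, requires_grad=True)
clear, rainy = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
print(contrastive_loss(derained, clear, rainy))
```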
- Single Image Deraining via Feature-based Deep Convolutional Neural Network [13.39233717329633]
A single image deraining algorithm based on the combination of data-driven and model-based approaches is proposed.
Experiments show that the proposed algorithm significantly outperforms state-of-the-art methods in terms of both qualitative and quantitative measures.
arXiv Detail & Related papers (2023-05-03T13:12:51Z)
- Activating More Pixels in Image Super-Resolution Transformer [53.87533738125943]
Transformer-based methods have shown impressive performance in low-level vision tasks, such as image super-resolution.
We propose a novel Hybrid Attention Transformer (HAT) to activate more input pixels for better reconstruction.
Our overall method significantly outperforms the state-of-the-art methods by more than 1 dB.
arXiv Detail & Related papers (2022-05-09T17:36:58Z)
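HAT's design is summarized only briefly above. The following sketch combines two common attention mechanisms, channel attention and window self-attention, to suggest how a hybrid attention block that "activates more pixels" might look; the block structure and sizes are assumptions, not the paper's architecture.

```python
# Sketch of a hybrid block: channel attention + window self-attention (PyTorch).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style gate: global pooling -> per-channel scale."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, dim // reduction, 1),
            nn.ReLU(), nn.Conv2d(dim // reduction, dim, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.mlp(x)

class HybridBlock(nn.Module):
    def __init__(self, dim=32, heads=4, window=8):
        super().__init__()
        self.window = window
        self.ca = ChannelAttention(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
    def forward(self, x):              # x: (B, C, H, W), H and W divisible by window
        B, C, H, W = x.shape
        ca_out = self.ca(x)            # channel attention uses global statistics
        # Partition into non-overlapping windows and self-attend within each.
        w = self.window
        t = x.reshape(B, C, H // w, w, W // w, w).permute(0, 2, 4, 3, 5, 1)
        t = t.reshape(B * (H // w) * (W // w), w * w, C)
        t, _ = self.attn(t, t, t)
        t = t.reshape(B, H // w, W // w, w, w, C).permute(0, 5, 1, 3, 2, 4)
        sa_out = t.reshape(B, C, H, W)
        return x + ca_out + sa_out     # combine both attention paths

block = HybridBlock()
y = block(torch.rand(1, 32, 32, 32))
print(y.shape)  # torch.Size([1, 32, 32, 32])
```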
- AdaViT: Adaptive Vision Transformers for Efficient Image Recognition [78.07924262215181]
We introduce AdaViT, an adaptive framework that learns usage policies determining which patches, self-attention heads, and transformer blocks to use.
Our method obtains more than a 2x improvement in efficiency compared to state-of-the-art vision transformers, with only a 0.8% drop in accuracy.
arXiv Detail & Related papers (2021-11-30T18:57:02Z)
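A plausible sketch of per-input block gating in the AdaViT spirit: a small policy head emits a discrete execute/skip decision via straight-through Gumbel-softmax. This is a common way to learn such usage policies, not necessarily the paper's exact mechanism; at inference, skipped blocks would simply not be computed, saving FLOPs.

```python
# Sketch of learned block-level gating with straight-through Gumbel-softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.policy = nn.Linear(dim, 2)  # logits for [skip, execute]

    def forward(self, x):                    # x: (B, tokens, dim)
        logits = self.policy(x.mean(dim=1))  # one decision per input
        # Differentiable discrete choice via straight-through Gumbel-softmax.
        gate = F.gumbel_softmax(logits, tau=1.0, hard=True)[:, 1]  # (B,)
        out = self.block(x)                  # computed here for simplicity
        # Executed inputs get the block output; skipped ones pass through.
        return gate[:, None, None] * out + (1 - gate[:, None, None]) * x

x = torch.rand(2, 16, 64)
print(GatedBlock()(x).shape)  # torch.Size([2, 16, 64])
```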
- Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation [111.89519571205778]
In this work, we propose an alternative domain-adaptive approach to depth estimation.
Our novel two-step structure first trains a depth estimation network with labeled synthetic images in a supervised manner.
The results of our experiments show that the proposed method improves the network's performance on real images by a considerable margin.
arXiv Detail & Related papers (2021-09-24T08:11:34Z)
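The two-step structure described above (supervised training on labeled synthetic images, then adaptation to real images) can be sketched with a shared encoder, a depth head, and a domain discriminator. The GAN-style adversarial losses below are standard choices, not the paper's exact objectives; all modules are minimal stand-ins.

```python
# Sketch: supervised depth on synthetic data, then adversarial feature alignment.
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())   # shared encoder
depth_head = nn.Conv2d(16, 1, 3, padding=1)                       # depth decoder
disc = nn.Sequential(nn.Conv2d(16, 1, 3, padding=1),              # domain discriminator
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid())
bce = nn.BCELoss()

# Step 1: supervised training on labeled synthetic images.
synth, synth_depth = torch.rand(2, 3, 32, 32), torch.rand(2, 1, 32, 32)
loss_sup = nn.functional.l1_loss(depth_head(feat(synth)), synth_depth)

# Step 2: adversarial adaptation -- the encoder tries to make real-image
# features indistinguishable from synthetic ones.
real = torch.rand(2, 3, 32, 32)
d_real = disc(feat(real).detach())     # discriminator update uses detached features
d_synth = disc(feat(synth).detach())
loss_disc = bce(d_synth, torch.ones_like(d_synth)) + bce(d_real, torch.zeros_like(d_real))
loss_enc = bce(disc(feat(real)), torch.ones_like(d_real))  # encoder fools the discriminator
print(loss_sup.item(), loss_disc.item(), loss_enc.item())
```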
- Enhancing Photorealism Enhancement [83.88433283714461]
We present an approach to enhancing the realism of synthetic images using a convolutional network.
We analyze scene layout distributions in commonly used datasets and find that they differ in important ways.
We report substantial gains in stability and realism in comparison to recent image-to-image translation methods.
arXiv Detail & Related papers (2021-05-10T19:00:49Z)
- Learned Camera Gain and Exposure Control for Improved Visual Feature Detection and Matching [12.870196901446208]
We explore a data-driven approach to account for environmental lighting changes, improving the quality of images for use in visual odometry (VO) and visual simultaneous localization and mapping (SLAM).
We train a deep convolutional neural network model to predictively adjust camera gain and exposure time parameters.
We demonstrate through extensive real-world experiments that our network can anticipate and compensate for dramatic lighting changes.
arXiv Detail & Related papers (2021-02-08T16:46:09Z)
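As a sketch of the predictive gain/exposure idea, the small CNN below maps a camera frame to log-scale corrections for gain and exposure time. The architecture and the multiplicative update rule are illustrative assumptions, not the paper's network.

```python
# Sketch of a CNN predicting camera gain and exposure-time adjustments.
import torch
import torch.nn as nn

class ExposureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 5, stride=2), nn.ReLU(),
            nn.Conv2d(8, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2))            # outputs: [delta_gain, delta_exposure]

    def forward(self, gray_frame):
        # Predict log-scale corrections so updates are multiplicative and
        # symmetric around "no change".
        return self.net(gray_frame)

net = ExposureNet()
frame = torch.rand(1, 1, 120, 160)       # grayscale camera frame
delta = net(frame)
gain, exposure_ms = 2.0, 10.0            # current camera settings
gain *= torch.exp(delta[0, 0]).item()    # apply predicted corrections
exposure_ms *= torch.exp(delta[0, 1]).item()
print(gain, exposure_ms)
```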
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.