DDT: Dual-branch Deformable Transformer for Image Denoising
- URL: http://arxiv.org/abs/2304.06346v1
- Date: Thu, 13 Apr 2023 08:54:44 GMT
- Title: DDT: Dual-branch Deformable Transformer for Image Denoising
- Authors: Kangliang Liu, Xiangcheng Du, Sijie Liu, Yingbin Zheng, Xingjiao Wu,
Cheng Jin
- Abstract summary: The transformer is beneficial for image denoising tasks since it can model long-range dependencies and thus overcome the limitations of convolutional inductive biases.
We propose an efficient Dual-branch Deformable Transformer (DDT) denoising network that captures both local and global interactions in parallel.
- Score: 6.596462333804802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformer is beneficial for image denoising tasks since it can model
long-range dependencies and thus overcome the limitations of convolutional
inductive biases. However, directly applying the transformer structure to
remove noise is challenging because its complexity grows quadratically with the
spatial resolution. In this paper, we propose an efficient Dual-branch
Deformable Transformer (DDT) denoising network that captures both local and
global interactions in parallel. We divide features with a fixed patch size and
a fixed number of patches in local and global branches, respectively. In
addition, we apply the deformable attention operation in both branches, which helps
the network focus on more important regions and further reduces computational
complexity. We conduct extensive experiments on real-world and synthetic
denoising tasks, and the proposed DDT achieves state-of-the-art performance
at a significantly lower computational cost.
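To make the dual-branch split concrete, the following is a minimal sketch of the two partitioning schemes the abstract describes; the tensor layout, window size, and patch-grid size are illustrative assumptions, not the authors' implementation.

```python
import torch

def local_partition(x: torch.Tensor, patch: int = 8) -> torch.Tensor:
    """Local branch: fixed patch SIZE. The number of windows grows with
    resolution, so attention inside each window stays cheap."""
    b, c, h, w = x.shape
    x = x.view(b, c, h // patch, patch, w // patch, patch)
    # -> (b * num_windows, patch * patch, c): one attention group per window
    return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, patch * patch, c)

def global_partition(x: torch.Tensor, n: int = 8) -> torch.Tensor:
    """Global branch: fixed NUMBER of patches. Each of the n*n patches
    covers (h/n) x (w/n) pixels, so the patch count is constant in h, w."""
    b, c, h, w = x.shape
    ph, pw = h // n, w // n
    x = x.view(b, c, n, ph, n, pw)
    # -> (b * n * n, ph * pw, c): one group per patch, patch area scales
    return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ph * pw, c)

x = torch.randn(1, 64, 128, 128)                 # (batch, channels, h, w)
print(local_partition(x).shape)   # (256, 64, 64): more windows at higher res
print(global_partition(x).shape)  # (64, 256, 64): same 64 patches, bigger each
```

Deformable attention would then be applied within each branch, with learned offsets steering sampling toward informative regions; that part is omitted here.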
Related papers
- Rotation Equivariant Arbitrary-scale Image Super-Resolution [62.41329042683779]
Arbitrary-scale image super-resolution (ASISR) aims to achieve high-resolution recoveries at arbitrary scales from a low-resolution input image. In this study, we construct a rotation equivariant ASISR method.
arXiv Detail & Related papers (2025-08-07T08:51:03Z) - Attention-Only Transformers via Unrolled Subspace Denoising [19.832264029213515]
We derive a fully interpretable transformer architecture with only the necessary components. By unrolling such iterative denoising operations into a deep network, we arrive at a highly compact architecture. Despite its simplicity, experiments on vision and language tasks demonstrate that such a transformer achieves performance close to that of standard transformer architectures.
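As a rough illustration of what one unrolled denoising step can look like, the sketch below softly projects tokens toward a set of learned low-dimensional subspaces; the shapes, the step size, and the softmax weighting are assumptions for illustration, not the paper's exact operator.

```python
import torch

def subspace_denoise_step(x, U, eta=0.5):
    """x: (tokens, dim); U: (k, dim, s) bases of k learned subspaces."""
    coords = torch.einsum('td,kds->kts', x, U)          # subspace coordinates
    w = torch.softmax((coords ** 2).sum(-1), dim=0)     # (k, tokens) weights
    proj = torch.einsum('kts,kds->ktd', coords, U)      # projections, (k, t, d)
    # One denoising step: move each token toward its weighted projection.
    return x + eta * ((w.unsqueeze(-1) * proj).sum(0) - x)

x = torch.randn(16, 64)
U = torch.randn(4, 64, 8) / 8 ** 0.5
print(subspace_denoise_step(x, U).shape)  # torch.Size([16, 64])
```

Stacking such steps as layers yields the attention-only, unrolled network the summary describes.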
arXiv Detail & Related papers (2025-06-04T09:53:14Z) - A TRPCA-Inspired Deep Unfolding Network for Hyperspectral Image Denoising via Thresholded t-SVD and Top-K Sparse Transformer [20.17660504535571]
We propose a novel deep unfolding network (DU-TRPCA) that enforces stage-wise alternation between two tightly integrated modules: low-rank and sparse. Experiments on synthetic and real-world HSIs demonstrate that DU-TRPCA surpasses state-of-the-art methods under severe mixed noise.
arXiv Detail & Related papers (2025-06-03T02:01:39Z) - PiT: Progressive Diffusion Transformer [50.46345527963736]
We propose a series of Pseudo Progressive Diffusion Transformers (PiT). Our proposed PiT-L achieves a 54% FID improvement over DiT-XL/2 while using less computation.
arXiv Detail & Related papers (2025-05-19T15:02:33Z) - RSRWKV: A Linear-Complexity 2D Attention Mechanism for Efficient Remote Sensing Vision Task [20.16344973940904]
High-resolution remote sensing analysis faces challenges due to scene complexity and scale diversity.
We propose RSRWKV, featuring a novel 2D-WKV scanning mechanism that bridges sequential processing and 2D spatial reasoning.
arXiv Detail & Related papers (2025-03-26T10:03:46Z) - FUSE: Label-Free Image-Event Joint Monocular Depth Estimation via Frequency-Decoupled Alignment and Degradation-Robust Fusion [63.87313550399871]
Image-event joint depth estimation methods leverage complementary modalities for robust perception, yet face challenges in generalizability.
We propose Self-supervised Transfer (PST) and a Frequency-Decoupled Fusion module (FreDF).
PST establishes cross-modal knowledge transfer through latent space alignment with image foundation models.
FreDF explicitly decouples high-frequency edge features from low-frequency structural components, resolving modality-specific frequency mismatches.
arXiv Detail & Related papers (2025-03-25T15:04:53Z) - Task-Aware Dynamic Transformer for Efficient Arbitrary-Scale Image Super-Resolution [8.78015409192613]
Arbitrary-scale super-resolution (ASSR) aims to learn a single model for image super-resolution at arbitrary magnification scales.
Existing ASSR networks typically comprise an off-the-shelf scale-agnostic feature extractor and an arbitrary scale upsampler.
We propose a Task-Aware Dynamic Transformer (TADT) as an input-adaptive feature extractor for efficient image ASSR.
arXiv Detail & Related papers (2024-08-16T13:35:52Z) - Efficient Diffusion Transformer with Step-wise Dynamic Attention Mediators [83.48423407316713]
We present a novel diffusion transformer framework incorporating an additional set of mediator tokens to engage with queries and keys separately.
Our model initiates the denoising process with a precise, non-ambiguous stage and gradually transitions to a phase enriched with detail.
Our method achieves a state-of-the-art FID score of 2.01 when integrated with the recent work SiT.
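A plausible reading of the mediator-token mechanism is the two-stage attention sketched below: a small set of M mediators first summarizes the keys and values, and the queries then attend only to those summaries, dropping the cost from O(N^2) to O(N*M). The single-head, unbatched form and all sizes are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def mediator_attention(q, k, v, m):
    """q, k, v: (n, d) sequence; m: (num_mediators, d) learned mediators."""
    d = q.shape[-1]
    # Stage 1: mediators summarize all keys/values (M x N attention).
    summary = F.softmax(m @ k.T / d ** 0.5, dim=-1) @ v      # (M, d)
    # Stage 2: each query reads only the M summaries (N x M attention).
    return F.softmax(q @ m.T / d ** 0.5, dim=-1) @ summary   # (n, d)

n, num_med, d = 1024, 16, 64
q = k = v = torch.randn(n, d)
m = torch.randn(num_med, d)
print(mediator_attention(q, k, v, m).shape)  # torch.Size([1024, 64])
```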
arXiv Detail & Related papers (2024-08-11T07:01:39Z) - Efficient Multi-disparity Transformer for Light Field Image Super-resolution [6.814658355110824]
This paper presents the Multi-scale Disparity Transformer (MDT), a novel Transformer tailored for light field image super-resolution (LFSR).
MDT addresses the issues of computational redundancy and disparity entanglement caused by the indiscriminate processing of sub-aperture images.
Building on this architecture, we present LF-MDTNet, an efficient LFSR network.
arXiv Detail & Related papers (2024-07-22T02:23:09Z) - Adaptive Step-size Perception Unfolding Network with Non-local Hybrid Attention for Hyperspectral Image Reconstruction [0.39134031118910273]
We propose an adaptive step-size perception unfolding network (ASPUN), a deep unfolding network based on the FISTA algorithm.
In addition, we design a Non-local Hybrid Attention Transformer (NHAT) module to fully leverage the receptive field advantage of the transformer.
Experimental results show that our ASPUN is superior to the existing SOTA algorithms and achieves the best performance.
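For context, the general recipe behind FISTA-based unfolding is sketched below: each network stage performs one FISTA iteration with a learnable step size and a learned proximal operator. The sensing matrix Phi, the identity stand-in for the prox, and all shapes are illustrative assumptions, not ASPUN's actual modules.

```python
import torch
import torch.nn as nn

class FISTAStage(nn.Module):
    """One unfolded FISTA iteration with a learnable step size."""
    def __init__(self, prox: nn.Module):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(1.0))  # learned step size
        self.prox = prox  # learned proximal operator (e.g., a small CNN)

    def forward(self, z, x_prev, y, Phi, t):
        # Gradient step on the data term 0.5 * ||y - Phi z||^2 ...
        grad = Phi.T @ (Phi @ z - y)
        x = self.prox(z - self.step * grad)   # ... then a learned prox step
        # FISTA momentum: extrapolate using the previous iterate.
        t_next = (1 + (1 + 4 * t ** 2) ** 0.5) / 2
        z_next = x + ((t - 1) / t_next) * (x - x_prev)
        return x, z_next, t_next

# Toy usage: 1D signal, random sensing matrix, identity stand-in for prox.
Phi = torch.randn(32, 128) / 32 ** 0.5
y = Phi @ torch.randn(128)
stage = FISTAStage(prox=nn.Identity())
x, z, t = torch.zeros(128), torch.zeros(128), 1.0
x, z, t = stage(z, x, y, Phi, t)
print(x.shape)  # torch.Size([128])
```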
arXiv Detail & Related papers (2024-07-04T16:09:52Z) - Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data-efficient and highly parallelizable operator learning approach with reduced memory requirements and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z) - Mutual Information-driven Triple Interaction Network for Efficient Image
Dehazing [54.168567276280505]
We propose a novel Mutual Information-driven Triple Interaction Network (MITNet) for image dehazing.
The first stage, amplitude-guided haze removal, recovers the amplitude spectrum of the hazy images to remove haze.
The second stage, phase-guided structure refinement, is devoted to learning the transformation and refinement of the phase spectrum.
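The amplitude/phase split that this two-stage design operates on is easy to reproduce; the snippet below shows the lossless decomposition via the 2D FFT (the shapes are illustrative, and the two learned stages themselves are not sketched).

```python
import torch

x = torch.randn(1, 3, 64, 64)              # stand-in for a hazy image
spec = torch.fft.fft2(x)                   # 2D spectrum per channel
amp, phase = spec.abs(), spec.angle()      # stage 1 edits amp, stage 2 phase
recon = torch.fft.ifft2(torch.polar(amp, phase)).real
print(torch.allclose(recon, x, atol=1e-4))  # True: the split is lossless
```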
arXiv Detail & Related papers (2023-08-14T08:23:58Z) - Short Range Correlation Transformer for Occluded Person
Re-Identification [4.339510167603376]
We propose a partial feature transformer-based person re-identification framework named PFT.
The proposed PFT utilizes three modules to enhance the efficiency of the vision transformer.
Experimental results over occluded and holistic re-identification datasets demonstrate that the proposed PFT network achieves superior performance consistently.
arXiv Detail & Related papers (2022-01-04T11:12:39Z) - CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information captured by CNNs with the global context provided by transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z) - Adaptive Fourier Neural Operators: Efficient Token Mixers for
Transformers [55.90468016961356]
We propose an efficient token mixer that learns to mix in the Fourier domain.
AFNO is based on a principled foundation of operator learning.
It can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms.
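The core idea, token mixing in the Fourier domain, can be sketched in a few lines: FFT along the token axis, a learned transform per frequency, and an inverse FFT, giving O(n log n) global mixing. The per-channel complex weights below are a drastic simplification of AFNO's block-diagonal, soft-thresholded channel MLP.

```python
import torch
import torch.nn as nn

class FourierTokenMixer(nn.Module):
    """Mix tokens globally in O(n log n) via the FFT along the token axis."""
    def __init__(self, dim: int):
        super().__init__()
        # One learned complex weight per channel, shared across frequencies;
        # frequency-domain multiplication is a circular convolution over tokens.
        self.w = nn.Parameter(torch.randn(dim, dtype=torch.cfloat) * 0.02)

    def forward(self, x):                     # x: (batch, tokens, dim)
        xf = torch.fft.rfft(x, dim=1)         # to the frequency domain
        xf = xf * self.w                      # learned global mixing
        return torch.fft.irfft(xf, n=x.shape[1], dim=1)

x = torch.randn(2, 65536, 64)   # long sequences (e.g., 65k tokens) stay cheap
print(FourierTokenMixer(64)(x).shape)  # torch.Size([2, 65536, 64])
```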
arXiv Detail & Related papers (2021-11-24T05:44:31Z) - Dispensed Transformer Network for Unsupervised Domain Adaptation [21.256375606219073]
A novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet) is introduced in this paper.
Our proposed network achieves the best performance in comparison with several state-of-the-art techniques.
arXiv Detail & Related papers (2021-10-28T08:27:44Z) - Improved Transformer for High-Resolution GANs [69.42469272015481]
We introduce two key ingredients to the Transformer to address this challenge.
We show in the experiments that the proposed HiT achieves state-of-the-art FID scores of 31.87 and 2.95 on unconditional ImageNet $128\times 128$ and FFHQ $256\times 256$, respectively.
arXiv Detail & Related papers (2021-06-14T17:39:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.