Pixel Adaptive Deep Unfolding Transformer for Hyperspectral Image
Reconstruction
- URL: http://arxiv.org/abs/2308.10820v2
- Date: Fri, 22 Sep 2023 13:20:22 GMT
- Title: Pixel Adaptive Deep Unfolding Transformer for Hyperspectral Image
Reconstruction
- Authors: Miaoyu Li, Ying Fu, Ji Liu, Yulun Zhang
- Abstract summary: We propose a Pixel Adaptive Deep Unfolding Transformer (PADUT) for HSI reconstruction.
In the data module, a pixel-adaptive descent step is employed to focus on pixel-level degradation.
In the prior module, we introduce the Non-local Spectral Transformer (NST) to emphasize the 3D characteristics of HSI for recovery.
- Score: 58.32266851510948
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral Image (HSI) reconstruction has made gratifying progress with
the deep unfolding framework by formulating the problem into a data module and
a prior module. Nevertheless, existing methods still do not match HSI data
sufficiently well. The mismatch lies in three aspects: 1) a fixed gradient
descent step in the data module, although the degradation of HSI is agnostic
at the pixel level; 2) a prior module that is inadequate for the 3D HSI cube;
3) stage interaction that ignores the differences between features at
different stages. To address these issues, in this work, we propose a Pixel
Adaptive Deep Unfolding Transformer (PADUT) for HSI reconstruction. In the
data module, a pixel-adaptive descent step is employed to handle pixel-level
agnostic degradation. In the prior module, we introduce the Non-local Spectral
Transformer (NST) to emphasize the 3D characteristics of HSI for recovery.
Moreover, inspired by the diverse expression of features at different stages
and depths, the stage interaction is improved by the Fast Fourier Transform
(FFT). Experimental
results on both simulated and real scenes exhibit the superior performance of
our method compared to state-of-the-art HSI reconstruction methods. The code is
released at: https://github.com/MyuLi/PADUT.
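To make the unfolded pipeline concrete, the sketch below shows the general deep-unfolding pattern the abstract describes: each stage performs a data-module gradient step whose step size is predicted per pixel from the current estimate, followed by a learned prior module. This is a minimal illustration under stated assumptions, not the released PADUT code: the module names (PixelAdaptiveStep, PriorNet, UnfoldingStage), the toy coded-aperture forward operator, and the plain convolutional prior standing in for the Non-local Spectral Transformer are hypothetical, and the FFT-based stage interaction is omitted.

```python
import torch
import torch.nn as nn


class PixelAdaptiveStep(nn.Module):
    """Predicts a positive per-pixel descent step size from the current
    estimate (a simplified stand-in for a pixel-adaptive data module)."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),
        )

    def forward(self, x):
        return self.net(x)  # (B, 1, H, W) step-size map


class PriorNet(nn.Module):
    """Placeholder prior module; PADUT uses a Non-local Spectral
    Transformer (NST) in this role."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement


class UnfoldingStage(nn.Module):
    """One unfolding stage: data module (gradient step) then prior module."""
    def __init__(self, channels):
        super().__init__()
        self.step = PixelAdaptiveStep(channels)
        self.prior = PriorNet(channels)

    def forward(self, x, y, A, At):
        alpha = self.step(x)                 # pixel-wise step size
        x = x - alpha * At(A(x) - y)         # data-consistency gradient step
        return self.prior(x)                 # learned regularization


if __name__ == "__main__":
    # Toy degradation: band-wise masking followed by summation over bands,
    # loosely mimicking a coded-aperture measurement (an assumption for demo).
    B, C, H, W = 1, 28, 64, 64
    mask = (torch.rand(1, C, H, W) > 0.5).float()
    A = lambda x: (mask * x).sum(dim=1, keepdim=True)   # forward operator
    At = lambda r: mask * r.expand(-1, C, -1, -1)       # its adjoint
    y = A(torch.rand(B, C, H, W))                       # simulated measurement
    x = At(y)                                           # initial estimate
    for stage in nn.ModuleList([UnfoldingStage(C) for _ in range(3)]):
        x = stage(x, y, A, At)
    print(x.shape)  # torch.Size([1, 28, 64, 64])
```

In the paper itself, inter-stage features are additionally fused in the Fourier domain (FFT); the sketch keeps only the data-module/prior-module alternation that defines deep unfolding.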
Related papers
- Look-Around Before You Leap: High-Frequency Injected Transformer for Image Restoration [46.96362010335177]
In this paper, we propose HIT, a simple yet effective High-frequency Injected Transformer for image restoration.
Specifically, we design a window-wise injection module (WIM), which incorporates abundant high-frequency details into the feature map, to provide reliable references for restoring high-quality images.
In addition, we introduce a spatial enhancement unit (SEU) to preserve essential spatial relationships that may be lost due to the computations carried out across channel dimensions in the BIM.
arXiv Detail & Related papers (2024-03-30T08:05:00Z)
- SD-MVS: Segmentation-Driven Deformation Multi-View Stereo with Spherical Refinement and EM optimization [6.886220026399106]
We introduce Segmentation-Driven Deformation Multi-View Stereo (SD-MVS) to tackle challenges in 3D reconstruction of textureless areas.
We are the first to adopt the Segment Anything Model (SAM) to distinguish semantic instances in scenes.
We propose a unique refinement strategy that combines spherical coordinates and gradient descent on normals with a pixelwise search interval on depths.
arXiv Detail & Related papers (2024-01-12T05:25:57Z)
- Tuning-free Plug-and-Play Hyperspectral Image Deconvolution with Deep Priors [6.0622962428871885]
We introduce a tuning-free Plug-and-Play (PnP) algorithm for HSI deconvolution.
Specifically, we use the alternating direction method of multipliers (ADMM) to decompose the problem into two iterative sub-problems.
A flexible blind 3D denoising network (B3DDN) is designed to learn deep priors and to solve the denoising sub-problem with different noise levels.
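As a rough illustration of the ADMM decomposition described in this entry, the sketch below alternates a closed-form Fourier-domain solution of the deconvolution sub-problem with a plugged-in denoiser. It is a generic PnP-ADMM loop under simplifying assumptions, not the authors' tuning-free algorithm: the function names (pad_kernel, pnp_admm_deconv), the circular-convolution model, the fixed penalty rho, and the Gaussian filter standing in for the blind 3D denoising network (B3DDN) are all illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def pad_kernel(kernel, shape):
    """Zero-pad a small blur kernel to the full image size, centered."""
    out = np.zeros(shape)
    kh, kw = kernel.shape
    top, left = (shape[0] - kh) // 2, (shape[1] - kw) // 2
    out[top:top + kh, left:left + kw] = kernel
    return out


def pnp_admm_deconv(y, kernel, denoiser, rho=0.5, iters=30):
    """Generic Plug-and-Play ADMM for per-band deconvolution of an HSI cube.

    y        : observed cube, shape (bands, H, W), blurred by `kernel`
    kernel   : 2D blur kernel (circular convolution assumed)
    denoiser : callable z = denoiser(v) acting on a (bands, H, W) cube
    """
    _, H, W = y.shape
    K = np.fft.fft2(np.fft.ifftshift(pad_kernel(kernel, (H, W))))
    denom = np.abs(K) ** 2 + rho
    Y = np.fft.fft2(y, axes=(-2, -1))

    x, z, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        # Data sub-problem: quadratic, solved in closed form per band via FFT.
        rhs = np.conj(K) * Y + rho * np.fft.fft2(z - u, axes=(-2, -1))
        x = np.real(np.fft.ifft2(rhs / denom, axes=(-2, -1)))
        # Denoising sub-problem: handled entirely by the plugged-in denoiser.
        z = denoiser(x + u)
        # Dual update for the splitting constraint x = z.
        u = u + x - z
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.random((8, 64, 64))              # toy 8-band cube
    kernel = np.ones((5, 5)) / 25.0               # box blur
    K = np.fft.fft2(np.fft.ifftshift(pad_kernel(kernel, (64, 64))))
    y = np.real(np.fft.ifft2(K * np.fft.fft2(x_true, axes=(-2, -1)), axes=(-2, -1)))
    # A Gaussian filter stands in for the learned blind 3D denoiser (B3DDN).
    x_hat = pnp_admm_deconv(y, kernel, lambda v: gaussian_filter(v, sigma=(0, 1, 1)))
    print(float(np.mean((x_hat - x_true) ** 2)))
```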
arXiv Detail & Related papers (2022-11-28T13:41:14Z)
- Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging [142.11622043078867]
We propose a principled Degradation-Aware Unfolding Framework (DAUF) that estimates parameters from the compressed image and physical mask, and then uses these parameters to control each iteration.
By plugging the Half-Shuffle Transformer (HST) into DAUF, we establish the first Transformer-based deep unfolding method, the Degradation-Aware Unfolding Half-Shuffle Transformer (DAUHST), for HSI reconstruction.
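The idea of estimating parameters from the compressed image and the physical mask and then using them to control each iteration can be sketched as follows. This is a hypothetical, minimal illustration assuming single-channel inputs, not the DAUHST implementation: the name DegradationParamNet, the architecture, and the trivial data term in the loop are all assumptions.

```python
import torch
import torch.nn as nn


class DegradationParamNet(nn.Module):
    """Regresses one step-size parameter per unfolding stage from the
    compressed measurement and the physical mask (illustrative only)."""
    def __init__(self, num_stages):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, num_stages)

    def forward(self, y, mask):
        f = self.features(torch.cat([y, mask], dim=1))
        return torch.sigmoid(self.head(f))  # (B, num_stages), values in (0, 1)


if __name__ == "__main__":
    B, H, W, num_stages = 2, 64, 64, 5
    y = torch.rand(B, 1, H, W)      # compressed measurement
    mask = torch.rand(B, 1, H, W)   # physical (coded-aperture) mask
    alphas = DegradationParamNet(num_stages)(y, mask)
    x = torch.rand(B, 1, H, W)      # current estimate (toy)
    for k in range(num_stages):
        step = alphas[:, k].view(B, 1, 1, 1)
        # Each unfolding iteration is modulated by its estimated parameter;
        # (x - y) is a placeholder for the real data-term gradient.
        x = x - step * (x - y)
    print(alphas.shape, x.shape)
```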
arXiv Detail & Related papers (2022-05-20T11:37:44Z)
- φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
- Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, the coarse-to-fine sparse Transformer (CST), which embeds HSI sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selecting. Then the selected patches are fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capturing.
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
- Learning A 3D-CNN and Transformer Prior for Hyperspectral Image Super-Resolution [80.93870349019332]
We propose a novel HSISR method that uses a Transformer instead of a CNN to learn the prior of HSIs.
Specifically, we first use the gradient algorithm to solve the HSISR model, and then use an unfolding network to simulate the iterative solution process.
arXiv Detail & Related papers (2021-11-27T15:38:57Z)
- Hyperspectral Pansharpening Based on Improved Deep Image Prior and Residual Reconstruction [64.10636296274168]
Hyperspectral pansharpening aims to synthesize a low-resolution hyperspectral image (LR-HSI) with a registered panchromatic image (PAN) to generate an enhanced HSI with high spectral and spatial resolution.
Recently proposed HS pansharpening methods have obtained remarkable results using deep convolutional networks (ConvNets).
We propose a novel over-complete network, called HyperKite, which focuses on learning high-level features by constraining the receptive field from increasing in the deep layers.
arXiv Detail & Related papers (2021-07-06T14:11:03Z)