MP-HSIR: A Multi-Prompt Framework for Universal Hyperspectral Image Restoration
- URL: http://arxiv.org/abs/2503.09131v1
- Date: Wed, 12 Mar 2025 07:40:49 GMT
- Title: MP-HSIR: A Multi-Prompt Framework for Universal Hyperspectral Image Restoration
- Authors: Zhehui Wu, Yong Chen, Naoto Yokoya, Wei He
- Abstract summary: Hyperspectral images (HSIs) often suffer from diverse and unknown degradations during imaging. Existing HSI restoration methods rely on specific degradation assumptions, limiting their effectiveness in complex scenarios. We propose MP-HSIR, a novel multi-prompt framework that effectively integrates spectral, textual, and visual prompts to achieve universal HSI restoration.
- Score: 15.501904258858112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral images (HSIs) often suffer from diverse and unknown degradations during imaging, leading to severe spectral and spatial distortions. Existing HSI restoration methods typically rely on specific degradation assumptions, limiting their effectiveness in complex scenarios. In this paper, we propose MP-HSIR, a novel multi-prompt framework that effectively integrates spectral, textual, and visual prompts to achieve universal HSI restoration across diverse degradation types and intensities. Specifically, we develop a prompt-guided spatial-spectral transformer, which incorporates spatial self-attention and a prompt-guided dual-branch spectral self-attention. Since degradations affect spectral features differently, we introduce spectral prompts in the local spectral branch to provide universal low-rank spectral patterns as prior knowledge for enhancing spectral reconstruction. Furthermore, the text-visual synergistic prompt fuses high-level semantic representations with fine-grained visual features to encode degradation information, thereby guiding the restoration process. Extensive experiments on 9 HSI restoration tasks, including all-in-one scenarios, generalization tests, and real-world cases, demonstrate that MP-HSIR not only consistently outperforms existing all-in-one methods but also surpasses state-of-the-art task-specific approaches across multiple tasks. The code and models will be released at https://github.com/ZhehuiWu/MP-HSIR.
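The abstract describes a dual-branch spectral self-attention in which a local branch is conditioned on learnable spectral prompts encoding low-rank spectral patterns, while a global branch attends over the full spectrum. The paper does not include implementation details here, so the following is a minimal NumPy sketch of one plausible realization: spectral bands are treated as attention tokens, and the local branch augments its keys and values with a low-rank spectral prompt. All function names, the rank of the prompt, and the fusion weight are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def spectral_attention(q, k, v):
    # self-attention across spectral bands: q is (bands, dim), k/v are (tokens, dim)
    scale = 1.0 / np.sqrt(q.shape[-1])
    attn = softmax(q @ k.T * scale, axis=-1)
    return attn @ v

def prompt_guided_dual_branch(x, spectral_prompt, w_fuse=0.5):
    """Hypothetical dual-branch spectral self-attention.

    x:               (bands, dim) spectral tokens of one spatial location
    spectral_prompt: (rank, dim)  learnable low-rank spectral prior (assumed form)
    """
    # global branch: plain self-attention over all bands
    global_out = spectral_attention(x, x, x)
    # local branch: keys/values augmented with the spectral prompt,
    # so attention can draw on the low-rank spectral prior
    kv = np.concatenate([x, spectral_prompt], axis=0)
    local_out = spectral_attention(x, kv, kv)
    # simple weighted fusion of the two branches (assumption)
    return w_fuse * global_out + (1.0 - w_fuse) * local_out

rng = np.random.default_rng(0)
x = rng.normal(size=(31, 16))        # 31 spectral bands, 16-dim tokens
prompt = rng.normal(size=(4, 16))    # rank-4 spectral prompt (illustrative)
out = prompt_guided_dual_branch(x, prompt)
print(out.shape)  # (31, 16)
```

The output keeps the per-band token shape, so the module can drop into a transformer block; the actual MP-HSIR architecture should be taken from the released code rather than this sketch.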
Related papers
- Mixed-granularity Implicit Representation for Continuous Hyperspectral Compressive Reconstruction [16.975538181162616]
This study introduces a novel method using implicit neural representation for continuous hyperspectral image reconstruction.
By leveraging implicit neural representations, the MGIR framework enables reconstruction at any desired spatial-spectral resolution.
arXiv Detail & Related papers (2025-03-17T03:37:42Z) - UniUIR: Considering Underwater Image Restoration as An All-in-One Learner [49.35128836844725]
We propose a Universal Underwater Image Restoration method, termed UniUIR. To decouple degradation-specific issues and explore the inter-correlations among various degradations in the UIR task, we designed the Mamba Mixture-of-Experts module. This module extracts degradation prior information in both spatial and frequency domains, and adaptively selects the most appropriate task-specific prompts.
arXiv Detail & Related papers (2025-01-22T16:10:42Z) - Unleashing Correlation and Continuity for Hyperspectral Reconstruction from RGB Images [64.80875911446937]
We propose a Correlation and Continuity Network (CCNet) for HSI reconstruction from RGB images. For the correlation of the local spectrum, we introduce the Group-wise Spectral Correlation Modeling (GrSCM) module. For the continuity of the global spectrum, we design the Neighborhood-wise Spectral Continuity Modeling (NeSCM) module.
arXiv Detail & Related papers (2025-01-02T15:14:40Z) - PromptHSI: Universal Hyperspectral Image Restoration with Vision-Language Modulated Frequency Adaptation [28.105125164852367]
We propose PromptHSI, the first universal all-in-one (AiO) HSI restoration framework. Our approach decomposes text prompts into intensity and bias controllers that effectively guide the restoration process. Our architecture excels at both fine-grained recovery and global information restoration across diverse degradation scenarios.
arXiv Detail & Related papers (2024-11-24T17:08:58Z) - TOP-ReID: Multi-spectral Object Re-Identification with Token Permutation [64.65950381870742]
We propose a cyclic token permutation framework for multi-spectral object ReID, dubbed TOP-ReID.
We also propose a Token Permutation Module (TPM) for cyclic multi-spectral feature aggregation.
Our proposed framework can generate more discriminative multi-spectral features for robust object ReID.
arXiv Detail & Related papers (2023-12-15T08:54:15Z) - Gated Multi-Resolution Transfer Network for Burst Restoration and Enhancement [75.25451566988565]
We propose a novel Gated Multi-Resolution Transfer Network (GMTNet) to reconstruct a spatially precise high-quality image from a burst of low-quality raw images.
Detailed experimental analysis on five datasets validates our approach and sets a state-of-the-art for burst super-resolution, burst denoising, and low-light burst enhancement.
arXiv Detail & Related papers (2023-04-13T17:54:00Z) - MST++: Multi-stage Spectral-wise Transformer for Efficient Spectral Reconstruction [148.26195175240923]
We propose a novel Transformer-based method, Multi-stage Spectral-wise Transformer (MST++) for efficient spectral reconstruction.
In the NTIRE 2022 Spectral Reconstruction Challenge, our approach won first place.
arXiv Detail & Related papers (2022-04-17T02:39:32Z) - Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction [127.20208645280438]
Hyperspectral image (HSI) reconstruction aims to recover the 3D spatial-spectral signal from a 2D measurement.
Modeling the inter-spectra interactions is beneficial for HSI reconstruction.
Mask-guided Spectral-wise Transformer (MST) proposes a novel framework for HSI reconstruction.
arXiv Detail & Related papers (2021-11-15T16:59:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.