Multi-modal and frequency-weighted tensor nuclear norm for hyperspectral image denoising
- URL: http://arxiv.org/abs/2106.12489v1
- Date: Wed, 23 Jun 2021 16:01:08 GMT
- Title: Multi-modal and frequency-weighted tensor nuclear norm for hyperspectral image denoising
- Authors: Sheng Liu, Xiaozhen Xie, Wenfeng Kong, and Jifeng Ning
- Abstract summary: Low-rankness is important in hyperspectral image (HSI) denoising tasks.
We propose the multi-modal and frequency-weighted tensor nuclear norm (MFWTNN) and the non-convex MFWTNN for HSI denoising tasks.
- Score: 2.9993889271808807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low-rankness is important in hyperspectral image (HSI) denoising tasks. The tensor nuclear norm (TNN), defined via the tensor singular value decomposition, is a state-of-the-art way to describe the low-rankness of HSI. However, TNN ignores some of the physical meaning of HSI when tackling denoising tasks, leading to suboptimal denoising performance. In this paper, we
propose the multi-modal and frequency-weighted tensor nuclear norm (MFWTNN) and
the non-convex MFWTNN for HSI denoising tasks. Firstly, we investigate the
physical meaning of frequency components and reconsider their weights to
improve the low-rank representation ability of TNN. Meanwhile, we also consider the correlation between the two spatial dimensions and the spectral dimension of HSI, and combine these improvements with TNN to propose MFWTNN. Secondly, we use non-convex functions to approximate the rank function of the frequency tensor and propose NonMFWTNN, a tighter relaxation than MFWTNN. In addition, we adaptively assign larger weights to slices that mainly contain noise and smaller weights to slices that carry profile information. Finally, we develop an efficient alternating direction method of multipliers (ADMM) based algorithm to solve the proposed models, and their effectiveness is substantiated on simulated and real HSI datasets.
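As a rough illustration of the quantities involved, the NumPy sketch below evaluates a frequency-weighted TNN via the t-SVD construction (FFT along the spectral mode, then slice-wise nuclear norms) together with the slice-wise soft-thresholding step that an ADMM solver would call repeatedly. The function names, the uniform default weights, and the plain soft-thresholding rule are illustrative assumptions; the paper's adaptive weighting scheme and non-convex surrogate are not reproduced here.

```python
import numpy as np

def freq_weighted_tnn(X, weights=None):
    """Frequency-weighted tensor nuclear norm of a 3-way tensor X (n1 x n2 x n3).

    Transform X along the third (spectral) mode with the FFT, take the nuclear
    norm of every frontal slice in the frequency domain, and combine the
    slice-wise norms with per-frequency weights. With uniform weights this
    reduces (up to a 1/n3 factor) to the standard TNN.
    """
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)                       # frequency slices
    if weights is None:
        weights = np.ones(n3) / n3                   # placeholder: uniform weights
    return float(sum(weights[k] * np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
                     for k in range(n3)))

def prox_freq_weighted_tnn(Y, tau, weights):
    """Proximal step used inside an ADMM iteration: slice-wise singular value
    soft-thresholding in the frequency domain, where a larger threshold
    tau * weights[k] shrinks the k-th frequency slice more strongly."""
    n3 = Y.shape[2]
    Yf = np.fft.fft(Y, axis=2)
    Xf = np.empty_like(Yf)
    for k in range(n3):
        U, s, Vh = np.linalg.svd(Yf[:, :, k], full_matrices=False)
        s_shrunk = np.maximum(s - tau * weights[k], 0.0)
        Xf[:, :, k] = (U * s_shrunk) @ Vh
    return np.real(np.fft.ifft(Xf, axis=2))
```

With all weights equal, this collapses to the usual TNN proximal operator; the point of the paper is that unequal, physically motivated weights per frequency slice (and per mode) give a better low-rank model.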
Related papers
- Hybrid Convolutional and Attention Network for Hyperspectral Image Denoising [54.110544509099526]
Hyperspectral image (HSI) denoising is critical for the effective analysis and interpretation of hyperspectral data.
We propose a hybrid convolution and attention network (HCANet) to enhance HSI denoising.
Experimental results on mainstream HSI datasets demonstrate the rationality and effectiveness of the proposed HCANet.
arXiv Detail & Related papers (2024-03-15T07:18:43Z)
- Hyperspectral Image Fusion via Logarithmic Low-rank Tensor Ring Decomposition [26.76968345244154]
We study the low-rankness of TR factors from the TNN perspective and consider the mode-2 logarithmic TNN (LTNN) on each TR factor.
A novel fusion model is proposed by incorporating this LTNN regularization and the weighted total variation.
arXiv Detail & Related papers (2023-10-16T04:02:34Z)
- Hyperspectral Image Denoising via Self-Modulating Convolutional Neural Networks [15.700048595212051]
We introduce a self-modulating convolutional neural network which utilizes correlated spectral and spatial information.
At the core of the model lies a novel block, which allows the network to transform the features in an adaptive manner based on the adjacent spectral data.
Experimental analysis on both synthetic and real data shows that the proposed SM-CNN outperforms other state-of-the-art HSI denoising methods.
arXiv Detail & Related papers (2023-09-15T06:57:43Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method demonstrates a 10% lower testing error, using 20% fewer frequency modes compared to the existing Fourier Neural Operator, while also achieving 30% faster training.
arXiv Detail & Related papers (2022-11-28T09:57:15Z)
- Towards Robust k-Nearest-Neighbor Machine Translation [72.9252395037097]
k-Nearest-Neighbor Machine Translation (kNN-MT) has become an important research direction in NMT in recent years.
Its main idea is to retrieve useful key-value pairs from an additional datastore to modify translations without updating the NMT model.
However, noisy retrieved pairs can dramatically deteriorate model performance.
We propose a confidence-enhanced kNN-MT model with robust training to alleviate the impact of noise.
arXiv Detail & Related papers (2022-10-17T07:43:39Z)
- MemSE: Fast MSE Prediction for Noisy Memristor-Based DNN Accelerators [5.553959304125023]
We theoretically analyze the mean squared error of DNNs that use memristors to compute matrix-vector multiplications (MVM).
We take into account both the quantization noise, due to the necessity of reducing the DNN model size, and the programming noise, stemming from the variability during the programming of the memristance value.
The proposed method is almost two orders of magnitude faster than Monte-Carlo simulation, thus making it possible to optimize the implementation parameters to achieve minimal error for a given power constraint.
arXiv Detail & Related papers (2022-05-03T18:10:43Z)
- Hyperspectral Image Restoration via Multi-mode and Double-weighted Tensor Nuclear Norm Minimization [2.4965977185977732]
The tensor nuclear norm (TNN) induced by the tensor singular value decomposition plays an important role in hyperspectral image (HSI) restoration tasks.
We propose a multi-mode and double-weighted TNN based on three crucial phenomena observed in HSIs.
It can adaptively shrink the frequency components and singular values according to their physical meanings in all modes of HSIs.
arXiv Detail & Related papers (2021-01-19T15:20:38Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
- A Fully Tensorized Recurrent Neural Network [48.50376453324581]
We introduce a "fully tensorized" RNN architecture which jointly encodes the separate weight matrices within each recurrent cell.
This approach reduces model size by several orders of magnitude, while still maintaining similar or better performance compared to standard RNNs.
arXiv Detail & Related papers (2020-10-08T18:24:12Z)
- Enhancement of a CNN-Based Denoiser Based on Spatial and Spectral Analysis [23.11994688706024]
We propose a discrete wavelet denoising CNN (WDnCNN) which restores images corrupted by various noise with a single model.
We further present a band normalization module (BNM) to normalize the coefficients from different parts of the frequency spectrum.
We evaluate the proposed WDnCNN, and compare it with other state-of-the-art denoisers.
arXiv Detail & Related papers (2020-06-28T05:25:50Z)
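To make the band-normalization idea in the last entry above concrete, here is a minimal NumPy sketch: a single-level Haar decomposition splits an image into frequency sub-bands, and each sub-band is rescaled to unit standard deviation so that coefficients from different parts of the spectrum are on a comparable scale. The helper names and the unit-variance rule are assumptions for illustration, not the BNM defined in that paper.

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar transform (image height and width must be even).
    Returns the low-pass band LL and the detail bands LH, HL, HH."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def normalize_bands(bands, eps=1e-8):
    """Scale every sub-band to unit standard deviation; return the scales so
    the normalization can be undone after denoising."""
    scales = [b.std() + eps for b in bands]
    return [b / s for b, s in zip(bands, scales)], scales
```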