Low-rank Meets Sparseness: An Integrated Spatial-Spectral Total
Variation Approach to Hyperspectral Denoising
- URL: http://arxiv.org/abs/2204.12879v1
- Date: Wed, 27 Apr 2022 12:31:55 GMT
- Title: Low-rank Meets Sparseness: An Integrated Spatial-Spectral Total
Variation Approach to Hyperspectral Denoising
- Authors: Haijin Zeng, Shaoguang Huang, Yongyong Chen, Hiep Luong, and Wilfried
Philips
- Abstract summary: We propose a novel TV regularization to simultaneously characterize the sparsity and low-rank priors of the gradient map (LRSTV).
The new regularization not only imposes sparsity on the gradient map itself, but also penalizes the rank of the gradient map after a Fourier transform.
It naturally encodes the sparsity and low-rank priors of the gradient map, and thus is expected to reflect the inherent structure of the original image more faithfully.
- Score: 11.79762223888294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spatial-Spectral Total Variation (SSTV) can quantify local smoothness of
image structures, so it is widely used in hyperspectral image (HSI) processing
tasks. Essentially, SSTV assumes a sparse structure of gradient maps calculated
along the spatial and spectral directions. In fact, these gradient tensors are
not only sparse, but also (approximately) low-rank under FFT, which we have
verified by numerical tests and theoretical analysis. Based on this fact, we
propose a novel TV regularization to simultaneously characterize the sparsity
and low-rank priors of the gradient map (LRSTV). The new regularization not
only imposes sparsity on the gradient map itself, but also penalizes the rank of
the gradient map after a Fourier transform along the spectral dimension. It
naturally encodes the sparsity and low-rank priors of the gradient map, and thus
is expected to reflect the inherent structure of the original image more
faithfully. Further, we use LRSTV to replace conventional SSTV and embed it in
the HSI processing model to improve its performance. Experimental results on
multiple public datasets with heavy mixed noise show that the proposed model
achieves a 1.5 dB improvement in PSNR.
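As a rough illustration of the two priors described in the abstract (not the paper's exact formulation), the sketch below computes an SSTV-style sparsity term on the spatial-spectral gradient maps and a nuclear-norm term on their frontal slices after an FFT along the spectral dimension; the function name, weights, and boundary handling are assumptions made for this example.

```python
import numpy as np

def lrstv_penalty(X, lam_sparse=1.0, lam_rank=1.0):
    """Illustrative LRSTV-style penalty for an HSI cube X of shape (H, W, B).

    Combines (i) an L1 norm on the spatial-spectral gradient maps (the usual
    SSTV sparsity prior) with (ii) a nuclear-norm surrogate for the rank of
    the gradient maps after an FFT along the spectral dimension. The exact
    definition in the paper may differ.
    """
    # Spatial-spectral gradients: spectral difference first, then spatial
    # differences of that cube (as in SSTV); boundaries are replicated.
    Dz = np.diff(X, axis=2, append=X[:, :, -1:])      # spectral difference
    Dx = np.diff(Dz, axis=0, append=Dz[-1:, :, :])    # vertical difference
    Dy = np.diff(Dz, axis=1, append=Dz[:, -1:, :])    # horizontal difference

    # (i) Sparsity prior: L1 norm of the gradient maps themselves.
    sparse_term = np.abs(Dx).sum() + np.abs(Dy).sum()

    # (ii) Low-rank prior: FFT along the spectral dimension, then sum the
    # nuclear norms of the frontal slices (a tensor-nuclear-norm construction).
    rank_term = 0.0
    for G in (Dx, Dy):
        G_hat = np.fft.fft(G, axis=2)
        for b in range(G_hat.shape[2]):
            rank_term += np.linalg.norm(G_hat[:, :, b], ord='nuc')

    return lam_sparse * sparse_term + lam_rank * rank_term
```

The nuclear-norm-after-spectral-FFT term is one common way to penalize tensor rank; treating it as the rank penalty mentioned in the abstract is an assumption of this sketch.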
Related papers
- Normalization Layer Per-Example Gradients are Sufficient to Predict Gradient Noise Scale in Transformers [2.1415873597974286]
Per-example gradient norms are a vital ingredient for estimating gradient noise scale (GNS) with minimal variance.
We propose a method that computes these norms simultaneously with the parameter gradients, adding minimal FLOPs in 3D-or-higher tensor regimes.
We find that the total GNS of contemporary transformer models is predicted well by the GNS of only the normalization layers.
arXiv Detail & Related papers (2024-11-01T19:50:00Z)
- CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs [65.80187860906115]
We propose a novel approach to improve NeRF's performance with sparse inputs.
We first adopt a voxel-based ray sampling strategy to ensure that the sampled rays intersect with a certain voxel in 3D space.
We then randomly sample additional points within the voxel and apply a Transformer to infer the properties of other points on each ray, which are then incorporated into the volume rendering.
arXiv Detail & Related papers (2024-03-25T15:56:17Z) - Neural Gradient Regularizer [150.85797800807524]
We propose a neural gradient regularizer (NGR) that expresses the gradient map as the output of a neural network.
NGR is applicable to various image types and different image processing tasks, functioning in a zero-shot learning fashion.
arXiv Detail & Related papers (2023-08-31T10:19:23Z) - Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z) - Graph Spatio-Spectral Total Variation Model for Hyperspectral Image
Denoising [16.562236225580513]
We propose a new TV-type regularization called Graph-SSTV (GSSTV) for mixed noise removal.
GSSTV generates a graph explicitly reflecting the spatial structure of the target HSI from noisy HSIs and incorporates a weighted spatial difference operator based on this graph.
We demonstrate the effectiveness of GSSTV compared with existing HSI regularization models through experiments on mixed noise removal (a simplified weighted-difference sketch is given after this list).
arXiv Detail & Related papers (2022-07-22T12:46:21Z) - Communication-Efficient Federated Learning via Quantized Compressed
Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves almost identical performance with the case that performs no compression.
arXiv Detail & Related papers (2021-11-30T02:13:54Z) - Channel-Directed Gradients for Optimization of Convolutional Neural
Networks [50.34913837546743]
We introduce optimization methods for convolutional neural networks that can be used to improve existing gradient-based optimization in terms of generalization error.
We show that defining the gradients along the output channel direction leads to a performance boost, while other directions can be detrimental.
arXiv Detail & Related papers (2020-08-25T00:44:09Z) - Hyperspectral Image Restoration via Global Total Variation Regularized
Local nonconvex Low-Rank matrix Approximation [1.3406858660972554]
Several bandwise total variation (TV) regularized low-rank (LR)-based models have been proposed to remove mixed noise in hyperspectral images (HSIs).
arXiv Detail & Related papers (2020-05-08T16:42:18Z) - Understanding Integrated Gradients with SmoothTaylor for Deep Neural
Network Attribution [70.78655569298923]
Integrated Gradients as an attribution method for deep neural network models is simple to implement.
However, it suffers from noisy explanations, which hampers interpretability.
The SmoothGrad technique is proposed to address this noisiness and smooth the attribution maps of any gradient-based attribution method.
arXiv Detail & Related papers (2020-04-22T10:43:19Z)
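For the SmoothTaylor entry directly above, here is a minimal sketch of the SmoothGrad averaging it builds on, assuming a PyTorch classifier that maps an input batch to class logits; the function name and default parameters are illustrative, not taken from that paper.

```python
import torch

def smoothgrad(model, x, target, n_samples=25, sigma=0.15):
    """Minimal SmoothGrad: average input gradients over noisy copies of x.

    model: callable mapping a batch (1, C, H, W) to class logits.
    target: index of the class to explain.
    sigma: noise standard deviation relative to the input range.
    """
    noise_std = sigma * (x.max() - x.min())
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and track gradients w.r.t. it.
        noisy = (x.detach() + noise_std * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target]
        grad, = torch.autograd.grad(score, noisy)
        grads += grad
    return grads / n_samples  # smoothed attribution map
```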
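Returning to the Graph Spatio-Spectral Total Variation (GSSTV) entry above, the sketch below is a simplified analogue of a graph-weighted spatial difference penalty, assuming 4-neighbour edges weighted by spectral similarity of the noisy cube; the actual graph construction and difference operator in that paper are more elaborate.

```python
import numpy as np

def weighted_spatial_differences(X, beta=10.0):
    """Rough illustration of a graph-weighted spatial difference penalty.

    X: noisy HSI cube of shape (H, W, B). Edge weights between 4-neighbour
    pixels are derived from their spectral similarity, so differences across
    likely image edges are penalized less. This is only a simplified analogue
    of the GSSTV operator, not its actual definition.
    """
    # Differences between each pixel and its lower / right neighbour.
    dx = X[1:, :, :] - X[:-1, :, :]
    dy = X[:, 1:, :] - X[:, :-1, :]

    # Graph weights from mean squared spectral distance between neighbours.
    wx = np.exp(-beta * np.mean(dx ** 2, axis=2))   # shape (H-1, W)
    wy = np.exp(-beta * np.mean(dy ** 2, axis=2))   # shape (H, W-1)

    # Weighted anisotropic TV-style penalty.
    return (wx[..., None] * np.abs(dx)).sum() + (wy[..., None] * np.abs(dy)).sum()
```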