Hyperspectral Denoising Using Unsupervised Disentangled Spatio-Spectral
Deep Priors
- URL: http://arxiv.org/abs/2102.12310v1
- Date: Wed, 24 Feb 2021 14:38:51 GMT
- Title: Hyperspectral Denoising Using Unsupervised Disentangled Spatio-Spectral
Deep Priors
- Authors: Yu-Chun Miao, Xi-Le Zhao, Xiao Fu, Jian-Li Wang, and Yu-Bang Zheng
- Abstract summary: In recent years, data-driven neural network priors have shown promising performance for RGB natural image denoising.
Data-driven priors are hard to acquire for hyperspectral images due to the lack of training data.
This work puts forth an unsupervised DIP framework that is based on the classic spatio-spectral decomposition of HSIs.
- Score: 10.65207459525818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image denoising is often empowered by accurate prior information. In recent
years, data-driven neural network priors have shown promising performance for
RGB natural image denoising. Compared to classic handcrafted priors (e.g.,
sparsity and total variation), the "deep priors" are learned using a large
number of training samples -- which can accurately model the complex image
generating process. However, data-driven priors are hard to acquire for
hyperspectral images (HSIs) due to the lack of training data. A remedy is to
use the so-called unsupervised deep image prior (DIP). Under the unsupervised
DIP framework, it is hypothesized and empirically demonstrated that proper
neural network structures are reasonable priors of certain types of images, and
the network weights can be learned without training data. Nonetheless, the most
effective unsupervised DIP structures were proposed for natural images instead
of HSIs. The performance of unsupervised DIP-based HSI denoising is limited by
a couple of serious challenges, namely, network structure design and network
complexity. This work puts forth an unsupervised DIP framework that is based on
the classic spatio-spectral decomposition of HSIs. Utilizing the so-called
linear mixture model of HSIs, two types of unsupervised DIPs, i.e., U-Net-like
network and fully-connected networks, are employed to model the abundance maps
and endmembers contained in the HSIs, respectively. This way, empirically
validated unsupervised DIP structures for natural images can be easily
incorporated for HSI denoising. Besides, the decomposition also substantially
reduces network complexity. An efficient alternating optimization algorithm is
proposed to handle the formulated denoising problem. Semi-real and real data
experiments are employed to showcase the effectiveness of the proposed
approach.
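The spatio-spectral decomposition underlying this framework can be illustrated without any networks: under the linear mixture model, an HSI factors into abundance maps times endmember spectra, and the alternating optimization reduces, in its simplest form, to alternating least squares. Below is a minimal numpy sketch of that simplified setting, with plain matrices standing in for the paper's U-Net-like and fully-connected DIPs and purely synthetic data; it is not the paper's algorithm.

```python
import numpy as np

# Linear mixture model: a noisy HSI Y (pixels x bands) is approximated
# as A @ E, where A holds abundance maps (pixels x R) and E holds
# endmember spectra (R x bands), with R much smaller than bands.
rng = np.random.default_rng(0)
pixels, bands, R = 400, 60, 4

# Synthetic ground truth plus additive noise.
A_true = rng.random((pixels, R))
E_true = rng.random((R, bands))
Y = A_true @ E_true + 0.05 * rng.standard_normal((pixels, bands))

# Alternating optimization: fix one factor, solve least squares for the other.
A = rng.random((pixels, R))
E = rng.random((R, bands))
for _ in range(50):
    A = Y @ np.linalg.pinv(E)   # update abundances with endmembers fixed
    E = np.linalg.pinv(A) @ Y   # update endmembers with abundances fixed

residual = np.linalg.norm(Y - A @ E) / np.linalg.norm(Y)
print(f"relative residual: {residual:.4f}")
```

The low rank R is what "substantially reduces network complexity" in the paper: each factor is far smaller than the full data cube, and in the paper each factor is further parameterized by a small network rather than a free matrix.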
Related papers
- Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z)
- Degradation-Noise-Aware Deep Unfolding Transformer for Hyperspectral
Image Denoising [9.119226249676501]
Hyperspectral images (HSIs) are often quite noisy because of narrow band spectral filtering.
To reduce the noise in HSI data cubes, both model-driven and learning-based denoising algorithms have been proposed.
This paper proposes a Degradation-Noise-Aware Unfolding Network (DNA-Net) that addresses these issues.
arXiv Detail & Related papers (2023-05-06T13:28:20Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method by optimizing the sparse structure of a randomly initialized network at each iteration and tweaking unimportant weights with a small amount proportional to the magnitude scale on-the-fly.
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Enhancing convolutional neural network generalizability via low-rank weight approximation [6.763245393373041]
Sufficient denoising is often an important first step for image processing.
Deep neural networks (DNNs) have been widely used for image denoising.
We introduce a new self-supervised framework for image denoising based on the Tucker low-rank tensor approximation.
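The Tucker low-rank approximation this framework builds on can be computed classically via truncated higher-order SVD. The sketch below shows only that decomposition on a synthetic exactly-low-rank tensor, not the paper's self-supervised training; the helper names are illustrative.

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, U, mode):
    """Multiply tensor T by matrix U along the given mode."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def truncated_hosvd(T, ranks):
    """Tucker approximation via truncated higher-order SVD."""
    # Factor matrices: leading left singular vectors of each unfolding.
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    # Core tensor: project T onto each factor subspace.
    core = T
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)
    # Reconstruct the low-rank approximation.
    approx = core
    for mode, U in enumerate(factors):
        approx = mode_product(approx, U, mode)
    return approx

rng = np.random.default_rng(1)
# Build a tensor of shape (8, 9, 10) with exact multilinear rank (2, 2, 2).
T = rng.standard_normal((2, 2, 2))
for mode, n in enumerate((8, 9, 10)):
    T = mode_product(T, rng.standard_normal((n, 2)), mode)

err = np.linalg.norm(T - truncated_hosvd(T, (2, 2, 2))) / np.linalg.norm(T)
print(f"relative error: {err:.2e}")
```

On an exactly low-rank tensor the truncated HOSVD is exact up to floating-point error; on a noisy HSI cube, truncating to small Tucker ranks is what discards the noise.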
arXiv Detail & Related papers (2022-09-26T14:11:05Z)
- Unsupervised Denoising of Optical Coherence Tomography Images with
Dual_Merged CycleWGAN [3.3909577600092122]
We propose a new Cycle-Consistent Generative Adversarial Net called Dual-Merged Cycle-WGAN for retinal OCT image denoising.
Our model consists of two Cycle-GAN networks with an improved generator, discriminator, and Wasserstein loss to achieve good training stability and better performance.
arXiv Detail & Related papers (2022-05-02T07:38:19Z)
- Neighbor2Neighbor: Self-Supervised Denoising from Single Noisy Images [98.82804259905478]
We present Neighbor2Neighbor to train an effective image denoising model with only noisy images.
In detail, the input and target used to train the network are images sub-sampled from the same noisy image.
A denoising network is trained on sub-sampled training pairs generated in the first stage, with a proposed regularizer as additional loss for better performance.
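The sub-sampling step can be sketched directly: two different pixels are drawn from every 2x2 cell of the noisy image, giving an input/target pair for training. A minimal numpy version follows; the function name and test image are illustrative, and the training stage itself is omitted.

```python
import numpy as np

def neighbor_subsample(noisy, rng):
    """Split a noisy image into two sub-images by drawing two different
    pixels from every 2x2 cell -- one for the input, one for the target."""
    H, W = noisy.shape
    H2, W2 = H // 2, W // 2
    sub1 = np.empty((H2, W2), dtype=noisy.dtype)
    sub2 = np.empty((H2, W2), dtype=noisy.dtype)
    for i in range(H2):
        for j in range(W2):
            cell = noisy[2 * i:2 * i + 2, 2 * j:2 * j + 2].ravel()
            a, b = rng.choice(4, size=2, replace=False)  # two distinct pixels
            sub1[i, j] = cell[a]
            sub2[i, j] = cell[b]
    return sub1, sub2

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # smooth test image
noisy = clean + 0.1 * rng.standard_normal((16, 16))
inp, tgt = neighbor_subsample(noisy, rng)
print(inp.shape, tgt.shape)  # both (8, 8)
```

Because neighboring pixels share roughly the same underlying signal but carry independent noise, the two sub-images form a valid noisy-input/noisy-target training pair without any clean reference.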
arXiv Detail & Related papers (2021-01-08T02:03:25Z)
- SMDS-Net: Model Guided Spectral-Spatial Network for Hyperspectral Image
Denoising [10.597014770267672]
Deep learning (DL) based hyperspectral image (HSI) denoising approaches directly learn the nonlinear mapping between observed noisy images and underlying clean images.
We introduce a novel model guided interpretable network for HSI denoising.
arXiv Detail & Related papers (2020-12-03T11:05:01Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the resulting hybrid plug-and-play image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
- BP-DIP: A Backprojection based Deep Image Prior [49.375539602228415]
We propose two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch in test time using the degraded image; and (ii) a backprojection (BP) fidelity term, which is an alternative to the standard least squares loss that is usually used in previous DIP works.
We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and better inference run-time.
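The difference between the two fidelity terms can be made concrete: the BP term weights the residual by the pseudo-inverse of the degradation operator instead of using it raw. A small numpy sketch with a hypothetical degradation matrix H standing in for a blur operator (made ill-conditioned on purpose, where the two terms differ most):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# Hypothetical ill-conditioned degradation operator (stand-in for a blur matrix).
H = rng.standard_normal((n, n)) * np.geomspace(1.0, 1e-3, n)
x_true = rng.standard_normal(n)
y = H @ x_true  # degraded observation (noise-free here for simplicity)

H_pinv = np.linalg.pinv(H)

def ls_fidelity(x):
    """Standard least-squares term ||Hx - y||^2."""
    return float(np.sum((H @ x - y) ** 2))

def bp_fidelity(x):
    """Backprojection term ||H^+(Hx - y)||^2: the residual is mapped
    back through the pseudo-inverse before being penalized."""
    return float(np.sum((H_pinv @ (H @ x - y)) ** 2))

x_bad = x_true + 0.1 * rng.standard_normal(n)
print(ls_fidelity(x_true), bp_fidelity(x_true))  # both 0 at the true image
```

Both terms vanish at the true image; they differ in how they weight the residual's components, which is what the paper exploits to improve over the plain DIP loss.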
arXiv Detail & Related papers (2020-03-11T17:09:12Z)
- Towards Deep Unsupervised SAR Despeckling with Blind-Spot Convolutional
Neural Networks [30.410981386006394]
Deep learning techniques have outperformed classical model-based despeckling algorithms.
In this paper, we propose a self-supervised Bayesian despeckling method.
We show that the performance of the proposed network is very close to the supervised training approach on synthetic data and competitive on real data.
arXiv Detail & Related papers (2020-01-15T12:21:12Z)
- Variational Denoising Network: Toward Blind Noise Modeling and Removal [59.36166491196973]
Blind image denoising is an important yet very challenging problem in computer vision.
We propose a new variational inference method, which integrates both noise estimation and image denoising.
arXiv Detail & Related papers (2019-08-29T15:54:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.