Convolutional Autoencoder for Blind Hyperspectral Image Unmixing
- URL: http://arxiv.org/abs/2011.09420v1
- Date: Wed, 18 Nov 2020 17:41:31 GMT
- Title: Convolutional Autoencoder for Blind Hyperspectral Image Unmixing
- Authors: Yasiru Ranasinghe, Sanjaya Herath, Kavinga Weerasooriya, Mevan
Ekanayake, Roshan Godaliyadda, Parakrama Ekanayake, Vijitha Herath
- Abstract summary: spectral unmixing is a technique to decompose a mixed pixel into two fundamental representatives: endmembers and abundances.
In this paper, a novel architecture is proposed to perform blind unmixing on hyperspectral images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the remote sensing context, spectral unmixing is a technique to decompose a
mixed pixel into two fundamental representatives: endmembers and abundances. In
this paper, a novel architecture is proposed to perform blind unmixing on
hyperspectral images. The proposed architecture consists of convolutional
layers followed by an autoencoder. The encoder transforms the feature space
produced by the convolutional layers into a latent space representation. Then,
from these latent characteristics, the decoder reconstructs the rolled-out image
of the monochrome image presented at the input of the architecture; each
single-band image is fed in sequentially. Experimental results on real
hyperspectral data show that the proposed algorithm outperforms existing
unmixing methods at abundance estimation and generates competitive results for
endmember extraction with RMSE and SAD as the metrics, respectively.
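To make the pipeline concrete, here is a minimal, hypothetical PyTorch sketch of the described architecture: convolutional layers extract features from one single-band image at a time, an encoder maps those features to a latent representation, and a decoder reconstructs the rolled-out (flattened) input band. Layer widths, the number of endmembers, and all module names are illustrative assumptions, not the authors' implementation.
```python
# Hypothetical sketch of the convolutional-autoencoder unmixing pipeline.
# Layer widths, R (number of endmembers), and the softmax abundance constraint
# are assumptions inferred from the abstract, not the authors' code.
import torch
import torch.nn as nn

class ConvAEUnmix(nn.Module):
    def __init__(self, height: int, width: int, num_endmembers: int = 4):
        super().__init__()
        self.h, self.w, self.r = height, width, num_endmembers
        # Convolutional layers: spatial feature extraction from a single band.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Encoder: feature space -> latent space representation.
        self.encoder = nn.Linear(32 * height * width, num_endmembers * height * width)
        # Decoder: latent characteristics -> rolled-out (flattened) single-band image.
        self.decoder = nn.Linear(num_endmembers * height * width, height * width)

    def forward(self, band: torch.Tensor) -> torch.Tensor:
        # band: (batch, 1, H, W), one monochrome (single-band) image at a time.
        f = self.features(band).flatten(1)                      # (batch, 32*H*W)
        z = self.encoder(f).view(-1, self.r, self.h * self.w)   # (batch, R, H*W)
        abundances = torch.softmax(z, dim=1)                    # per-pixel simplex (assumed)
        return self.decoder(abundances.flatten(1))              # (batch, H*W)

# Each single-band image of a hyperspectral cube (bands, H, W) would be fed sequentially:
# for b in range(cube.shape[0]):
#     recon = model(cube[b][None, None])  # reconstruct the rolled-out band
```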
Related papers
- Efficient Progressive Image Compression with Variance-aware Masking [13.322199338779237]
We propose a progressive image compression method in which an image is first represented as a pair of base-quality and top-quality latent representations.
A residual latent representation is encoded as the element-wise difference between the top and base representations.
We obtain results competitive with state-of-the-art competitors, while significantly reducing computational complexity, decoding time, and number of parameters.
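A minimal sketch of the residual-latent mechanism described above, assuming the two encoders produce latents of the same shape (names and shapes are hypothetical):
```python
# Sketch of encoding a base-quality latent plus a residual latent; the encoder
# names and tensor shapes are assumptions, not the paper's actual model.
import torch

def split_latents(y_base: torch.Tensor, y_top: torch.Tensor):
    """y_base, y_top: same-shape latents from (hypothetical) base- and top-quality encoders."""
    residual = y_top - y_base        # residual latent: element-wise difference
    return y_base, residual

def recover_top(y_base: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    """A progressive decoder that has received both parts can recover the top-quality latent."""
    return y_base + residual
```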
arXiv Detail & Related papers (2024-11-15T13:34:46Z)
- Improving Diffusion-Based Image Synthesis with Context Prediction [49.186366441954846]
Existing diffusion models mainly try to reconstruct the input image from a corrupted one with a pixel-wise or feature-wise constraint along spatial axes.
We propose ConPreDiff to improve diffusion-based image synthesis with context prediction.
Our ConPreDiff consistently outperforms previous methods and achieves new SOTA text-to-image generation results on MS-COCO, with a zero-shot FID score of 6.21.
arXiv Detail & Related papers (2024-01-04T01:10:56Z)
- Aperture Diffraction for Compact Snapshot Spectral Imaging [27.321750056840706]
We demonstrate a compact, cost-effective snapshot spectral imaging system named the Aperture Diffraction Imaging Spectrometer (ADIS).
A new optical design is introduced in which each point in the object space is multiplexed to discrete encoding locations on the mosaic filter sensor.
The Cascade Shift-Shuffle Spectral Transformer (CSST) with strong perception of the diffraction degeneration is designed to solve a sparsity-constrained inverse problem.
arXiv Detail & Related papers (2023-09-27T16:48:46Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
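As a hedged illustration of these inputs, the sketch below obtains high-frequency components by subtracting a Gaussian-blurred copy of each image; the actual high-frequency decomposition and network used in the paper may differ.
```python
# Hedged sketch: extract high-frequency components by removing a low-pass (blurred)
# copy, then gather them with the original image as multi-branch inputs.
# The Gaussian high-pass choice is an assumption, not necessarily the paper's method.
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    k1d = g / g.sum()
    return torch.outer(k1d, k1d)

def high_frequency(img: torch.Tensor, size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """img: (batch, channels, H, W). Returns the image minus its Gaussian-blurred copy."""
    c = img.shape[1]
    k = gaussian_kernel(size, sigma).expand(c, 1, size, size)
    blurred = F.conv2d(img, k, padding=size // 2, groups=c)
    return img - blurred

def branch_inputs(original: torch.Tensor, rendered: torch.Tensor):
    # Inputs for a (hypothetical) multi-branch detector: the original image plus the
    # high-frequency components of the original and rendered images.
    return original, high_frequency(original), high_frequency(rendered)
```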
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Rank-Enhanced Low-Dimensional Convolution Set for Hyperspectral Image Denoising [50.039949798156826]
This paper tackles the challenging problem of hyperspectral (HS) image denoising.
We propose a rank-enhanced low-dimensional convolution set (Re-ConvSet).
We then incorporate Re-ConvSet into the widely-used U-Net architecture to construct an HS image denoising method.
arXiv Detail & Related papers (2022-07-09T13:35:12Z)
- Semantic Image Synthesis via Diffusion Models [159.4285444680301]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks.
Recent work on semantic image synthesis mainly follows the de facto Generative Adversarial Nets (GANs).
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information captured by a CNN with the global context provided by a transformer for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- LADMM-Net: An Unrolled Deep Network For Spectral Image Fusion From Compressive Data [6.230751621285322]
Hyperspectral (HS) and multispectral (MS) image fusion aims at estimating a high-resolution spectral image from a low-spatial-resolution HS image and a low-spectral-resolution MS image.
In this work, a deep learning architecture under the algorithm unrolling approach is proposed for solving the fusion problem from HS and MS compressive measurements.
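For context, here is a hedged numpy sketch of the linear observation model behind HS/MS fusion and one data-fidelity gradient step of the kind an unrolled (LADMM-style) network would refine; the operators and step size are assumptions, not the paper's exact formulation.
```python
# Hedged sketch of the HS/MS fusion observation model and one unrolled gradient step.
# The operators (average-pooling downsampling, spectral response R) and the step size
# are assumptions for illustration only.
import numpy as np

def spatial_downsample(X: np.ndarray, d: int) -> np.ndarray:
    """X: (bands, H, W) high-res cube -> low-spatial-resolution HS image (d x d averaging)."""
    b, H, W = X.shape
    return X.reshape(b, H // d, d, W // d, d).mean(axis=(2, 4))

def spectral_downsample(X: np.ndarray, R: np.ndarray) -> np.ndarray:
    """R: (ms_bands, bands) spectral response -> low-spectral-resolution MS image."""
    return np.tensordot(R, X, axes=(1, 0))

def unrolled_step(X: np.ndarray, Y_h: np.ndarray, Y_m: np.ndarray,
                  R: np.ndarray, d: int, step: float = 0.1) -> np.ndarray:
    """One data-fidelity gradient step, the kind of update an unrolled network refines."""
    # Residual for the HS (low spatial resolution) measurement, mapped back by the
    # adjoint of average pooling (replication divided by d*d).
    r_h = spatial_downsample(X, d) - Y_h
    grad_h = np.repeat(np.repeat(r_h, d, axis=1), d, axis=2) / (d * d)
    # Residual for the MS (low spectral resolution) measurement, mapped back through R^T.
    r_m = spectral_downsample(X, R) - Y_m
    grad_m = np.tensordot(R.T, r_m, axes=(1, 0))
    return X - step * (grad_h + grad_m)
```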
arXiv Detail & Related papers (2021-03-01T12:04:42Z)
- End-to-End JPEG Decoding and Artifacts Suppression Using Heterogeneous Residual Convolutional Neural Network [0.0]
Existing deep learning models separate JPEG artifact suppression from the decoding protocol as an independent task.
We take one step forward to design a true end-to-end heterogeneous residual convolutional neural network (HR-CNN) with spectrum decomposition and a heterogeneous reconstruction mechanism.
arXiv Detail & Related papers (2020-07-01T17:44:00Z)
- Kullback-Leibler Divergence-Based Fuzzy $C$-Means Clustering Incorporating Morphological Reconstruction and Wavelet Frames for Image Segmentation [152.609322951917]
We develop a Kullback-Leibler (KL) divergence-based Fuzzy C-Means (FCM) algorithm by incorporating a tight wavelet frame transform and a morphological reconstruction operation.
The proposed algorithm performs well and achieves better segmentation performance than other comparative algorithms.
arXiv Detail & Related papers (2020-02-21T05:19:10Z)
- Residual-Sparse Fuzzy $C$-Means Clustering Incorporating Morphological Reconstruction and Wavelet frames [146.63177174491082]
The Fuzzy $C$-Means (FCM) algorithm is combined with a morphological reconstruction operation and a tight wavelet frame transform.
We present an improved FCM algorithm by imposing an $\ell_0$ regularization term on the residual between the feature set and its ideal value.
Experimental results reported for synthetic, medical, and color images show that the proposed algorithm is effective and efficient, and outperforms other algorithms.
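As a generic illustration of the $\ell_0$-regularized residual idea (hard-thresholding is the proximal operator of the $\ell_0$ penalty), the sketch below assumes the "ideal" value is simply a given reference array; it is not the paper's full clustering algorithm.
```python
# Hedged sketch of an l0-regularized residual: hard-thresholding is the proximal
# operator of the l0 penalty. The "ideal" feature value here is an assumed reference
# (e.g., a smoothed copy of the features); the paper's definition may differ.
import numpy as np

def hard_threshold(residual: np.ndarray, lam: float) -> np.ndarray:
    """Proximal step for lam * ||r||_0: keep entries whose squared magnitude exceeds 2*lam."""
    out = residual.copy()
    out[residual**2 <= 2 * lam] = 0.0
    return out

def sparse_residual(features: np.ndarray, ideal: np.ndarray, lam: float = 0.5) -> np.ndarray:
    """Sparse residual between the feature set and its (assumed) ideal value."""
    return hard_threshold(features - ideal, lam)
```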
arXiv Detail & Related papers (2020-02-14T10:00:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.