GAUSS: Guided Encoder-Decoder Architecture for Hyperspectral Unmixing
with Spatial Smoothness
- URL: http://arxiv.org/abs/2204.07713v1
- Date: Sat, 16 Apr 2022 04:23:47 GMT
- Title: GAUSS: Guided Encoder-Decoder Architecture for Hyperspectral Unmixing
with Spatial Smoothness
- Authors: Yasiru Ranasinghe, Kavinga Weerasooriya, Roshan Godaliyadda, Vijitha
Herath, Parakrama Ekanayake, Dhananjaya Jayasundara, Lakshitha Ramanayake,
Neranjan Senarath and Dulantha Wickramasinghe
- Abstract summary: In recent hyperspectral unmixing (HU) literature, the application of deep learning (DL) has become more prominent.
We propose a split architecture and use a pseudo-ground truth for abundances to guide the `unmixing network' (UN) optimization.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent hyperspectral unmixing (HU) literature, the application of deep
learning (DL) has become more prominent, especially with the autoencoder (AE)
architecture. We propose a split architecture and use a pseudo-ground truth for
abundances to guide the optimization of the `unmixing network' (UN). Preceding
the UN, an `approximation network' (AN) is proposed to improve the association
between the centre pixel and its neighbourhood. This accentuates spatial
correlation in the abundances, since the AN's output is both the input to the
UN and the reference for the `mixing network' (MN). In the Guided
Encoder-Decoder Architecture for Hyperspectral Unmixing with Spatial Smoothness
(GAUSS), we propose using one-hot encoded abundances as the pseudo-ground truth
to guide the UN; these are computed with the k-means algorithm, avoiding any
reliance on prior HU methods. Furthermore, we relax the single-layer constraint
on the MN, in contrast to the standard AE for HU, by feeding it the
UN-generated abundances. We then experimented with two modifications of the
pre-trained network. In GAUSS$_\textit{blind}$, the UN and the MN are
concatenated so that the reconstruction-error gradients back-propagate to the
encoder. In GAUSS$_\textit{prime}$, abundances produced by a signal processing
(SP) method with reliable results are used as the pseudo-ground truth within
the GAUSS architecture. According to quantitative and graphical results on four
experimental datasets, the three architectures either surpassed or matched the
performance of existing HU algorithms from both the DL and SP domains.
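As a concrete reading of the abstract, the sketch below mocks up the split GAUSS pipeline in PyTorch: k-means one-hot abundances as the pseudo-ground truth, an approximation network (AN) that ties each centre pixel to its neighbourhood, an unmixing network (UN) that predicts abundances, and a multi-layer mixing network (MN) that reconstructs spectra. Every name, layer size, and loss weighting here (ApproximationNet, UnmixingNet, MixingNet, one_hot_pseudo_gt, train_step, the joint MSE loss) is an illustrative assumption, not the authors' released implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


def one_hot_pseudo_gt(pixels: np.ndarray, n_endmembers: int) -> np.ndarray:
    """Cluster spectra with k-means and return one-hot abundances: the
    pseudo-ground truth that guides the UN without any prior HU method."""
    labels = KMeans(n_clusters=n_endmembers, n_init=10).fit_predict(pixels)
    return np.eye(n_endmembers)[labels]            # (n_pixels, n_endmembers)


class ApproximationNet(nn.Module):
    """AN: associates each centre pixel with its neighbourhood, so the
    abundances downstream inherit spatial correlation."""
    def __init__(self, n_bands: int, window: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(n_bands, n_bands, window, padding=window // 2)

    def forward(self, cube):                       # cube: (B, bands, H, W)
        return self.conv(cube)


class UnmixingNet(nn.Module):
    """UN (encoder): spectrum -> abundances; softmax enforces sum-to-one."""
    def __init__(self, n_bands: int, n_endmembers: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bands, 64), nn.ReLU(),
            nn.Linear(64, n_endmembers), nn.Softmax(dim=-1),
        )

    def forward(self, x):
        return self.net(x)


class MixingNet(nn.Module):
    """MN (decoder): abundances -> spectrum. Because the UN supplies the
    abundances, the MN need not be a single linear layer as in standard AEs."""
    def __init__(self, n_endmembers: int, n_bands: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_endmembers, 64), nn.ReLU(),
            nn.Linear(64, n_bands),
        )

    def forward(self, a):
        return self.net(a)


def train_step(an, un, mn, cube, pseudo_gt, opt, blind=False):
    """One guided step. blind=False: the UN learns mainly from the pseudo-GT.
    blind=True (GAUSS_blind): UN and MN are concatenated, so reconstruction
    gradients also reach the encoder."""
    with torch.no_grad():                          # AN assumed pre-trained
        smoothed = an(cube)                        # (B, bands, H, W)
    spectra = smoothed.permute(0, 2, 3, 1).reshape(-1, smoothed.shape[1])
    abundances = un(spectra)
    recon = mn(abundances if blind else abundances.detach())
    loss = (nn.functional.mse_loss(abundances, pseudo_gt)
            + nn.functional.mse_loss(recon, spectra))
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```

The `blind` flag mirrors the GAUSS$_\textit{blind}$ variant: detaching the abundances keeps the UN guided only by the pseudo-ground truth, while letting gradients through concatenates the UN and MN as the abstract describes. GAUSS$_\textit{prime}$ would simply swap `one_hot_pseudo_gt` for abundances obtained from a reliable SP method.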
Related papers
- Introducing a microstructure-embedded autoencoder approach for reconstructing high-resolution solution field data from a reduced parametric space [0.0]
We develop a novel multi-fidelity deep learning approach that transforms low-fidelity solution maps into high-fidelity ones by incorporating parametric space information into a standard autoencoder architecture.
This method's integration of parametric space information significantly reduces the need for training data to effectively predict high-fidelity solutions from low-fidelity ones.
arXiv Detail & Related papers (2024-05-03T10:00:36Z) - Information-Theoretic Generalization Bounds for Deep Neural Networks [22.87479366196215]
Deep neural networks (DNNs) exhibit an exceptional capacity for generalization in practical applications.
This work aims to capture the effect and benefits of depth for supervised learning via information-theoretic generalization bounds.
arXiv Detail & Related papers (2024-04-04T03:20:35Z) - UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z) - CMG-Net: Robust Normal Estimation for Point Clouds via Chamfer Normal
Distance and Multi-scale Geometry [23.86650228464599]
This work presents an accurate and robust method for estimating normals from point clouds.
We first propose a new metric termed Chamfer Normal Distance to address this issue.
We devise an innovative architecture that encompasses Multi-scale Local Feature Aggregation and Hierarchical Geometric Information Fusion.
arXiv Detail & Related papers (2023-12-14T17:23:16Z) - Sparse-Inductive Generative Adversarial Hashing for Nearest Neighbor
Search [8.020530603813416]
We propose a novel unsupervised hashing method, termed Sparsity-Induced Generative Adversarial Hashing (SiGAH)
SiGAH encodes large-scale high-dimensional features into binary codes, which solves the two problems through a generative adversarial training framework.
Experimental results on four benchmarks, i.e. Tiny100K, GIST1M, Deep1M, and MNIST, have shown that the proposed SiGAH has superior performance over state-of-the-art approaches.
arXiv Detail & Related papers (2023-06-12T08:07:23Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised
Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Momentum Contrastive Autoencoder: Using Contrastive Learning for Latent
Space Distribution Matching in WAE [51.09507030387935]
Wasserstein autoencoder (WAE) shows that matching two distributions is equivalent to minimizing a simple autoencoder (AE) loss under the constraint that the latent space of this AE matches a pre-specified prior distribution.
We propose to use the contrastive learning framework that has been shown to be effective for self-supervised representation learning, as a means to resolve this problem.
We show that using the contrastive learning framework to optimize the WAE loss achieves faster convergence and more stable optimization compared with existing popular algorithms for WAE.
arXiv Detail & Related papers (2021-10-19T22:55:47Z) - Spatial Dependency Networks: Neural Layers for Improved Generative Image
Modeling [79.15521784128102]
We introduce a novel neural network for building image generators (decoders) and apply it to variational autoencoders (VAEs)
In our spatial dependency networks (SDNs), feature maps at each level of a deep neural net are computed in a spatially coherent way.
We show that augmenting the decoder of a hierarchical VAE by spatial dependency layers considerably improves density estimation.
arXiv Detail & Related papers (2021-03-16T07:01:08Z) - Recent Developments Combining Ensemble Smoother and Deep Generative
Networks for Facies History Matching [58.720142291102135]
This research project focuses on the use of autoencoders networks to construct a continuous parameterization for facies models.
We benchmark seven different formulations, including VAE, generative adversarial network (GAN), Wasserstein GAN, variational auto-encoding GAN, principal component analysis (PCA) with cycle GAN, PCA with transfer style network and VAE with style loss.
arXiv Detail & Related papers (2020-05-08T21:32:42Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for deep neural networks at large scale.
Our algorithm requires far fewer communication rounds while retaining theoretical guarantees.
Experiments on several datasets demonstrate the algorithm's effectiveness and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - TopologyGAN: Topology Optimization Using Generative Adversarial Networks
Based on Physical Fields Over the Initial Domain [2.0263791972068628]
We propose a new data-driven topology optimization model called TopologyGAN.
TopologyGAN takes advantage of various physical fields computed on the original, unoptimized material domain, as inputs to the generator of a conditional generative adversarial network (cGAN)
Compared to a baseline cGAN, TopologyGAN achieves a nearly $3\times$ reduction in the mean squared error and a $2.5\times$ reduction in the mean absolute error on test problems involving previously unseen boundary conditions.
arXiv Detail & Related papers (2020-03-05T14:40:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.