SRL-SOA: Self-Representation Learning with Sparse 1D-Operational
Autoencoder for Hyperspectral Image Band Selection
- URL: http://arxiv.org/abs/2202.09918v1
- Date: Sun, 20 Feb 2022 22:17:01 GMT
- Title: SRL-SOA: Self-Representation Learning with Sparse 1D-Operational
Autoencoder for Hyperspectral Image Band Selection
- Authors: Mete Ahishali, Serkan Kiranyaz, Iftikhar Ahmad, Moncef Gabbouj
- Abstract summary: We propose a novel framework for the band selection problem: Self-Representation Learning (SRL) with Sparse 1D-Operational Autoencoder (SOA).
The proposed SRL-SOA approach introduces a novel autoencoder model, SOA, that is designed to learn a representation domain where the data are sparsely represented.
We show that the proposed SRL-SOA band selection approach outperforms the competing methods on two HSI datasets, Indian Pines and Salinas-A.
- Score: 24.003035094461666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Band selection in hyperspectral image (HSI) data processing is an
important task considering its effect on computational complexity and
accuracy. In this work, we propose a novel framework for the band selection
problem: Self-Representation Learning (SRL) with Sparse 1D-Operational
Autoencoder (SOA). The proposed SRL-SOA approach introduces a novel autoencoder
model, SOA, that is designed to learn a representation domain where the data
are sparsely represented. Moreover, the network is composed of 1D-operational
layers with a non-linear neuron model. Hence, the learning capability of
neurons (filters) is greatly improved with shallow architectures. Using compact
architectures is especially crucial in autoencoders as they tend to overfit
easily because of their identity mapping objective. Overall, we show that the
proposed SRL-SOA band selection approach outperforms the competing methods on
two HSI datasets, Indian Pines and Salinas-A, considering the achieved land
cover classification accuracies. The software implementation of the SRL-SOA
approach is shared publicly at https://github.com/meteahishali/SRL-SOA.
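The abstract describes band selection via self-representation: learning coefficients that reconstruct the data from its own bands, then keeping the most influential bands. The paper's actual method uses the SOA network for this; as an illustration only, a minimal linear self-representation baseline (closed-form ridge regression in place of the 1D-operational autoencoder, with band importance scored by coefficient row norms) might look like:

```python
import numpy as np

def select_bands(X, n_bands=10, lam=1e-2):
    """Rank spectral bands by linear self-representation coefficients.

    X: (n_pixels, n_channels) HSI data, flattened spatially.
    Solves min_W ||X - X W||_F^2 + lam ||W||_F^2 in closed form, then
    scores each band by the L2 norm of its row in W: bands that other
    bands rely on for reconstruction get large rows.
    """
    G = X.T @ X                              # (C, C) Gram matrix
    W = np.linalg.solve(G + lam * np.eye(G.shape[0]), G)
    scores = np.linalg.norm(W, axis=1)       # row norms = band importance
    return np.argsort(scores)[::-1][:n_bands]

# toy example: 500 pixels, 20 bands driven by 3 latent spectra
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 20))
X += 0.01 * rng.normal(size=(500, 20))
print(select_bands(X, n_bands=5))
```

This is only a sketch of the self-representation idea under a linear, dense assumption; SRL-SOA instead learns the representation with a sparse 1D-operational autoencoder, which the abstract argues improves the learning capability of shallow architectures.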
Related papers
- Private Training & Data Generation by Clustering Embeddings [74.00687214400021]
Differential privacy (DP) provides a robust framework for protecting individual data.
We introduce a novel principled method for DP synthetic image embedding generation.
Empirically, a simple two-layer neural network trained on synthetically generated embeddings achieves state-of-the-art (SOTA) classification accuracy.
arXiv Detail & Related papers (2025-06-20T00:17:14Z)
- Avoiding $\mathbf{exp(R_{max})}$ scaling in RLHF through Preference-based Exploration [20.76451379043945]
Reinforcement Learning from Human Feedback (RLHF) has emerged as a pivotal technique for large language model (LLM) alignment.
This paper studies the setting of online RLHF and focuses on improving sample efficiency.
arXiv Detail & Related papers (2025-02-02T04:40:04Z)
- HSLiNets: Hyperspectral Image and LiDAR Data Fusion Using Efficient Dual Non-Linear Feature Learning Networks [7.06787067270941]
The integration of hyperspectral imaging (HSI) and LiDAR data within new linear feature spaces offers a promising solution to the challenges posed by the high-dimensionality and redundancy inherent in HSIs.
This study introduces a dual linear fused space framework that capitalizes on bidirectional reversed convolutional neural network (CNN) pathways, coupled with a specialized spatial analysis block.
The proposed method not only enhances data processing and classification accuracy, but also mitigates the computational burden typically associated with advanced models such as Transformers.
arXiv Detail & Related papers (2024-11-30T01:08:08Z)
- Efficient infusion of self-supervised representations in Automatic Speech Recognition [1.2972104025246092]
Self-supervised learned (SSL) models such as Wav2vec and HuBERT yield state-of-the-art results on speech-related tasks.
We propose two simple approaches that use (1) framewise addition and (2) cross-attention mechanisms to efficiently incorporate the representations from the SSL model into the ASR architecture.
Our approach results in faster training and yields significant performance gains on the Librispeech and Tedlium datasets.
arXiv Detail & Related papers (2024-04-19T05:01:12Z)
- ADASR: An Adversarial Auto-Augmentation Framework for Hyperspectral and Multispectral Data Fusion [54.668445421149364]
Deep learning-based hyperspectral image (HSI) super-resolution aims to generate a high-spatial-resolution HSI (HR-HSI) by fusing an HSI and a multispectral image (MSI) with deep neural networks (DNNs).
In this letter, we propose ADASR, a novel adversarial automatic data augmentation framework that automatically optimizes and augments HSI-MSI sample pairs to enrich data diversity for HSI-MSI fusion.
arXiv Detail & Related papers (2023-10-11T07:30:37Z)
- Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z)
- Systematic Architectural Design of Scale Transformed Attention Condenser DNNs via Multi-Scale Class Representational Response Similarity Analysis [93.0013343535411]
We propose a novel type of analysis called Multi-Scale Class Representational Response Similarity Analysis (ClassRepSim).
We show that adding STAC modules to ResNet style architectures can result in up to a 1.6% increase in top-1 accuracy.
Results from ClassRepSim analysis can be used to select an effective parameterization of the STAC module resulting in competitive performance.
arXiv Detail & Related papers (2023-06-16T18:29:26Z)
- KXNet: A Model-Driven Deep Neural Network for Blind Super-Resolution [57.882146858582175]
We propose a model-driven deep neural network, called KXNet, for blind SISR.
The proposed KXNet is fully integrated with the inherent physical mechanism underlying this SISR task.
Experiments on synthetic and real data clearly demonstrate the superior accuracy and generality of our method.
arXiv Detail & Related papers (2022-09-21T12:22:50Z)
- Lightweight Image Super-Resolution with Hierarchical and Differentiable Neural Architecture Search [38.83764580480486]
Single Image Super-Resolution (SISR) tasks have achieved significant performance with deep neural networks.
We propose a novel differentiable Neural Architecture Search (NAS) approach on both the cell-level and network-level to search for lightweight SISR models.
arXiv Detail & Related papers (2021-05-09T13:30:16Z)
- Dynamic RAN Slicing for Service-Oriented Vehicular Networks via Constrained Learning [40.5603189901241]
We investigate a radio access network (RAN) slicing problem for Internet of vehicles (IoV) services with different quality of service (QoS) requirements.
A dynamic RAN slicing framework is presented to dynamically allocate radio spectrum and computing resource.
We show that the RAWS effectively reduces the system cost while satisfying requirements with a high probability, as compared with benchmarks.
arXiv Detail & Related papers (2020-12-03T15:08:38Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computation efficient yet accurate network based on the proposed attentive auxiliary features (A$2$F) for SISR.
Experimental results on a large-scale dataset demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
- Reinforcement Learning with Augmented Data [97.42819506719191]
We present Reinforcement Learning with Augmented Data (RAD), a simple plug-and-play module that can enhance most RL algorithms.
We show that augmentations such as random translate, crop, color jitter, patch cutout, random convolutions, and amplitude scale can enable simple RL algorithms to outperform complex state-of-the-art methods.
arXiv Detail & Related papers (2020-04-30T17:35:32Z)
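The RAD entry above lists several observation-level augmentations (random translate, crop, color jitter, patch cutout, random convolutions, amplitude scale). As an illustration only, and not RAD's actual implementation, minimal NumPy versions of two of them might look like:

```python
import numpy as np

def random_crop(obs, out_size, rng):
    """Crop a random out_size x out_size window from an (H, W, C) observation."""
    h, w, _ = obs.shape
    top = int(rng.integers(0, h - out_size + 1))
    left = int(rng.integers(0, w - out_size + 1))
    return obs[top:top + out_size, left:left + out_size]

def amplitude_scale(obs, rng, lo=0.6, hi=1.4):
    """Multiply the whole observation by a single random scalar."""
    return obs * rng.uniform(lo, hi)

rng = np.random.default_rng(0)
obs = rng.normal(size=(84, 84, 3))   # a typical pixel observation
aug = amplitude_scale(random_crop(obs, 64, rng), rng)
print(aug.shape)                     # (64, 64, 3)
```

The crop size, scale range, and function names here are assumptions chosen for the sketch; the appeal of such augmentations in RL is that they apply to raw observations, so they can be bolted onto most algorithms without changing the learner itself.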
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.