A Biologically Interpretable Two-stage Deep Neural Network (BIT-DNN) For
Vegetation Recognition From Hyperspectral Imagery
- URL: http://arxiv.org/abs/2004.08886v2
- Date: Wed, 14 Apr 2021 14:26:14 GMT
- Title: A Biologically Interpretable Two-stage Deep Neural Network (BIT-DNN) For
Vegetation Recognition From Hyperspectral Imagery
- Authors: Yue Shi, Liangxiu Han, Wenjiang Huang, Sheng Chang, Yingying Dong,
Darren Dancey, Lianghao Han
- Abstract summary: This study proposes a novel interpretable deep learning model, a biologically interpretable two-stage deep neural network (BIT-DNN).
The proposed model has been compared with five state-of-the-art deep learning models.
- Score: 3.708283803668841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spectral-spatial based deep learning models have recently proven to be
effective in hyperspectral image (HSI) classification for various earth
monitoring applications such as land cover classification and agricultural
monitoring. However, because these models behave as "black boxes", explaining and
interpreting their learning process and decisions, especially for vegetation
classification, remains an open challenge. This study proposes a novel
interpretable deep learning model, a biologically interpretable two-stage deep
neural network (BIT-DNN), which incorporates a prior-knowledge-based
spectral-spatial feature transformation (i.e. one grounded in the biophysical and
biochemical attributes of the target entities and their hierarchical structures)
into the framework, achieving both high accuracy and interpretability on
HSI-based classification tasks. The proposed model introduces a two-stage feature
learning process: in the first stage, an enhanced interpretable feature block
extracts the low-level spectral features associated with the biophysical and
biochemical attributes of the target entities; in the second stage, an
interpretable capsule block extracts and encapsulates the high-level joint
spectral-spatial features representing the hierarchical structure of these
attributes, which gives the model improved classification performance and
intrinsic interpretability at reduced computational complexity. We have tested
and evaluated the model on four real HSI datasets for four separate tasks (plant
species classification, land cover classification, urban scene recognition, and
crop disease recognition) and compared it with five state-of-the-art deep
learning models.
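To make the two-stage idea concrete, the following is a minimal PyTorch-style sketch of such a pipeline: a spectral (1x1 convolution) block whose output channels are meant to track per-pixel biophysical/biochemical attributes, followed by a capsule-style block whose vector lengths score the classes. All layer sizes, the attribute count, the band/class counts, and the single-pass (non-iterative) capsule layer are illustrative assumptions, not the authors' actual BIT-DNN implementation.

```python
# Minimal sketch of a two-stage "interpretable feature block -> capsule block"
# classifier for hyperspectral patches, loosely following the BIT-DNN idea.
# All sizes and the simplified (non-iterative) capsule layer are assumptions.
import torch
import torch.nn as nn


def squash(v, dim=-1, eps=1e-8):
    """Capsule squashing: keep direction, map vector length into [0, 1)."""
    sq_norm = (v ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * v / torch.sqrt(sq_norm + eps)


class BITDNNSketch(nn.Module):
    def __init__(self, n_bands=103, n_attributes=16, caps_dim=8, n_classes=9):
        super().__init__()
        # Stage 1: spectral (1x1) convolutions map raw bands to a small set of
        # channels, each intended to track one biophysical/biochemical attribute.
        self.attribute_block = nn.Sequential(
            nn.Conv2d(n_bands, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, n_attributes, kernel_size=1), nn.Sigmoid(),
        )
        # Stage 2: a capsule-style block groups joint spectral-spatial features
        # into vectors ("capsules") whose length encodes class presence.
        self.primary_caps = nn.Conv2d(n_attributes, n_classes * caps_dim,
                                      kernel_size=3, padding=1)
        self.caps_dim, self.n_classes = caps_dim, n_classes

    def forward(self, x):                      # x: (batch, bands, H, W)
        attrs = self.attribute_block(x)        # (batch, n_attributes, H, W)
        caps = self.primary_caps(attrs)        # (batch, n_classes*caps_dim, H, W)
        caps = caps.mean(dim=(2, 3))           # pool spatially
        caps = caps.view(-1, self.n_classes, self.caps_dim)
        caps = squash(caps)                    # one capsule vector per class
        return caps.norm(dim=-1), attrs        # class scores + attribute maps


patch = torch.randn(4, 103, 9, 9)              # 4 patches, 103 bands, 9x9 pixels
scores, attribute_maps = BITDNNSketch()(patch)
print(scores.shape, attribute_maps.shape)      # (4, 9) and (4, 16, 9, 9)
```

In a sketch like this, interpretability would come from inspecting the stage-1 attribute maps through which the class capsules are computed; the actual BIT-DNN, per the abstract, ties those low-level spectral features to biophysical and biochemical attributes and encodes their hierarchical structure in the capsule stage.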
Related papers
- Segmentation by Factorization: Unsupervised Semantic Segmentation for Pathology by Factorizing Foundation Model Features [0.0]
Segmentation by Factorization (F-SEG) is an unsupervised segmentation method for pathology.
It generates segmentation masks from pre-trained deep learning models.
arXiv Detail & Related papers (2024-09-09T15:11:45Z) - Neural Echos: Depthwise Convolutional Filters Replicate Biological
Receptive Fields [56.69755544814834]
We present evidence suggesting that depthwise convolutional kernels effectively replicate the biological receptive fields observed in the mammalian retina.
We propose a scheme that draws inspiration from the biological receptive fields.
arXiv Detail & Related papers (2024-01-18T18:06:22Z) - The Importance of Downstream Networks in Digital Pathology Foundation Models [1.689369173057502]
We evaluate seven feature extractor models across three different datasets with 162 different aggregation model configurations.
We find that the performance of many current feature extractor models is notably similar.
arXiv Detail & Related papers (2023-11-29T16:54:25Z) - DiffSpectralNet : Unveiling the Potential of Diffusion Models for
Hyperspectral Image Classification [6.521187080027966]
We propose a new network called DiffSpectralNet, which combines diffusion and transformer techniques.
First, we use an unsupervised learning framework based on the diffusion model to extract both high-level and low-level spectral-spatial features.
The diffusion method extracts diverse and meaningful spectral-spatial features, leading to improvements in HSI classification.
arXiv Detail & Related papers (2023-10-29T15:26:37Z) - Tertiary Lymphoid Structures Generation through Graph-based Diffusion [54.37503714313661]
In this work, we leverage state-of-the-art graph-based diffusion models to generate biologically meaningful cell-graphs.
We show that the adopted graph diffusion model is able to accurately learn the distribution of cells in terms of their tertiary lymphoid structures (TLS) content.
arXiv Detail & Related papers (2023-10-10T14:37:17Z) - ForamViT-GAN: Exploring New Paradigms in Deep Learning for
Micropaleontological Image Analysis [0.0]
We propose a novel deep learning workflow combining hierarchical vision transformers with style-based generative adversarial network algorithms.
Our study shows that this workflow can generate high-resolution images with a high signal-to-noise ratio (39.1 dB) and realistic synthetic images with a Frechet distance similarity score of 14.88.
For the first time, we performed few-shot semantic segmentation of different foraminifera chambers on both generated and synthetic images with high accuracy.
arXiv Detail & Related papers (2023-04-09T18:49:38Z) - Unified Framework for Histopathology Image Augmentation and Classification via Generative Models [6.404713841079193]
We propose an innovative unified framework that integrates the data generation and model training stages into a single process.
Our approach utilizes a pure Vision Transformer (ViT)-based conditional Generative Adversarial Network (cGAN) model to simultaneously handle both image synthesis and classification.
Our experiments show that our unified synthetic augmentation framework consistently enhances the performance of histopathology image classification models.
arXiv Detail & Related papers (2022-12-20T03:40:44Z) - Structure-Aware Feature Generation for Zero-Shot Learning [108.76968151682621]
We introduce a novel structure-aware feature generation scheme, termed as SA-GAN, to account for the topological structure in learning both the latent space and the generative networks.
Our method significantly enhances the generalization capability on unseen classes and consequently improves classification performance.
arXiv Detail & Related papers (2021-08-16T11:52:08Z) - G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for
Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
arXiv Detail & Related papers (2021-01-27T19:28:04Z) - Deep Low-Shot Learning for Biological Image Classification and
Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared, but labeling training data with precise stages is very time-consuming even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.