Classifying Crop Types using Gaussian Bayesian Models and Neural
Networks on GHISACONUS USGS data from NASA Hyperspectral Satellite Imagery
- URL: http://arxiv.org/abs/2207.11228v1
- Date: Thu, 21 Jul 2022 14:22:05 GMT
- Title: Classifying Crop Types using Gaussian Bayesian Models and Neural
Networks on GHISACONUS USGS data from NASA Hyperspectral Satellite Imagery
- Authors: Bill Basener
- Abstract summary: We provide classification methods for determining crop type in the USGS GHISACONUS data.
We apply standard LDA and QDA as well as Bayesian custom versions that compute the joint probability of crop type and stage.
We also test a single-layer neural network with dropout on the data, which performs comparably to LDA but not as well as the Bayesian methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hyperspectral Imaging is a type of digital imaging in which each pixel
typically contains hundreds of wavelengths of light, providing spectroscopic
information about the materials present in the pixel. In this paper we provide
classification methods for determining crop type in the USGS GHISACONUS data,
which contains around 7,000 pixel spectra from the five major U.S. agricultural
crops (winter wheat, rice, corn, soybeans, and cotton) collected by the NASA
Hyperion satellite, and includes the spectrum, geolocation, crop type, and
stage of growth for each pixel. We apply standard LDA and QDA as well as
Bayesian custom versions that compute the joint probability of crop type and
stage, and then the marginal probability for crop type, outperforming the
non-Bayesian methods. We also test a single-layer neural network with dropout
on the data, which performs comparably to LDA and QDA but not as well as the
Bayesian methods.
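The paper's Bayesian variant can be summarized as: fit one Gaussian per (crop, stage) pair, compute the joint posterior p(crop, stage | spectrum), then marginalize over stage to get p(crop | spectrum). A minimal sketch of that idea, assuming QDA-style per-pair Gaussians (function names, data shapes, and the covariance regularization term are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.stats import multivariate_normal

def fit_joint_gaussians(X, crops, stages):
    """Fit one Gaussian per (crop, stage) pair, QDA-style.

    X: (n_samples, n_bands) pixel spectra; crops, stages: label arrays.
    Returns dict mapping (crop, stage) -> (prior, mean, covariance).
    """
    models, n = {}, len(X)
    for c in np.unique(crops):
        for s in np.unique(stages[crops == c]):
            mask = (crops == c) & (stages == s)
            if mask.sum() < 2:
                continue  # not enough samples to estimate a covariance
            Xi = X[mask]
            cov = np.cov(Xi, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            models[(c, s)] = (mask.sum() / n, Xi.mean(axis=0), cov)
    return models

def predict_crop(x, models):
    """Marginalize the joint posterior p(crop, stage | x) over stage."""
    joint = {(c, s): prior * multivariate_normal.pdf(x, mean=mu, cov=cov)
             for (c, s), (prior, mu, cov) in models.items()}
    total = sum(joint.values())
    marginal = {}
    for (c, _), p in joint.items():
        marginal[c] = marginal.get(c, 0.0) + p / total
    return max(marginal, key=marginal.get)
```

Summing the joint posterior over stage is what distinguishes this from plain QDA on crop labels alone: stage-specific spectral variation is modeled explicitly rather than absorbed into a single broad per-crop covariance.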
Related papers
- A UAV-Based Multispectral and RGB Dataset for Multi-Stage Paddy Crop Monitoring in Indian Agricultural Fields [5.329135985749616]
We present a large-scale unmanned aerial vehicle (UAV)-based RGB and multispectral image dataset collected over paddy fields in Andhra Pradesh, India. We used a 20-megapixel RGB camera and a 5-megapixel four-band multispectral camera capturing red, green, red-edge, and near-infrared bands. Our dataset comprises 42,430 raw images (415 GB) captured over 5 acres with 1 cm/pixel ground sampling distance.
arXiv Detail & Related papers (2026-01-03T06:19:18Z) - Tomato Multi-Angle Multi-Pose Dataset for Fine-Grained Phenotyping [10.807010511060042]
TomatoMAP is a comprehensive dataset for Solanum lycopersicum. Our dataset contains 64,464 RGB images that capture 12 different plant poses from four camera elevation angles. We provide a 3,616-image high-resolution subset with pixel-wise semantic and instance segmentation annotations for fine-grained phenotyping.
arXiv Detail & Related papers (2025-07-15T12:56:13Z) - CARL: Camera-Agnostic Representation Learning for Spectral Image Analysis [75.25966323298003]
Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding.
However, variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies.
We introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities.
arXiv Detail & Related papers (2025-04-27T13:06:40Z) - Spectral Image Data Fusion for Multisource Data Augmentation [44.99833362998488]
Multispectral and hyperspectral images are increasingly popular in different research fields, such as remote sensing, astronomical imaging, or precision agriculture.
However, the amount of freely available data for machine learning tasks is relatively small.
Moreover, artificial intelligence models developed for spectral imaging require input images with a fixed spectral signature.
arXiv Detail & Related papers (2024-04-05T13:40:18Z) - SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - The Canadian Cropland Dataset: A New Land Cover Dataset for
Multitemporal Deep Learning Classification in Agriculture [0.8602553195689513]
We introduce a temporal patch-based dataset of Canadian croplands enriched with labels retrieved from the Canadian Annual Crop Inventory.
The dataset contains 78,536 manually verified high-resolution spatial images from 10 crop classes collected over four crop production years.
As a benchmark, we provide models and source code that allow a user to predict the crop class using a single image (ResNet, DenseNet, EfficientNet) or a sequence of images (LRCN, 3D-CNN) from the same location.
arXiv Detail & Related papers (2023-05-31T18:40:15Z) - Evaluation of the potential of Near Infrared Hyperspectral Imaging for
monitoring the invasive brown marmorated stink bug [53.682955739083056]
The brown marmorated stink bug (BMSB), Halyomorpha halys, is an invasive insect pest of global importance that damages several crops.
The present study consists of a preliminary evaluation at the laboratory level of Near Infrared Hyperspectral Imaging (NIR-HSI) as a possible technology for detecting BMSB specimens.
arXiv Detail & Related papers (2023-01-19T11:37:20Z) - Probabilistic Deep Metric Learning for Hyperspectral Image
Classification [91.5747859691553]
This paper proposes a probabilistic deep metric learning framework for hyperspectral image classification.
It aims to predict the category of each pixel for an image captured by hyperspectral sensors.
Our framework can be readily applied to existing hyperspectral image classification methods.
arXiv Detail & Related papers (2022-11-15T17:57:12Z) - Affinity Feature Strengthening for Accurate, Complete and Robust Vessel
Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z) - High-Resolution UAV Image Generation for Sorghum Panicle Detection [23.88932181375298]
We present an approach that uses synthetic training images from generative adversarial networks (GANs) for data augmentation to enhance the performance of Sorghum panicle detection and counting.
Our method can generate synthetic high-resolution UAV RGB images with panicle labels by using image-to-image translation GANs with a limited ground truth dataset of real UAV RGB images.
arXiv Detail & Related papers (2022-05-08T20:26:56Z) - CalCROP21: A Georeferenced multi-spectral dataset of Satellite Imagery
and Crop Labels [20.951184753721503]
The U.S. Department of Agriculture (USDA) annually releases the Cropland Data Layer (CDL) which contains crop labels at 30m resolution for the entire U.S.
We create a new semantic segmentation benchmark dataset, which we call CalCROP21, for the diverse crops in the Central Valley region of California at 10m spatial resolution.
arXiv Detail & Related papers (2021-07-26T22:20:16Z) - Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of healthy versus stressed crops at the plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z) - A Novel Spatial-Spectral Framework for the Classification of
Hyperspectral Satellite Imagery [1.066048003460524]
We present a novel framework that takes into account both the spectral and spatial information contained in the data for land cover classification.
Our proposed methodology performs better than the earlier approaches by achieving an accuracy of 99.52% and 98.31% on the Pavia University and the Indian Pines datasets respectively.
arXiv Detail & Related papers (2020-07-22T16:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.