CD&S Dataset: Handheld Imagery Dataset Acquired Under Field Conditions
for Corn Disease Identification and Severity Estimation
- URL: http://arxiv.org/abs/2110.12084v1
- Date: Fri, 22 Oct 2021 22:33:51 GMT
- Title: CD&S Dataset: Handheld Imagery Dataset Acquired Under Field Conditions
for Corn Disease Identification and Severity Estimation
- Authors: Aanis Ahmad, Dharmendra Saraswat, Aly El Gamal, and Gurmukh Johal
- Abstract summary: The Corn Disease and Severity dataset consisted of 4455 total images, comprising 2112 field images and 2343 augmented images.
For training disease identification models, half of the imagery data for each disease was annotated using bounding boxes.
For severity estimation, an additional 515 raw images for NLS were acquired and categorized into severity classes ranging from 1 (resistant) to 5 (susceptible).
- Score: 5.949779668853555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate disease identification and severity estimation are important considerations for disease management. Deep learning-based solutions for disease management using imagery datasets are being increasingly explored by the research community. However, most reported studies have relied on imagery datasets acquired under controlled lab conditions; as a result, such models lacked the ability to identify diseases in the field. Therefore, to train a robust deep learning model for field use, an imagery dataset was created from raw images acquired under field conditions with a handheld sensor, together with augmented images with varying backgrounds. The Corn Disease and Severity (CD&S) dataset consisted of 511, 524, and 562 field-acquired raw images corresponding to three common foliar corn diseases, namely Northern Leaf Blight (NLB), Gray Leaf Spot (GLS), and Northern Leaf Spot (NLS), respectively. For training disease identification models, half of the imagery data for each disease was annotated using bounding boxes and also used to generate 2343 additional images through augmentation with three different backgrounds. For severity estimation, an additional 515 raw images for NLS were acquired and categorized into severity classes ranging from 1 (resistant) to 5 (susceptible). Overall, the CD&S dataset consisted of 4455 total images, comprising 2112 field images and 2343 augmented images.
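The varied-background augmentation described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' pipeline; it is a minimal example, assuming Pillow is available and that bounding boxes are stored as (left, upper, right, lower) pixel tuples, of how an annotated disease region could be cropped from a field image and composited onto three different backgrounds. All file names and coordinates are hypothetical.

```python
# Minimal sketch of background-swap augmentation (illustrative only, not the
# CD&S authors' code). Assumes Pillow and hypothetical file names.
from PIL import Image


def augment_with_background(field_image_path, bbox, background_path, out_path):
    """Crop the annotated bounding-box region from a field image and paste it
    onto a different background of the same overall size."""
    field = Image.open(field_image_path).convert("RGB")
    background = Image.open(background_path).convert("RGB")

    # bbox is assumed to be (left, upper, right, lower) in pixel coordinates.
    lesion = field.crop(bbox)

    # Resize the new background to the original image size and paste the crop
    # back at its original location, so only the surroundings change.
    canvas = background.resize(field.size)
    canvas.paste(lesion, (bbox[0], bbox[1]))
    canvas.save(out_path)


# Example: one annotated NLS image combined with three different backgrounds,
# mirroring the three-backgrounds-per-image scheme described in the abstract.
backgrounds = ["background_1.jpg", "background_2.jpg", "background_3.jpg"]
for i, bg in enumerate(backgrounds, start=1):
    augment_with_background("NLS_0001.jpg", (120, 80, 520, 400), bg,
                            f"NLS_0001_aug{i}.jpg")
```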
Related papers
- CO2Wounds-V2: Extended Chronic Wounds Dataset From Leprosy Patients [57.31670527557228] (2024-08-20)
This paper introduces the CO2Wounds-V2 dataset, an extended collection of RGB wound images from leprosy patients.
It aims to enhance the development and testing of image-processing algorithms in the medical field.
- Common and Rare Fundus Diseases Identification Using Vision-Language Foundation Model with Knowledge of Over 400 Diseases [57.27458882764811] (2024-06-13)
Previous foundation models for retinal images were pre-trained with limited disease categories and knowledge bases.
For RetiZero's pre-training, we compiled 341,896 fundus images paired with text descriptions, sourced from public datasets, ophthalmic literature, and online resources.
RetiZero exhibits superior performance in several downstream tasks, including zero-shot disease recognition, image-to-image retrieval, and internal- and cross-domain disease identification.
- STimage-1K4M: A histopathology image-gene expression dataset for spatial transcriptomics [8.881820519705592] (2024-06-10)
STimage-1K4M is a novel dataset designed to bridge the gap by providing genomic features for sub-tile images.
With 4,293,195 pairs of sub-tile images and gene expressions, STimage-1K4M offers unprecedented granularity.
- RadGenome-Chest CT: A Grounded Vision-Language Dataset for Chest CT Analysis [56.57177181778517] (2024-04-25)
RadGenome-Chest CT is a large-scale, region-guided 3D chest CT interpretation dataset based on CT-RATE.
We leverage the latest powerful universal segmentation and large language models to extend the original datasets.
- On the notion of Hallucinations from the lens of Bias and Validity in Synthetic CXR Images [0.35998666903987897] (2023-12-12)
Generative models, such as diffusion models, aim to mitigate data quality and clinical information disparities.
At Stanford, researchers explored the utility of a fine-tuned Stable Diffusion model (RoentGen) for medical imaging data augmentation.
We leveraged RoentGen to produce synthetic chest X-ray (CXR) images and conducted assessments on bias, validity, and hallucinations.
- Cross-modulated Few-shot Image Generation for Colorectal Tissue Classification [58.147396879490124] (2023-04-04)
Our few-shot generation method, named XM-GAN, takes one base and a pair of reference tissue images as input and generates high-quality yet diverse images.
To the best of our knowledge, we are the first to investigate few-shot generation in colorectal tissue images.
- Detection of multiple retinal diseases in ultra-widefield fundus images using deep learning: data-driven identification of relevant regions [2.20200533591633] (2022-03-11)
Ultra-widefield (UWF) imaging is a promising modality that captures a larger retinal field of view.
Previous studies showed that deep learning (DL) models are effective for detecting retinal disease in UWF images.
We propose a DL model that can recognise multiple retinal diseases under more realistic conditions.
- REFUGE2 Challenge: Treasure for Multi-Domain Learning in Glaucoma Assessment [45.41988445653055] (2022-02-18)
The REFUGE2 challenge released 2,000 color fundus images acquired with four camera models: Zeiss, Canon, Kowa, and Topcon.
Three sub-tasks were designed in the challenge, including glaucoma classification, cup/optic disc segmentation, and macular fovea localization.
This article summarizes the methods of some of the finalists and analyzes their results.
- Where is the disease? Semi-supervised pseudo-normality synthesis from an abnormal image [24.547317269668312] (2021-06-24)
We propose a Semi-supervised Medical Image generative LEarning network (SMILE) to generate realistic pseudo-normal images.
Our model outperforms the best state-of-the-art model by up to 6% on the data augmentation task and by 3% in generating high-quality images.
- A Benchmark for Studying Diabetic Retinopathy: Segmentation, Grading, and Transferability [76.64661091980531] (2020-08-22)
People with diabetes are at risk of developing diabetic retinopathy (DR).
Computer-aided DR diagnosis is a promising tool for early detection of DR and severity grading.
This dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.