High-Resolution UAV Image Generation for Sorghum Panicle Detection
- URL: http://arxiv.org/abs/2205.03947v1
- Date: Sun, 8 May 2022 20:26:56 GMT
- Title: High-Resolution UAV Image Generation for Sorghum Panicle Detection
- Authors: Enyu Cai, Zhankun Luo, Sriram Baireddy, Jiaqi Guo, Changye Yang,
Edward J. Delp
- Abstract summary: We present an approach that uses synthetic training images from generative adversarial networks (GANs) for data augmentation to enhance the performance of Sorghum panicle detection and counting.
Our method can generate synthetic high-resolution UAV RGB images with panicle labels by using image-to-image translation GANs with a limited ground truth dataset of real UAV RGB images.
- Score: 23.88932181375298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The number of panicles (or heads) of Sorghum plants is an important
phenotypic trait for plant development and grain yield estimation. The use of
Unmanned Aerial Vehicles (UAVs) enables the capability of collecting and
analyzing Sorghum images on a large scale. Deep learning can provide methods
for estimating phenotypic traits from UAV images but requires a large amount of
labeled data. The lack of training data due to the labor-intensive ground
truthing of UAV images causes a major bottleneck in developing methods for
Sorghum panicle detection and counting. In this paper, we present an approach
that uses synthetic training images from generative adversarial networks (GANs)
for data augmentation to enhance the performance of Sorghum panicle detection
and counting. Our method can generate synthetic high-resolution UAV RGB images
with panicle labels by using image-to-image translation GANs with a limited
ground truth dataset of real UAV RGB images. The results show the improvements
in panicle detection and counting using our data augmentation approach.
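The augmentation strategy the abstract describes — padding a limited real dataset with GAN-generated labeled images before training a detector — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generate_synthetic_sample` is a hypothetical stand-in for the image-to-image translation GAN, producing placeholder image/label pairs instead of real high-resolution UAV imagery.

```python
import random

def generate_synthetic_sample(rng, size=8):
    """Hypothetical stand-in for the image-to-image translation GAN.

    Returns a placeholder RGB 'image' (nested list of pixel tuples) and a
    list of panicle center coordinates playing the role of the labels that
    the GAN carries over from its synthetic label map."""
    image = [[(rng.random(), rng.random(), rng.random())
              for _ in range(size)] for _ in range(size)]
    n_panicles = rng.randint(1, 4)
    labels = [(rng.uniform(0, size), rng.uniform(0, size))
              for _ in range(n_panicles)]
    return image, labels

def build_training_set(real_samples, synthetic_ratio, seed=0):
    """Augment a limited real dataset with synthetic image/label pairs.

    synthetic_ratio is the number of synthetic samples generated per real
    sample; the combined set is shuffled so real and synthetic examples
    are interleaved during training."""
    rng = random.Random(seed)
    n_synthetic = int(len(real_samples) * synthetic_ratio)
    synthetic = [generate_synthetic_sample(rng) for _ in range(n_synthetic)]
    combined = list(real_samples) + synthetic
    rng.shuffle(combined)
    return combined

# Ten real labeled images augmented 2:1 with synthetic ones -> 30 samples.
real = [(f"real_image_{i}", [(1.0, 2.0)]) for i in range(10)]
train_set = build_training_set(real, synthetic_ratio=2.0)
print(len(train_set))
```

The resulting mixed set would then be fed to an off-the-shelf panicle detector; the paper's reported gains come from exactly this kind of real-plus-synthetic training mix.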
Related papers
- Improving Interpretability and Robustness for the Detection of AI-Generated Images [6.116075037154215]
We analyze existing state-of-the-art AIGI detection methods based on frozen CLIP embeddings.
We show how to interpret them, shedding light on how images produced by various AI generators differ from real ones.
arXiv Detail & Related papers (2024-06-21T10:33:09Z) - RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z) - SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation [69.42764583465508]
We explore the potential of generative image diffusion to address the scarcity of annotated data in earth observation tasks.
To the best of our knowledge, we are the first to generate both images and corresponding masks for satellite segmentation.
arXiv Detail & Related papers (2024-03-25T10:30:22Z) - Diffusion Facial Forgery Detection [56.69763252655695]
This paper introduces DiFF, a comprehensive dataset dedicated to face-focused diffusion-generated images.
We conduct extensive experiments on the DiFF dataset via a human test and several representative forgery detection methods.
The results demonstrate that the binary detection accuracy of both human observers and automated detectors often falls below 30%.
arXiv Detail & Related papers (2024-01-29T03:20:19Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - UAV-Sim: NeRF-based Synthetic Data Generation for UAV-based Perception [62.71374902455154]
We leverage recent advancements in neural rendering to improve static and dynamic novel-view UAV-based image rendering.
We demonstrate a considerable performance boost when a state-of-the-art detection model is optimized primarily on hybrid sets of real and synthetic data.
arXiv Detail & Related papers (2023-10-25T00:20:37Z) - Semi-Supervised Object Detection for Sorghum Panicles in UAV Imagery [22.441677896192363]
The sorghum panicle is an important trait related to grain yield and plant development.
Current deep-learning-based object detection methods for panicles require a large amount of training data.
We present an approach to reduce the amount of training data for sorghum panicle detection via semi-supervised learning.
arXiv Detail & Related papers (2023-05-16T21:24:26Z) - Generative models-based data labeling for deep networks regression:
application to seed maturity estimation from UAV multispectral images [3.6868861317674524]
Monitoring seed maturity is an increasing challenge in agriculture due to climate change and more restrictive practices.
Traditional methods are based on limited sampling in the field and analysis in the laboratory.
We propose a method for estimating parsley seed maturity using multispectral UAV imagery, with a new approach for automatic data labeling.
arXiv Detail & Related papers (2022-08-09T09:06:51Z) - Agricultural Plant Cataloging and Establishment of a Data Framework from
UAV-based Crop Images by Computer Vision [4.0382342610484425]
We present a hands-on workflow for the automatized temporal and spatial identification and individualization of crop images from UAVs.
The presented approach improves analysis and interpretation of UAV data in agriculture significantly.
arXiv Detail & Related papers (2022-01-08T21:14:07Z) - Field-Based Plot Extraction Using UAV RGB Images [18.420863296523727]
Unmanned Aerial Vehicles (UAVs) have become popular for use in plant phenotyping of field based crops, such as maize and sorghum.
We propose a new plot extraction method that will segment a UAV image into plots.
arXiv Detail & Related papers (2021-09-01T22:04:59Z) - Potato Crop Stress Identification in Aerial Images using Deep
Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.