Synthesizing Forestry Images Conditioned on Plant Phenotype Using a Generative Adversarial Network
- URL: http://arxiv.org/abs/2307.03789v3
- Date: Thu, 16 Jan 2025 04:13:10 GMT
- Title: Synthesizing Forestry Images Conditioned on Plant Phenotype Using a Generative Adversarial Network
- Authors: Debasmita Pal, Arun Ross
- Abstract summary: This work focuses on generating synthetic forestry images that satisfy certain phenotypic attributes, viz. canopy greenness.
We harness a Generative Adversarial Network (GAN) to synthesize biologically plausible and phenotypically stable forestry images.
- Score: 10.902536447343465
- Abstract: Plant phenology and phenotype prediction using remote sensing data are increasingly gaining attention within the plant science community as a promising approach to enhance agricultural productivity. This work focuses on generating synthetic forestry images that satisfy certain phenotypic attributes, viz. canopy greenness. We harness a Generative Adversarial Network (GAN) to synthesize biologically plausible and phenotypically stable forestry images conditioned on the greenness of vegetation (a continuous attribute) over a specific region of interest, describing a particular vegetation type in a mixed forest. The training data is based on the automated digital camera imagery provided by the National Ecological Observatory Network (NEON) and processed by the PhenoCam Network. Our method helps render the appearance of forest sites specific to a greenness value. The synthetic images are subsequently utilized to predict another phenotypic attribute, viz., redness of plants. The quality of the synthetic images is assessed using the Structural SIMilarity (SSIM) index and Fréchet Inception Distance (FID). Further, the greenness and redness indices of the synthetic images are compared against those of the original images using Root Mean Squared Percentage Error (RMSPE) to evaluate their accuracy and integrity. The generalizability and scalability of our proposed GAN model are established by effectively transforming it to generate synthetic images for other forest sites and vegetation types. From a broader perspective, this approach could be leveraged to visualize forestry based on different phenotypic attributes in the context of various environmental parameters.
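The greenness index used for conditioning in PhenoCam-style imagery is conventionally the green chromatic coordinate (GCC), i.e. the green fraction of the RGB sum over a region of interest, and RMSPE is the standard percentage-error metric. A minimal sketch of both, assuming this conventional GCC definition (function names and toy values are illustrative, not taken from the paper):

```python
import numpy as np

def gcc(image):
    """Green chromatic coordinate: mean of G / (R + G + B) over an ROI.
    `image` is an (H, W, 3) RGB array."""
    img = image.astype(float)
    rgb_sum = np.clip(img.sum(axis=2), 1e-8, None)  # avoid divide-by-zero
    return float(np.mean(img[..., 1] / rgb_sum))

def rmspe(y_true, y_pred):
    """Root Mean Squared Percentage Error between two index series."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(((y_true - y_pred) / y_true) ** 2)) * 100)

# A purely green toy ROI: all of the RGB sum comes from the G channel.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 1] = 120
print(gcc(img))                            # 1.0
print(rmspe([0.40, 0.42], [0.38, 0.42]))   # ≈ 3.54 (percent)
```

The same RMSPE function applies unchanged to the redness index (RCC, the red fraction), so one error metric covers both phenotypic attributes compared in the abstract.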
Related papers
- An Interpretable Neural Network for Vegetation Phenotyping with Visualization of Trait-Based Spectral Features [0.0]
We present an interpretable neural network trained on the UPWINS spectral library, which contains spectra with rich metadata capturing variation in species, health, growth stage, annual cycle, and environmental conditions for 13 selected indicator species and common natural background species.
We show that the neurons in the network learn spectral indicators for chemical and physiological traits through visualization of the network weights, and we show how these traits are combined by the network for species identification with an accuracy around 90% on a test set.
arXiv Detail & Related papers (2024-07-14T21:20:37Z)
- BonnBeetClouds3D: A Dataset Towards Point Cloud-based Organ-level Phenotyping of Sugar Beet Plants under Field Conditions [30.27773980916216]
Agricultural production faces severe challenges over the coming decades, driven by climate change and the need for sustainability.
Advancements in field management through non-chemical weeding by robots in combination with monitoring of crops by autonomous unmanned aerial vehicles (UAVs) are helpful to address these challenges.
The analysis of plant traits, called phenotyping, is an essential activity in plant breeding; however, it involves a great amount of manual labor.
arXiv Detail & Related papers (2023-12-22T14:06:44Z)
- Intriguing properties of synthetic images: from generative adversarial networks to diffusion models [19.448196464632]
It is important to gain insight into which image features better discriminate fake images from real ones.
In this paper we report on our systematic study of a large number of image generators of different families, aimed at discovering the most forensically relevant characteristics of real and generated images.
arXiv Detail & Related papers (2023-04-13T11:13:19Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z)
- Neuroevolution-based Classifiers for Deforestation Detection in Tropical Forests [62.997667081978825]
Millions of hectares of tropical forests are lost every year due to deforestation or degradation.
Monitoring and deforestation-detection programs are in use, alongside public policies aimed at preventing and punishing offenders.
This paper proposes the use of pattern classifiers based on neuroevolution technique (NEAT) in tropical forest deforestation detection tasks.
arXiv Detail & Related papers (2022-08-23T16:04:12Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
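The Dice coefficient reported above is a standard overlap measure between a predicted segmentation mask and the ground truth. A minimal sketch (the toy masks are illustrative, not from the paper's data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / total)

pred  = [[1, 1, 0], [0, 1, 0]]  # hypothetical predicted stress mask
truth = [[1, 0, 0], [0, 1, 1]]  # hypothetical ground-truth mask
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```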
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Temporal Prediction and Evaluation of Brassica Growth in the Field using Conditional Generative Adversarial Networks [1.2926587870771542]
The prediction of plant growth is a major challenge, as it is affected by numerous and highly variable environmental factors.
This paper proposes a novel monitoring approach that comprises high-throughput imaging sensor measurements and their automatic analysis.
The core of our approach is a novel machine-learning growth model built on conditional generative adversarial networks.
arXiv Detail & Related papers (2021-05-17T13:00:01Z)
- Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition [61.87842307164351]
We first propose an Identity-Aware CycleGAN (IACycleGAN) model that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the synthesis of key facial regions, such as eyes and nose.
We develop a mutual optimization procedure between the synthesis model and the recognition model, in which IACycleGAN iteratively synthesizes better images.
arXiv Detail & Related papers (2021-03-30T01:30:08Z)
- Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
- Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information (including all content) and is not responsible for any consequences of its use.