Data-driven Crop Growth Simulation on Time-varying Generated Images using Multi-conditional Generative Adversarial Networks
- URL: http://arxiv.org/abs/2312.03443v1
- Date: Wed, 6 Dec 2023 11:54:50 GMT
- Title: Data-driven Crop Growth Simulation on Time-varying Generated Images using Multi-conditional Generative Adversarial Networks
- Authors: Lukas Drees, Dereje T. Demie, Madhuri R. Paul, Johannes Leonhardt, Sabine J. Seidel, Thomas F. Döring, Ribana Roscher
- Abstract summary: We present a two-stage framework consisting first of an image prediction model and second of a growth estimation model.
The image prediction model is a conditional Wasserstein generative adversarial network (CWGAN).
In the generator of this model, conditional batch normalization (CBN) is used to integrate different conditions along with the input image.
These images are used by the second part of the framework for plant phenotyping by deriving plant-specific traits.
- Score: 2.513679466277441
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image-based crop growth modeling can substantially contribute to precision
agriculture by revealing spatial crop development over time, which allows an
early and location-specific estimation of relevant future plant traits, such as
leaf area or biomass. A prerequisite for realistic and sharp crop image
generation is the integration of multiple growth-influencing conditions in a
model, such as an image of an initial growth stage, the associated growth time,
and further information about the field treatment. We present a two-stage
framework consisting first of an image prediction model and second of a growth
estimation model, both of which are trained independently. The image prediction
model is a conditional Wasserstein generative adversarial network (CWGAN). In
the generator of this model, conditional batch normalization (CBN) is used to
integrate different conditions along with the input image. This allows the
model to generate time-varying artificial images dependent on multiple
influencing factors of different kinds. These images are used by the second
part of the framework for plant phenotyping by deriving plant-specific traits
and comparing them with those of non-artificial (real) reference images. For
various crop datasets, the framework allows realistic, sharp image predictions
with a slight loss of quality from short-term to long-term predictions.
Simulations of varying growth-influencing conditions performed with the trained
framework provide valuable insights into how such factors relate to crop
appearances, which is particularly useful in complex, less explored crop
mixture systems. Further results show that adding process-based simulated
biomass as a condition increases the accuracy of the derived phenotypic traits
from the predicted images. This demonstrates the potential of our framework to
serve as an interface between an image- and process-based crop growth model.
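The abstract names conditional batch normalization as the mechanism that injects growth conditions into the generator, but no code accompanies it. Below is a minimal PyTorch-style sketch of CBN under stated assumptions: module and variable names are illustrative, the condition vector stands in for embedded factors such as growth time, treatment, or simulated biomass, and nothing here is taken from the authors' implementation.

```python
# Minimal sketch of conditional batch normalization (CBN): normalize without
# a learned affine transform, then predict per-channel scale and shift from
# a condition vector. Names and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_features: int, cond_dim: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma = nn.Linear(cond_dim, num_features)  # per-channel scale
        self.beta = nn.Linear(cond_dim, num_features)   # per-channel shift

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        h = self.bn(x)
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return (1.0 + g) * h + b  # modulate around the identity mapping

# Usage: modulate generator feature maps with an embedded condition vector
# (e.g., growth time, field treatment, simulated biomass).
feats = torch.randn(4, 64, 32, 32)   # intermediate generator features
cond = torch.randn(4, 16)            # embedded growth-influencing conditions
out = ConditionalBatchNorm2d(64, cond_dim=16)(feats, cond)  # same shape as feats
```

Predicting the scale as a residual around 1 keeps the layer close to plain batch normalization at initialization, a common stabilizing choice in conditional GAN generators.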
Related papers
- Generative Plant Growth Simulation from Sequence-Informed Environmental Conditions [0.32985979395737786]
A plant growth simulation can be characterized as a reconstructed visual representation of a plant or plant system.
We introduce a sequence-informed plant growth simulation framework (SI-PGS) that employs a conditional generative model to implicitly learn a distribution of possible plant representations.
We demonstrate that SI-PGS is able to capture temporal dependencies and continuously generate realistic frames of plant growth.
arXiv Detail & Related papers (2024-05-23T17:06:46Z)
- Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose a dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z)
- Counterfactual Image Editing [54.21104691749547]
Counterfactual image editing is an important task in generative AI, which asks how an image would look if certain features were different.
We formalize the counterfactual image editing task using formal language, modeling the causal relationships between latent generative factors and images.
We develop an efficient algorithm to generate counterfactual images by leveraging neural causal models.
arXiv Detail & Related papers (2024-02-07T20:55:39Z)
- Combining Satellite and Weather Data for Crop Type Mapping: An Inverse Modelling Approach [23.23933321161625]
We propose a deep learning model that combines weather (Daymet) and satellite imagery (Sentinel-2) to generate accurate crop maps.
We show that our approach provides significant improvements over existing algorithms that rely solely on spectral imagery.
We conclude by correlating our results with crop phenology to show that WSTATT is able to capture physical properties of crop growth. (A generic two-stream fusion sketch inspired by this entry appears after this list.)
arXiv Detail & Related papers (2024-01-29T04:15:22Z)
- PlantPlotGAN: A Physics-Informed Generative Adversarial Network for Plant Disease Prediction [2.7409168462107347]
We propose PlantPlotGAN, a physics-informed generative model capable of creating synthetic multispectral plot images with realistic vegetation indices.
The results demonstrate that the synthetic imagery generated from PlantPlotGAN outperforms state-of-the-art methods in terms of the Fréchet inception distance.
arXiv Detail & Related papers (2023-10-27T16:56:28Z)
- Steered Diffusion: A Generalized Framework for Plug-and-Play Conditional Image Synthesis [62.07413805483241]
Steered Diffusion is a framework for zero-shot conditional image generation using a diffusion model trained for unconditional generation.
We present experiments using steered diffusion on several tasks, including inpainting, colorization, text-guided semantic editing, and image super-resolution. (A generic loss-guided sampling sketch in this spirit appears after this list.)
arXiv Detail & Related papers (2023-09-30T02:03:22Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaf images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z)
- Auto-regressive Image Synthesis with Integrated Quantization [55.51231796778219]
This paper presents a versatile framework for conditional image generation.
It incorporates the inductive bias of CNNs and powerful sequence modeling of auto-regression.
Our method achieves superior performance in diverse image generation compared with the state of the art.
arXiv Detail & Related papers (2022-07-21T22:19:17Z)
- An Applied Deep Learning Approach for Estimating Soybean Relative Maturity from UAV Imagery to Aid Plant Breeding Decisions [7.4022258821325115]
We develop a robust and automatic approach for estimating the relative maturity of soybeans using a time series of UAV images.
An end-to-end hybrid model combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) is proposed to extract features.
Results suggest the effectiveness of our proposed CNN-LSTM model compared to the local regression method. (A minimal sketch of such a CNN-LSTM hybrid appears after this list.)
arXiv Detail & Related papers (2021-08-02T14:53:58Z)
- Temporal Prediction and Evaluation of Brassica Growth in the Field using Conditional Generative Adversarial Networks [1.2926587870771542]
The prediction of plant growth is a major challenge, as it is affected by numerous and highly variable environmental factors.
This paper proposes a novel monitoring approach that comprises high-throughput imaging sensor measurements and their automatic analysis.
The core of our approach is a novel machine-learning growth model built on conditional generative adversarial networks.
arXiv Detail & Related papers (2021-05-17T13:00:01Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
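For the crop type mapping entry above, here is a hedged sketch of the two-stream idea it describes: one recurrent branch encodes a Sentinel-2 spectral time series, another encodes Daymet weather, and the fused features are classified. All layer choices, dimensions, and names are assumptions for illustration, not the paper's WSTATT architecture.

```python
# Hedged two-stream fusion sketch: satellite and weather time series are
# encoded separately, then concatenated for crop-type classification.
import torch
import torch.nn as nn

class WeatherSatelliteFusion(nn.Module):
    def __init__(self, n_bands: int = 10, n_weather: int = 7, n_classes: int = 12):
        super().__init__()
        self.sat_enc = nn.GRU(n_bands, 32, batch_first=True)    # Sentinel-2 branch
        self.met_enc = nn.GRU(n_weather, 16, batch_first=True)  # Daymet branch
        self.cls = nn.Linear(32 + 16, n_classes)

    def forward(self, s2: torch.Tensor, met: torch.Tensor) -> torch.Tensor:
        # s2: (B, T_s, n_bands); met: (B, T_m, n_weather); lengths may differ
        _, h_s = self.sat_enc(s2)
        _, h_m = self.met_enc(met)
        return self.cls(torch.cat([h_s[-1], h_m[-1]], dim=-1))  # class logits

logits = WeatherSatelliteFusion()(torch.randn(4, 20, 10), torch.randn(4, 180, 7))
```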
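For the Steered Diffusion entry, a generic loss-guided sampling nudge illustrates how an unconditionally trained denoiser can be steered at inference time. This is the general guidance pattern, not the paper's exact algorithm; the toy denoiser, schedule value, and step size are placeholders.

```python
# Generic guidance nudge inside a diffusion sampler: estimate the clean image
# from the current sample, score it with a task loss, and move the sample
# against the loss gradient before the next denoising step.
import torch

def guided_nudge(x_t, t, denoiser, alpha_bar_t, guidance_loss, step_size=0.1):
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)                                         # predicted noise
    x0_hat = (x_t - (1.0 - alpha_bar_t) ** 0.5 * eps) / alpha_bar_t ** 0.5
    grad = torch.autograd.grad(guidance_loss(x0_hat), x_t)[0]
    return (x_t - step_size * grad).detach()

# Toy usage: the stand-in denoiser predicts zero noise; the loss enforces
# known pixels, as a zero-shot inpainting-style condition would.
denoiser = lambda x, t: torch.zeros_like(x)
known, mask = torch.rand(1, 3, 8, 8), torch.zeros(1, 3, 8, 8)
mask[..., :4] = 1.0
loss_fn = lambda x0: ((mask * (x0 - known)) ** 2).mean()
x_next = guided_nudge(torch.randn(1, 3, 8, 8), 10, denoiser, torch.tensor(0.5), loss_fn)
```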
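For the soybean relative-maturity entry, a minimal sketch of a CNN-LSTM hybrid of the kind it describes: a small CNN encodes each UAV image, an LSTM aggregates the flight-date sequence, and a linear head regresses maturity. The backbone, sizes, and names are assumptions, not the authors' model.

```python
# Hedged CNN-LSTM sketch: per-frame CNN features -> LSTM over time -> scalar.
import torch
import torch.nn as nn

class CNNLSTMRegressor(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(            # tiny per-frame image encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)     # relative maturity (scalar)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) -- time series of UAV images per plot
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1]).squeeze(-1)

pred = CNNLSTMRegressor()(torch.randn(2, 8, 3, 64, 64))  # 2 plots, 8 flight dates
```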
This list is automatically generated from the titles and abstracts of the papers on this site.