Transfer Learning of Photometric Phenotypes in Agriculture Using Metadata
- URL: http://arxiv.org/abs/2004.00303v1
- Date: Wed, 1 Apr 2020 09:24:34 GMT
- Title: Transfer Learning of Photometric Phenotypes in Agriculture Using
Metadata
- Authors: Dan Halbersberg, Aharon Bar Hillel, Shon Mendelson, Daniel Koster,
Lena Karol, and Boaz Lerner
- Abstract summary: Estimation of photometric plant phenotypes (e.g., hue, shine, chroma) in field conditions is important for decisions on the expected yield quality, fruit ripeness, and need for further breeding.
We combine the image with metadata about the capturing conditions, embedded into a network, enabling more accurate estimation and transfer between different conditions.
Compared to a state-of-the-art deep CNN and a human expert, metadata embedding improves the estimation of the tomato's hue and chroma.
- Score: 6.034634994180246
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimation of photometric plant phenotypes (e.g., hue, shine, chroma) in
field conditions is important for decisions on the expected yield quality,
fruit ripeness, and need for further breeding. Estimating these from images is
difficult due to large variances in lighting conditions, shadows, and sensor
properties. We combine the image with metadata about the capturing conditions,
embedded into a network, enabling more accurate estimation and transfer between
different conditions. Compared to a state-of-the-art deep CNN and a human
expert, metadata embedding improves the estimation of the tomato's hue and
chroma.
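
The abstract does not include code; the following is a minimal sketch of the general idea of metadata embedding for photometric phenotype regression, assuming a categorical capture-condition identifier and a standard CNN backbone. All names and dimensions here are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

class MetadataEmbeddingRegressor(nn.Module):
    """Fuse CNN image features with an embedding of capture-condition metadata."""
    def __init__(self, num_conditions: int = 10, embed_dim: int = 16, phenotype_dim: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)   # any CNN backbone could be used
        feat_dim = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                # keep pooled image features only
        self.backbone = backbone
        # Embed a categorical id describing capture conditions (camera, session, lighting, ...)
        self.meta_embed = nn.Embedding(num_conditions, embed_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, phenotype_dim),         # e.g., hue and chroma
        )

    def forward(self, image, condition_id):
        img_feat = self.backbone(image)            # (B, feat_dim)
        meta_feat = self.meta_embed(condition_id)  # (B, embed_dim)
        return self.head(torch.cat([img_feat, meta_feat], dim=1))

# Usage: preds = MetadataEmbeddingRegressor()(torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,)))
```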
Related papers
- Data Augmentation via Latent Diffusion for Saliency Prediction [67.88936624546076] (arXiv 2024-09-11)
Saliency prediction models are constrained by the limited diversity and quantity of labeled data.
We propose a novel data augmentation method for deep saliency prediction that edits natural images while preserving the complexity and variability of real-world scenes.
- Intrinsic Image Diffusion for Indoor Single-view Material Estimation [55.276815106443976] (arXiv 2023-12-19)
We present Intrinsic Image Diffusion, a generative model for appearance decomposition of indoor scenes.
Given a single input view, we sample multiple possible material explanations represented as albedo, roughness, and metallic maps.
Our method produces significantly sharper, more consistent, and more detailed materials, outperforming state-of-the-art methods by 1.5 dB in PSNR and by a 45% better FID score on albedo prediction.
- Data-driven Crop Growth Simulation on Time-varying Generated Images using Multi-conditional Generative Adversarial Networks [2.513679466277441] (arXiv 2023-12-06)
We present a two-stage framework consisting of an image prediction model followed by a growth estimation model.
The image prediction model is a conditional Wasserstein generative adversarial network (CWGAN).
In the generator of this model, conditional batch normalization (CBN) is used to integrate different conditions along with the input image.
These images are used by the second part of the framework for plant phenotyping by deriving plant-specific traits.
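
For context on the CBN mechanism mentioned in this entry, here is a minimal, assumed sketch of conditional batch normalization in PyTorch (not the paper's code): the affine scale and shift of the normalization are predicted from a condition vector, which is how a conditional generator can inject growth conditions alongside the image.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """BatchNorm whose per-channel scale/shift are predicted from a condition vector."""
    def __init__(self, num_features: int, cond_dim: int):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)  # normalization only
        self.gamma = nn.Linear(cond_dim, num_features)         # predicted scale
        self.beta = nn.Linear(cond_dim, num_features)          # predicted shift

    def forward(self, x, cond):
        out = self.bn(x)
        g = self.gamma(cond).unsqueeze(-1).unsqueeze(-1)        # (B, C, 1, 1)
        b = self.beta(cond).unsqueeze(-1).unsqueeze(-1)
        return g * out + b

# Usage: y = ConditionalBatchNorm2d(64, cond_dim=8)(torch.randn(2, 64, 32, 32), torch.randn(2, 8))
```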
- Geometric Data Augmentations to Mitigate Distribution Shifts in Pollen Classification from Microscopic Images [4.545340728210854] (arXiv 2023-11-18)
We leverage the domain knowledge that geometric features are highly important for accurate pollen identification.
We introduce two novel geometric image augmentation techniques to significantly narrow the accuracy gap between the model performance on the train and test datasets.
- Generative models-based data labeling for deep networks regression: application to seed maturity estimation from UAV multispectral images [3.6868861317674524] (arXiv 2022-08-09)
Monitoring seed maturity is an increasing challenge in agriculture due to climate change and more restrictive practices.
Traditional methods are based on limited sampling in the field and analysis in the laboratory.
We propose a method for estimating parsley seed maturity using multispectral UAV imagery, with a new approach for automatic data labeling.
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876] (arXiv 2022-08-04)
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228] (arXiv 2022-02-27)
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two histopathologic image (HI) datasets.
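
As a rough illustration of the general idea of pairing a diversity measure with the discrete wavelet transform (the exact measures and pipeline in that paper may differ), a Shannon-entropy-style index can be computed per 2D DWT subband and used as a compact texture descriptor:

```python
import numpy as np
import pywt  # PyWavelets

def shannon_diversity(values: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy of a coefficient histogram, one simple diversity measure."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def dwt_texture_descriptor(gray_image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Diversity of each subband: approximation, horizontal, vertical, diagonal detail."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), wavelet)
    return np.array([shannon_diversity(band.ravel()) for band in (cA, cH, cV, cD)])

# Usage: features = dwt_texture_descriptor(gray_patch); feed the vector to any classifier.
```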
- An Applied Deep Learning Approach for Estimating Soybean Relative Maturity from UAV Imagery to Aid Plant Breeding Decisions [7.4022258821325115] (arXiv 2021-08-02)
We develop a robust and automatic approach for estimating the relative maturity of soybeans using a time series of UAV images.
An end-to-end hybrid model combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) is proposed to extract features.
Results suggest the effectiveness of our proposed CNN-LSTM model compared to the local regression method.
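
A minimal sketch of such a CNN-LSTM regressor, assuming a ResNet-18 encoder per image date and a single maturity value per plot (backbone and hyperparameters are illustrative, not the authors'):

```python
import torch
import torch.nn as nn
from torchvision import models

class CNNLSTMRegressor(nn.Module):
    """Encode each image in a time series with a CNN, aggregate with an LSTM, regress maturity."""
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        cnn = models.resnet18(weights=None)
        feat_dim = cnn.fc.in_features
        cnn.fc = nn.Identity()
        self.cnn = cnn
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)        # relative maturity score

    def forward(self, image_seq):                   # (B, T, 3, H, W)
        B, T = image_seq.shape[:2]
        feats = self.cnn(image_seq.flatten(0, 1)).view(B, T, -1)
        _, (h_n, _) = self.lstm(feats)              # last hidden state summarizes the series
        return self.head(h_n[-1]).squeeze(-1)       # one value per plot

# Usage: pred = CNNLSTMRegressor()(torch.randn(2, 5, 3, 224, 224))
```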
- Using depth information and colour space variations for improving outdoor robustness for instance segmentation of cabbage [62.997667081978825] (arXiv 2021-03-31)
This research focuses on improving instance segmentation of field crops under varying environmental conditions.
The influence of depth information and different colour space representations was analysed.
Results showed that depth combined with colour information leads to a segmentation accuracy increase of 7.1%.
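
A minimal sketch of the input side of this idea, assuming an HSV conversion and a normalized depth map stacked as a fourth channel (the paper's exact colour spaces and segmentation network are not reproduced here):

```python
import cv2
import numpy as np

def build_rgbd_input(bgr_uint8: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Return an (H, W, 4) float array: HSV colour channels plus normalized depth."""
    hsv = cv2.cvtColor(bgr_uint8, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv /= np.array([179.0, 255.0, 255.0], dtype=np.float32)   # OpenCV uint8 HSV channel ranges
    d = depth.astype(np.float32)
    d = (d - d.min()) / max(float(d.max() - d.min()), 1e-6)    # scale depth to [0, 1]
    return np.dstack([hsv, d])

# Usage (hypothetical file names): x = build_rgbd_input(cv2.imread("cabbage.png"), depth_map)
# then transpose HWC -> CHW before feeding a segmentation network.
```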
- A Robust Illumination-Invariant Camera System for Agricultural Applications [7.349727826230863] (arXiv 2021-01-06)
Object detection and semantic segmentation are two of the most widely adopted deep learning algorithms in agricultural applications.
We present a high throughput robust active lighting-based camera system that generates consistent images in all lighting conditions.
On average, deep nets for object detection trained on consistent data required nearly four times less data to achieve a similar level of accuracy.
- Automatic image-based identification and biomass estimation of invertebrates [70.08255822611812] (arXiv 2020-02-05)
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art ResNet-50 and InceptionV3 CNNs for the classification task.
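
For illustration only (not the study's code), fine-tuning a pretrained ResNet-50 for a fixed set of taxa amounts to replacing the classification head; `num_taxa` is an assumed placeholder, and InceptionV3 can be swapped in analogously:

```python
import torch.nn as nn
from torchvision import models

def build_taxa_classifier(num_taxa: int = 50) -> nn.Module:
    # Load ImageNet-pretrained weights and replace the final fully connected layer.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_taxa)
    return model

# Usage: logits = build_taxa_classifier()(images)  # images: (B, 3, 224, 224) normalized tensors
```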