Monitoring crop phenology with street-level imagery using computer
vision
- URL: http://arxiv.org/abs/2112.09190v1
- Date: Thu, 16 Dec 2021 20:36:45 GMT
- Title: Monitoring crop phenology with street-level imagery using computer
vision
- Authors: Raphaël d'Andrimont, Momchil Yordanov, Laura Martinez-Sanchez, Marijn van der Velde
- Abstract summary: We present a framework to collect and extract crop type and phenological information from street level imagery using computer vision.
During the 2018 growing season, high definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Street-level imagery holds significant potential to scale up in-situ data collection. This is enabled by combining cheap, high-quality cameras with recent advances in deep learning compute solutions to derive relevant thematic information. We present a framework to collect and extract crop type and phenological information from street-level imagery using computer vision. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Each month from March to October, a fixed 200-km route was surveyed, collecting one picture per second and resulting in a total of 400,000 geo-tagged pictures. At 220 specific parcel locations, detailed on-the-spot crop phenology observations were recorded for 17 crop types. Furthermore, the time span included specific pre-emergence parcel stages, such as differently cultivated bare soil for spring and summer crops, as well as post-harvest cultivation practices, e.g. green manuring and catch crops. Classification was done in TensorFlow with a well-known image recognition model based on transfer learning with convolutional neural networks (MobileNet). A hypertuning methodology was developed to obtain the best-performing model among 160 candidates. This best model was applied to an independent inference set, discriminating crop type with a macro F1 score of 88.1% and main phenological stage with 86.9% at the parcel level. Potential and caveats of the approach, along with practical considerations for implementation and improvement, are discussed. The proposed framework speeds up high-quality in-situ data collection and suggests avenues for massive data collection via automated classification using computer vision.
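The abstract describes the classification approach at a high level: a MobileNet backbone fine-tuned via transfer learning in TensorFlow, followed by a hyperparameter search to select the best of 160 candidate models. The sketch below illustrates that general pattern with current Keras APIs; the directory layout, image size, hyperparameter grid, and epoch count are illustrative assumptions and do not reproduce the authors' exact configuration.

```python
# Minimal sketch: transfer learning with a frozen MobileNet backbone for
# crop-type classification from street-level images, plus a toy grid search
# standing in for the paper's hypertuning over 160 models.
import itertools
import tensorflow as tf

IMG_SIZE = (224, 224)   # MobileNet's default input resolution (assumption)
NUM_CLASSES = 17        # 17 crop types reported in the paper


def build_model(learning_rate: float, dropout_rate: float) -> tf.keras.Model:
    """Frozen ImageNet MobileNet features with a small trainable head."""
    base = tf.keras.applications.MobileNet(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False  # transfer learning: keep pretrained features fixed

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(inputs)  # [0,255] -> [-1,1]
    x = base(x, training=False)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    x = tf.keras.layers.Dropout(dropout_rate)(x)
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


# Hypothetical directory layout: one sub-folder per crop type.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "street_level_images/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "street_level_images/val", image_size=IMG_SIZE, batch_size=32)

# Small illustrative grid; the paper's search covered 160 model configurations.
best_acc, best_model = 0.0, None
for lr, dropout in itertools.product([1e-3, 1e-4], [0.2, 0.5]):
    model = build_model(lr, dropout)
    history = model.fit(train_ds, validation_data=val_ds, epochs=5, verbose=0)
    acc = max(history.history["val_accuracy"])
    if acc > best_acc:
        best_acc, best_model = acc, model
```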
Related papers
- Data Augmentation via Latent Diffusion for Saliency Prediction [67.88936624546076]
Saliency prediction models are constrained by the limited diversity and quantity of labeled data.
We propose a novel data augmentation method for deep saliency prediction that edits natural images while preserving the complexity and variability of real-world scenes.
arXiv Detail & Related papers (2024-09-11T14:36:24Z)
- Generating Diverse Agricultural Data for Vision-Based Farming Applications [74.79409721178489]
This model is capable of simulating distinct growth stages of plants, diverse soil conditions, and randomized field arrangements under varying lighting conditions.
Our dataset includes 12,000 images with semantic labels, offering a comprehensive resource for computer vision tasks in precision agriculture.
arXiv Detail & Related papers (2024-03-27T08:42:47Z)
- Transferring learned patterns from ground-based field imagery to predict UAV-based imagery for crop and weed semantic segmentation in precision crop farming [3.95486899327898]
We have developed a deep convolutional network that enables prediction on both ground-based field images and aerial UAV images for weed segmentation.
The network learning process is visualized by feature maps at shallow and deep layers.
The study shows that the developed deep convolutional neural network could be used to classify weeds from both field and aerial images.
arXiv Detail & Related papers (2022-10-20T19:25:06Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Enlisting 3D Crop Models and GANs for More Data Efficient and Generalizable Fruit Detection [0.0]
We propose a method that translates agricultural images from a synthetic 3D crop model domain into real-world crop domains.
The method uses a semantically constrained GAN (generative adversarial network) to preserve the fruit position and geometry.
Incremental training experiments in vineyard grape detection tasks show that the images generated by our method can significantly speed up the domain adaptation process.
arXiv Detail & Related papers (2021-08-30T16:11:59Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Fruit Quality and Defect Image Classification with Conditional GAN Data Augmentation [2.6424021470496672]
We suggest a machine learning pipeline that combines the ideas of fine-tuning, transfer learning, and generative model-based training data augmentation.
We find that appending a 4096-neuron fully connected layer to the convolutional layers leads to an image classification accuracy of 83.77%.
We then train a Conditional Generative Adversarial Network on the training data for 2000 epochs, and it learns to generate relatively realistic images.
arXiv Detail & Related papers (2021-04-12T17:13:05Z)
- A New Mask R-CNN Based Method for Improved Landslide Detection [54.7905160534631]
This paper presents a novel method of landslide detection by exploiting the Mask R-CNN capability of identifying an object layout.
A data set of 160 elements is created containing landslide and non-landslide images.
The proposed algorithm can be potentially useful for land use planners and policy makers of hilly areas.
arXiv Detail & Related papers (2020-10-04T07:46:37Z)
- Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.