Using depth information and colour space variations for improving
outdoor robustness for instance segmentation of cabbage
- URL: http://arxiv.org/abs/2103.16923v1
- Date: Wed, 31 Mar 2021 09:19:12 GMT
- Title: Using depth information and colour space variations for improving
outdoor robustness for instance segmentation of cabbage
- Authors: Nils Lüling, David Reiser, Alexander Stana, H.W. Griepentrog
- Abstract summary: This research focuses on improving instance segmentation of field crops under varying environmental conditions.
The influence of depth information and different colour space representations was analysed.
Results showed that depth combined with colour information leads to a segmentation accuracy increase of 7.1%.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image-based yield detection in agriculture could raise harvest efficiency and
cultivation performance of farms. Following this goal, this research focuses on
improving instance segmentation of field crops under varying environmental
conditions. Five data sets of cabbage plants were recorded under varying
lighting outdoor conditions. The images were acquired using a commercial mono
camera. Additionally, depth information was generated out of the image stream
with Structure-from-Motion (SfM). A Mask R-CNN was used to detect and segment
the cabbage heads. The influence of depth information and different colour
space representations was analysed. The results showed that depth combined
with colour information leads to a segmentation accuracy increase of 7.1%. By
describing colour in colour spaces that encode lightness and saturation,
combined with depth information, additional segmentation improvements of 16.5%
could be reached. The CIELAB colour space combined with a
depth information layer showed the best results, achieving a mean average
precision of 75.
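The abstract's best-performing input, CIELAB colour channels stacked with a depth layer, can be sketched in numpy. This is a hypothetical illustration, not the authors' published code: it converts sRGB to CIELAB with the standard sRGB/D65 formulas and appends a normalised depth channel; the function names are assumptions made for this sketch.

```python
import numpy as np

def srgb_to_cielab(rgb):
    """Convert an sRGB image (H, W, 3), values in [0, 1], to CIELAB (D65 white)."""
    # Linearize sRGB (inverse gamma).
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65 white point).
    m = np.array([[0.4124564, 0.3575761, 0.1804375],
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = lin @ m.T
    # Normalize by the D65 reference white.
    xyz = xyz / np.array([0.95047, 1.0, 1.08883])
    # XYZ -> Lab nonlinearity.
    eps, kappa = 216 / 24389, 24389 / 27
    f = np.where(xyz > eps, np.cbrt(xyz), (kappa * xyz + 16) / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def stack_lab_depth(rgb, depth):
    """Build a 4-channel (L, a, b, depth) array as segmentation-network input."""
    lab = srgb_to_cielab(rgb)
    # Scale the depth map (e.g. from SfM) to [0, 1] before stacking.
    d = (depth - depth.min()) / (np.ptp(depth) + 1e-8)
    return np.concatenate([lab, d[..., None]], axis=-1)

rgb = np.random.rand(4, 4, 3)
depth = np.random.rand(4, 4)
x = stack_lab_depth(rgb, depth)
print(x.shape)  # (4, 4, 4)
```

The resulting four-channel array would replace the plain RGB input of a Mask R-CNN backbone, whose first convolution must then accept four input channels.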
Related papers
- Depth-based Privileged Information for Boosting 3D Human Pose Estimation on RGB [48.31210455404533]
A heatmap-based 3D pose estimator learns to hallucinate depth information from the RGB frames given at inference time.
Depth information is used exclusively during training by enforcing the RGB-based hallucination network to learn features similar to a backbone pre-trained only on depth data.
arXiv Detail & Related papers (2024-09-17T11:59:34Z)
- Data Augmentation via Latent Diffusion for Saliency Prediction [67.88936624546076]
Saliency prediction models are constrained by the limited diversity and quantity of labeled data.
We propose a novel data augmentation method for deep saliency prediction that edits natural images while preserving the complexity and variability of real-world scenes.
arXiv Detail & Related papers (2024-09-11T14:36:24Z)
- Precision Agriculture: Crop Mapping using Machine Learning and Sentinel-2 Satellite Imagery [5.914742040076052]
This study employs deep learning and pixel-based machine learning methods to accurately segment lavender fields for precision agriculture.
Our fine-tuned final model, a U-Net architecture, can achieve a Dice coefficient of 0.8324.
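The Dice coefficient quoted here (and in the potato-crop entry further down) measures overlap between a predicted mask and a ground-truth mask. A minimal numpy illustration, not code from either paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice coefficient for binary masks: 2*|P & T| / (|P| + |T|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 0.667
```

A value of 1.0 means perfect overlap, so the 0.8324 reported above indicates that predicted and true lavender-field masks share most of their area.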
arXiv Detail & Related papers (2023-11-25T20:26:11Z)
- Depth Estimation from a Single Optical Encoded Image using a Learned Colored-Coded Aperture [18.830374973687416]
State-of-the-art approaches improve the discrimination between different depths by introducing a binary-coded aperture (CA) in the lens aperture.
Color-coded apertures (CCA) can also produce color misalignment in the captured image which can be utilized to estimate disparity.
We propose a CCA with a greater number of color filters and richer spectral information to optically encode relevant depth information in a single snapshot.
arXiv Detail & Related papers (2023-09-14T21:30:55Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous arts mainly focus on the low-light images captured in the visible spectrum using pixel-wise loss.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- Supervised learning for crop/weed classification based on color and texture features [0.0]
This paper investigates the use of color and texture features for discrimination of Soybean crops and weeds.
Experiments were carried out on an image dataset of a soybean crop, obtained from an unmanned aerial vehicle (UAV).
arXiv Detail & Related papers (2021-06-19T22:31:54Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- A Robust Illumination-Invariant Camera System for Agricultural Applications [7.349727826230863]
Object detection and semantic segmentation are two of the most widely adopted deep learning algorithms in agricultural applications.
We present a high throughput robust active lighting-based camera system that generates consistent images in all lighting conditions.
On average, deep nets for object detection trained on consistent data required nearly four times less data to achieve a similar level of accuracy.
arXiv Detail & Related papers (2021-01-06T18:50:53Z)
- Transfer Learning of Photometric Phenotypes in Agriculture Using Metadata [6.034634994180246]
Estimation of photometric plant phenotypes (e.g., hue, shine, chroma) in field conditions is important for decisions on expected yield quality, fruit ripeness, and the need for further breeding.
We combine the image with metadata about the capture conditions, embedded into a network, enabling more accurate estimation and transfer between different conditions.
Compared to a state-of-the-art deep CNN and a human expert, metadata embedding improves the estimation of the tomato's hue and chroma.
arXiv Detail & Related papers (2020-04-01T09:24:34Z)
- Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis [110.30849704592592]
We present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns.
Each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel.
We annotate nine types of field anomaly patterns that are most important to farmers.
arXiv Detail & Related papers (2020-01-05T20:19:33Z)