A Robust Illumination-Invariant Camera System for Agricultural
Applications
- URL: http://arxiv.org/abs/2101.02190v1
- Date: Wed, 6 Jan 2021 18:50:53 GMT
- Title: A Robust Illumination-Invariant Camera System for Agricultural
Applications
- Authors: Abhisesh Silwal, Tanvir Parhar, Francisco Yandun and George Kantor
- Abstract summary: Object detection and semantic segmentation are two of the most widely adopted deep learning algorithms in agricultural applications.
We present a high throughput robust active lighting-based camera system that generates consistent images in all lighting conditions.
On average, deep nets for object detection trained on consistent data required nearly four times less data to achieve a similar level of accuracy.
- Score: 7.349727826230863
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Object detection and semantic segmentation are two of the most widely adopted
deep learning algorithms in agricultural applications. One of the major sources
of variability in the quality of images acquired outdoors for such tasks is
changing lighting conditions, which can alter the appearance of objects or the
contents of the entire image. While transfer learning and data augmentation
reduce, to some extent, the need for large amounts of data to train deep neural
networks, the large variety of cultivars and the lack of shared datasets in
agriculture make wide-scale field deployments difficult. In this paper, we
present a high-throughput, robust, active-lighting-based camera system that
generates consistent images under all lighting conditions. We detail experiments
showing that this consistency in image quality means relatively fewer images
are needed to train deep neural networks for the task of object detection. We
further present results from a field experiment under extreme lighting
conditions, in which images captured without active lighting fail to yield
consistent results. The experimental results show that, on average, deep nets
for object detection trained on consistent data required nearly four times less
data to achieve a similar level of accuracy. The proposed work could provide
pragmatic solutions to computer vision needs in agriculture.
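As a rough illustration of the consistency the paper targets, illumination variability across a set of captured images could be quantified by comparing per-image brightness statistics. This is a minimal hypothetical metric for intuition, not the authors' method:

```python
import numpy as np

def brightness_stats(images):
    """Mean brightness of each image in a list of HxW grayscale arrays."""
    return np.array([img.mean() for img in images])

def lighting_consistency(images):
    """Standard deviation of per-image mean brightness.
    Lower values indicate more consistent illumination across the set."""
    return brightness_stats(images).std()

# Synthetic example: a "consistent" set vs. one with varying exposure.
rng = np.random.default_rng(0)
consistent = [np.full((4, 4), 120.0) + rng.normal(0, 1, (4, 4)) for _ in range(5)]
varying = [np.full((4, 4), b) for b in (40.0, 90.0, 140.0, 190.0, 240.0)]

assert lighting_consistency(consistent) < lighting_consistency(varying)
```

Under such a metric, an active-lighting system would aim to keep the spread near zero regardless of ambient conditions.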
Related papers
- LMHaze: Intensity-aware Image Dehazing with a Large-scale Multi-intensity Real Haze Dataset [14.141433473509826]
We present LMHaze, a large-scale, high-quality real-world dataset.
LMHaze comprises paired hazy and haze-free images captured in diverse indoor and outdoor environments.
To better handle images with different haze intensities, we propose a mixture-of-experts model based on Mamba.
arXiv Detail & Related papers (2024-10-21T15:20:02Z)
- Semi-Self-Supervised Domain Adaptation: Developing Deep Learning Models with Limited Annotated Data for Wheat Head Segmentation [0.10923877073891444]
We introduce a semi-self-supervised domain adaptation technique based on deep convolutional neural networks with a probabilistic diffusion process.
We develop a two-branch convolutional encoder-decoder model architecture that uses both synthesized image-mask pairs and unannotated images.
The proposed model achieved a Dice score of 80.7% on an internal test dataset and a Dice score of 64.8% on an external test set.
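The Dice score reported above is a standard overlap metric for binary segmentation masks. A minimal NumPy sketch (not the paper's code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Identical masks score ~1.0; partial overlap scores in between.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(dice_score(a, a))  # ~1.0
print(dice_score(a, b))  # 2*1 / (2+1) ~= 0.667
```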
arXiv Detail & Related papers (2024-05-12T04:35:49Z)
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Leveraging Neural Radiance Fields for Uncertainty-Aware Visual Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, many of the rendered images are polluted by artifacts or contain only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light-source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Does Thermal data make the detection systems more reliable? [1.2891210250935146]
We propose a comprehensive detection system based on a multimodal-collaborative framework.
This framework learns from both RGB (from visual cameras) and thermal (from Infrared cameras) data.
Our empirical results show that while the improvement in accuracy is nominal, the value lies in challenging and extremely difficult edge cases.
arXiv Detail & Related papers (2021-11-09T15:04:34Z)
- Exploring Low-light Object Detection Techniques [0.456877715768796]
We examine which image enhancement algorithms are better suited for object detection tasks.
Specifically, we look at basic histogram equalization techniques and unpaired image translation techniques.
We conclude by comparing all results, calculating mean average precisions (mAP) and giving some directions for future work.
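Of the enhancement techniques that paper compares, basic histogram equalization can be sketched in a few lines. This is a plain NumPy version for illustration (libraries such as OpenCV provide `cv2.equalizeHist`):

```python
import numpy as np

def equalize_hist(img):
    """Histogram-equalize an 8-bit grayscale image via its CDF.
    Assumes the image is not constant (nonzero intensity range)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()  # first nonzero CDF value
    # Map intensities so the output histogram is approximately uniform.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# A dark, low-contrast image gets stretched across the full 0-255 range.
dark = np.arange(16, dtype=np.uint8).reshape(4, 4)  # values 0..15
eq = equalize_hist(dark)
print(eq.min(), eq.max())  # 0 255
```

Such global contrast stretching is the simplest baseline against unpaired image-translation approaches.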
arXiv Detail & Related papers (2021-07-30T01:11:11Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Stereo Matching by Self-supervision of Multiscopic Vision [65.38359887232025]
We propose a new self-supervised framework for stereo matching utilizing multiple images captured at aligned camera positions.
A cross photometric loss, an uncertainty-aware mutual-supervision loss, and a new smoothness loss are introduced to optimize the network.
Our model obtains better disparity maps than previous unsupervised methods on the KITTI dataset.
arXiv Detail & Related papers (2021-04-09T02:58:59Z)
- Factors of Influence for Transfer Learning across Diverse Appearance Domains and Task Types [50.1843146606122]
A simple form of transfer learning is common in current state-of-the-art computer vision models.
Previous systematic studies of transfer learning have been limited and the circumstances in which it is expected to work are not fully understood.
In this paper we carry out an extensive experimental exploration of transfer learning across vastly different image domains.
arXiv Detail & Related papers (2021-03-24T16:24:20Z)
- Image Augmentation for Multitask Few-Shot Learning: Agricultural Domain Use-Case [0.0]
This paper addresses the challenge of small and imbalanced datasets, using the plant phenomics domain as an example.
We introduce an image augmentation framework that enables us to greatly enlarge the number of training samples.
We prove that our augmentation method increases model performance when only a few training samples are available.
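Geometric augmentation of the kind such frameworks build on can be illustrated with flips and rotations applied jointly to an image and its mask (a generic sketch, not that paper's framework):

```python
import numpy as np

def augment(image, mask):
    """Yield flipped/rotated copies of an image and its mask together,
    multiplying one training sample into eight (the dihedral group D4)."""
    for k in range(4):  # 0, 90, 180, 270 degree rotations
        rot_img, rot_mask = np.rot90(image, k), np.rot90(mask, k)
        yield rot_img, rot_mask
        yield np.fliplr(rot_img), np.fliplr(rot_mask)

img = np.arange(9).reshape(3, 3)
mask = (img > 4).astype(np.uint8)
pairs = list(augment(img, mask))
print(len(pairs))  # 8 augmented image/mask pairs from one sample
```

Transforming the mask in lockstep with the image is what keeps pixel-level labels valid, which matters for the multitask (segmentation) setting described above.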
arXiv Detail & Related papers (2021-02-24T14:08:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.