RB-Dust -- A Reference-based Dataset for Vision-based Dust Removal
- URL: http://arxiv.org/abs/2306.07244v1
- Date: Mon, 12 Jun 2023 17:09:24 GMT
- Title: RB-Dust -- A Reference-based Dataset for Vision-based Dust Removal
- Authors: Peter Buckel, Timo Oksanen, Thomas Dietmueller
- Abstract summary: We present the agriscapes RB-Dust dataset, which is named after its purpose of reference-based dust removal.
It is not possible to take paired images from the tractor cabin during tillage, as the machine's motion would cause shifts between them.
We validated our dataset with contrast enhancement and image dehazing algorithms and analyzed its generalizability using recordings from the moving tractor.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dust in the agricultural landscape is a significant challenge and influences,
for example, the environmental perception of autonomous agricultural machines.
Image enhancement algorithms can be used to reduce dust. However, these require
dusty and dust-free images of the same environment for validation. To date, we
are not aware of any dataset that addresses this issue.
Therefore, we present the agriscapes RB-Dust dataset, which is named after its
purpose of reference-based dust removal. It is not possible to take paired
images from the tractor cabin during tillage, as the machine's motion would
cause shifts between them.
Because of this, we built a setup from which it is possible to take images from
a stationary position close to the passing tractor. The test setup was based on
a half-sided gate through which the tractor could drive. The field tests were
carried out on a farm in Bavaria, Germany, during tillage. During the field
tests, other parameters such as soil moisture and wind speed were controlled,
as these significantly affect dust development. We validated our dataset with
contrast enhancement and image dehazing algorithms and analyzed its
generalizability using recordings from the moving tractor. Finally, we
demonstrate the application of dust removal to a high-level vision task,
namely person classification. Our empirical study confirms the validity of
RB-Dust for vision-based dust removal in agriculture.
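Because the dataset provides aligned dusty and dust-free image pairs, enhancement algorithms can be scored directly against the reference. The following is a minimal sketch of such a reference-based check, using CLAHE contrast enhancement as a stand-in for the algorithms evaluated in the paper; the file names and parameters are illustrative assumptions, not part of the dataset.

```python
# Minimal sketch (not from the paper): reference-based scoring of one
# dust-removal step on an aligned dusty / dust-free image pair.
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

dusty = cv2.imread("dusty.png")           # frame captured while the tractor raises dust
reference = cv2.imread("dust_free.png")   # aligned dust-free frame from the same position

# Contrast enhancement on the luminance channel (CLAHE), one of the
# algorithm families the paper uses for validation.
lab = cv2.cvtColor(dusty, cv2.COLOR_BGR2LAB)
l, a, b = cv2.split(lab)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# Reference-based metrics: compare the enhanced image against the dust-free frame.
psnr = peak_signal_noise_ratio(reference, enhanced)
ssim = structural_similarity(reference, enhanced, channel_axis=2)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```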
Related papers
- Investigating the Segment Anything Foundation Model for Mapping Smallholder Agriculture Field Boundaries Without Training Labels [0.24966046892475396]
This study explores the Segment Anything Model (SAM) to delineate agricultural field boundaries in Bihar, India.
We evaluate SAM's performance across three model checkpoints, various input sizes, multi-date satellite images, and edge-enhanced imagery.
Using different input image sizes improves accuracy, with the most significant improvement observed when using multi-date satellite images.
arXiv Detail & Related papers (2024-07-01T23:06:02Z)
- Detecting and Refining HiRISE Image Patches Obscured by Atmospheric Dust [0.0]
Mars suffers from frequent regional and local dust storms hampering this data-collection process.
Removing these images manually requires a large amount of manpower.
I design a pipeline that classifies and stores these dusty patches.
arXiv Detail & Related papers (2024-05-08T00:03:23Z)
- CarPatch: A Synthetic Benchmark for Radiance Field Evaluation on Vehicle Components [77.33782775860028]
We introduce CarPatch, a novel synthetic benchmark of vehicles.
In addition to a set of images annotated with their intrinsic and extrinsic camera parameters, the corresponding depth maps and semantic segmentation masks have been generated for each view.
Global and part-based metrics have been defined and used to evaluate, compare, and better characterize some state-of-the-art techniques.
arXiv Detail & Related papers (2023-07-24T11:59:07Z)
- Transferring learned patterns from ground-based field imagery to predict UAV-based imagery for crop and weed semantic segmentation in precision crop farming [3.95486899327898]
We have developed a deep convolutional network that can segment weeds in both ground-based field images and aerial UAV images.
The network learning process is visualized by feature maps at shallow and deep layers.
The study shows that the developed deep convolutional neural network could be used to classify weeds from both field and aerial images.
arXiv Detail & Related papers (2022-10-20T19:25:06Z)
- End-to-end deep learning for directly estimating grape yield from ground-based imagery [53.086864957064876]
This study demonstrates the application of proximal imaging combined with deep learning for yield estimation in vineyards.
Three model architectures were tested: object detection, CNN regression, and transformer models.
The study showed the applicability of proximal imaging and deep learning for prediction of grapevine yield on a large scale.
arXiv Detail & Related papers (2022-08-04T01:34:46Z)
- Deep Learning Eliminates Massive Dust Storms from Images of Tianwen-1 [24.25089331365282]
We propose an approach that reuses the image dehazing knowledge obtained on Earth to resolve the dust-removal problem on Mars.
Inspired by the haze formation process on Earth, we formulate a similar visual degradation process on clean images.
We train a deep model that inherently encodes dust-irrelevant features and decodes them into dust-free images (a sketch of the haze-style degradation model appears after this list).
arXiv Detail & Related papers (2022-06-21T07:05:09Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- Unsupervised domain adaptation and super resolution on drone images for autonomous dry herbage biomass estimation [14.666311628659072]
Herbage mass yield and composition estimation is an important tool for dairy farmers.
Deep learning algorithms offer a tempting alternative to the usual means of sward composition estimation.
This paper proposes to transfer knowledge learned on ground-level images to raw drone images in an unsupervised manner.
arXiv Detail & Related papers (2022-04-18T12:11:15Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis [110.30849704592592]
We present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns.
Each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel.
We annotate nine types of field anomaly patterns that are most important to farmers.
arXiv Detail & Related papers (2020-01-05T20:19:33Z)
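As a worked example for the Tianwen-1 entry above: the haze-inspired degradation it describes is commonly expressed with the standard atmospheric scattering model, I(x) = J(x) t(x) + A (1 - t(x)) with t(x) = exp(-beta d(x)). The sketch below synthesizes a dust-like image from a clean one under assumed depth, scattering coefficient, and airlight values; none of these settings come from that paper, and the synthetic pairs are only an illustration of what an encoder-decoder dust-removal model could be trained on.

```python
# Minimal sketch (assumed settings): dust-like degradation of a clean image with
# the atmospheric scattering model I(x) = J(x)*t(x) + A*(1 - t(x)).
import numpy as np
import cv2

clean = cv2.imread("clean.png").astype(np.float32) / 255.0   # J(x), clean scene
h, w = clean.shape[:2]

# Without a real depth map, assume depth increases towards the top of the frame.
depth = np.tile(np.linspace(1.0, 0.2, h)[:, None], (1, w))   # d(x), arbitrary units
beta = 1.2                                                    # assumed scattering coefficient
transmission = np.exp(-beta * depth)[..., None]               # t(x)
airlight = np.array([0.75, 0.78, 0.82], dtype=np.float32)     # A, dust-coloured airlight

dusty = clean * transmission + airlight * (1.0 - transmission)  # I(x)
cv2.imwrite("synthetic_dusty.png", (dusty * 255).clip(0, 255).astype(np.uint8))
```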