Multimodal Dataset for Localization, Mapping and Crop Monitoring in Citrus Tree Farms
- URL: http://arxiv.org/abs/2309.15332v2
- Date: Fri, 29 Sep 2023 01:43:49 GMT
- Title: Multimodal Dataset for Localization, Mapping and Crop Monitoring in Citrus Tree Farms
- Authors: Hanzhe Teng, Yipeng Wang, Xiaoao Song, Konstantinos Karydis
- Abstract summary: The dataset offers stereo RGB images with depth information, as well as monochrome, near-infrared and thermal images.
The dataset comprises seven sequences collected in three fields of citrus trees.
It spans a total operation time of 1.7 hours, covers a distance of 7.5 km, and constitutes 1.3 TB of data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we introduce the CitrusFarm dataset, a comprehensive multimodal
sensory dataset collected by a wheeled mobile robot operating in agricultural
fields. The dataset offers stereo RGB images with depth information, as well as
monochrome, near-infrared and thermal images, presenting diverse spectral
responses crucial for agricultural research. Furthermore, it provides a range
of navigational sensor data encompassing wheel odometry, LiDAR, inertial
measurement unit (IMU), and GNSS with Real-Time Kinematic (RTK) as the
centimeter-level positioning ground truth. The dataset comprises seven
sequences collected in three fields of citrus trees, featuring various tree
species at different growth stages, distinctive planting patterns, as well as
varying daylight conditions. It spans a total operation time of 1.7 hours,
covers a distance of 7.5 km, and constitutes 1.3 TB of data. We anticipate that
this dataset can facilitate the development of autonomous robot systems
operating in agricultural tree environments, especially for localization,
mapping and crop monitoring tasks. Moreover, the rich sensing modalities
offered in this dataset can also support research in a range of robotics and
computer vision tasks, such as place recognition, scene understanding, object
detection and segmentation, and multimodal learning. The dataset, in
conjunction with related tools and resources, is made publicly available at
https://github.com/UCR-Robotics/Citrus-Farm-Dataset.
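The totals quoted in the abstract (7.5 km traveled in 1.7 hours of operation, producing 1.3 TB of data) imply the robot's average speed and the aggregate sensor data rate. A quick back-of-envelope sketch, using only those quoted figures:

```python
# Figures quoted in the CitrusFarm abstract:
# 1.7 h total operation, 7.5 km traveled, 1.3 TB of data.
HOURS = 1.7
DISTANCE_KM = 7.5
DATA_TB = 1.3

seconds = HOURS * 3600
avg_speed_m_s = DISTANCE_KM * 1000 / seconds   # mean robot speed over all sequences
data_rate_mb_s = DATA_TB * 1e6 / seconds       # aggregate sensor data rate (decimal MB/s)

print(f"average speed: {avg_speed_m_s:.2f} m/s")   # ~1.23 m/s, a typical field-robot pace
print(f"data rate: {data_rate_mb_s:.0f} MB/s")     # ~212 MB/s across all modalities
```

These are averages over the whole collection, so per-sequence rates will vary with which sensors were active.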
Related papers
- M3LEO: A Multi-Modal, Multi-Label Earth Observation Dataset Integrating Interferometric SAR and Multispectral Data [1.4053129774629076]
M3LEO is a multi-modal, multi-label Earth observation dataset.
It spans approximately 17M 4x4 km data chips from six diverse geographic regions.
arXiv Detail & Related papers (2024-06-06T16:30:41Z)
- VBR: A Vision Benchmark in Rome [1.71787484850503]
This paper presents a vision and perception research dataset collected in Rome, featuring RGB data, 3D point clouds, IMU, and GPS data.
We introduce a new benchmark targeting visual odometry and SLAM, to advance the research in autonomous robotics and computer vision.
arXiv Detail & Related papers (2024-04-17T12:34:49Z)
- MTMMC: A Large-Scale Real-World Multi-Modal Camera Tracking Benchmark [63.878793340338035]
Multi-target multi-camera tracking is a crucial task that involves identifying and tracking individuals over time using video streams from multiple cameras.
Existing datasets for this task are either synthetically generated or artificially constructed within a controlled camera network setting.
We present MTMMC, a real-world, large-scale dataset that includes long video sequences captured by 16 multi-modal cameras in two different environments.
arXiv Detail & Related papers (2024-03-29T15:08:37Z)
- FoMo-Bench: a multi-modal, multi-scale and multi-task Forest Monitoring Benchmark for remote sensing foundation models [24.141443217910986]
We present the first unified Forest Monitoring Benchmark (FoMo-Bench).
FoMo-Bench consists of 15 diverse datasets encompassing satellite, aerial, and inventory data.
To further enhance the diversity of tasks and geographies represented in FoMo-Bench, we introduce a novel global dataset, TalloS.
arXiv Detail & Related papers (2023-12-15T09:49:21Z)
- Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z)
- Extended Agriculture-Vision: An Extension of a Large Aerial Image Dataset for Agricultural Pattern Analysis [11.133807938044804]
We release an improved version of the Agriculture-Vision dataset (Chiu et al., 2020b)
We extend this dataset with the release of 3600 large, high-resolution (10cm/pixel), full-field, red-green-blue and near-infrared images for pre-training.
We demonstrate the usefulness of this data by benchmarking different contrastive learning approaches on both downstream classification and semantic segmentation tasks.
arXiv Detail & Related papers (2023-03-04T17:35:24Z)
- Argoverse 2: Next Generation Datasets for Self-Driving Perception and Forecasting [64.7364925689825]
Argoverse 2 (AV2) is a collection of three datasets for perception and forecasting research in the self-driving domain.
The Lidar dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose.
The Motion Forecasting dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene.
arXiv Detail & Related papers (2023-01-02T00:36:22Z)
- Semantic Segmentation of Vegetation in Remote Sensing Imagery Using Deep Learning [77.34726150561087]
We propose an approach for creating a multi-modal and large-temporal dataset comprised of publicly available Remote Sensing data.
We use Convolutional Neural Networks (CNN) models that are capable of separating different classes of vegetation.
arXiv Detail & Related papers (2022-09-28T18:51:59Z)
- TRoVE: Transforming Road Scene Datasets into Photorealistic Virtual Environments [84.6017003787244]
This work proposes a synthetic data generation pipeline to address the difficulties and domain-gaps present in simulated datasets.
We show that using annotations and visual cues from existing datasets, we can facilitate automated multi-modal data generation.
arXiv Detail & Related papers (2022-08-16T20:46:08Z)
- A Survey on RGB-D Datasets [69.73803123972297]
This paper reviewed and categorized image datasets that include depth information.
We gathered 203 datasets that contain accessible data and grouped them into three categories: scene/objects, body, and medical.
arXiv Detail & Related papers (2022-01-15T05:35:19Z)
- Agricultural Plant Cataloging and Establishment of a Data Framework from UAV-based Crop Images by Computer Vision [4.0382342610484425]
We present a hands-on workflow for the automatized temporal and spatial identification and individualization of crop images from UAVs.
The presented approach improves analysis and interpretation of UAV data in agriculture significantly.
arXiv Detail & Related papers (2022-01-08T21:14:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.