Learned Visual Navigation for Under-Canopy Agricultural Robots
- URL: http://arxiv.org/abs/2107.02792v1
- Date: Tue, 6 Jul 2021 17:59:02 GMT
- Title: Learned Visual Navigation for Under-Canopy Agricultural Robots
- Authors: Arun Narenthiran Sivakumar and Sahil Modi and Mateus Valverde
Gasparino and Che Ellis and Andres Eduardo Baquero Velasquez and Girish
Chowdhary and Saurabh Gupta
- Abstract summary: We describe a system for visually guided autonomous navigation of under-canopy farm robots.
Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average.
- Score: 9.863749490361338
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We describe a system for visually guided autonomous navigation of
under-canopy farm robots. Low-cost under-canopy robots can drive between crop
rows under the plant canopy and accomplish tasks that are infeasible for
over-the-canopy drones or larger agricultural equipment. However, autonomously
navigating them under the canopy presents a number of challenges: unreliable
GPS and LiDAR, high cost of sensing, challenging farm terrain, clutter due to
leaves and weeds, and large variability in appearance over the season and
across crop types. We address these challenges by building a modular system
that leverages machine learning for robust and generalizable perception from
monocular RGB images from low-cost cameras, and model predictive control for
accurate control in challenging terrain. Our system, CropFollow, is able to
autonomously drive 485 meters per intervention on average, outperforming a
state-of-the-art LiDAR based system (286 meters per intervention) in extensive
field testing spanning over 25 km.
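
The abstract's core design choice, a modular split between learned perception on monocular RGB and model predictive control, can be illustrated with a short Python sketch. Everything below is an illustrative assumption rather than the authors' implementation: the perceive() stub stands in for the learned network (which in the paper infers the robot's pose relative to the crop row from an RGB frame), the vehicle is reduced to a unicycle model, and the controller is a simple random-shooting MPC instead of the paper's formulation.

import numpy as np

def perceive(rgb_frame):
    """Stand-in for the learned perception module.
    In CropFollow this is a network operating on a monocular RGB frame; here we
    simply return a dummy (heading_error [rad], lateral_offset [m]) estimate."""
    return 0.05, -0.10

def rollout(state, v, omegas, dt=0.1):
    """Propagate a unicycle model (x, y, theta) at constant speed v under a
    sequence of angular-velocity commands."""
    x, y, th = state
    traj = []
    for w in omegas:
        th += w * dt
        x += v * np.cos(th) * dt
        y += v * np.sin(th) * dt
        traj.append((x, y, th))
    return traj

def mpc_steer(heading_err, lateral_off, v=0.5, horizon=10, samples=256, seed=0):
    """Random-shooting MPC sketch: sample angular-velocity sequences, score each
    rollout by deviation from the row centerline (y = 0, theta = 0) plus a small
    effort penalty, and return the first command of the best sequence."""
    rng = np.random.default_rng(seed)
    state0 = (0.0, lateral_off, heading_err)  # robot starts off-center, misaligned
    candidates = rng.uniform(-0.8, 0.8, size=(samples, horizon))
    best_cost, best_omega = np.inf, 0.0
    for omegas in candidates:
        cost = sum(y ** 2 + 0.1 * th ** 2 for _, y, th in rollout(state0, v, omegas))
        cost += 0.01 * float(np.sum(omegas ** 2))  # discourage aggressive steering
        if cost < best_cost:
            best_cost, best_omega = cost, omegas[0]
    return best_omega

if __name__ == "__main__":
    heading_err, lateral_off = perceive(rgb_frame=None)
    print("angular-velocity command [rad/s]:", mpc_steer(heading_err, lateral_off))

The receding-horizon pattern, re-optimizing at every frame and applying only the first command, is what lets noisy per-frame perception still yield smooth row following; the random-shooting optimizer is used here only to keep the sketch dependency-free.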
Related papers
- Plantation Monitoring Using Drone Images: A Dataset and Performance Review [2.4936576553283287]
Small, low-cost drones equipped with an RGB camera can capture high-resolution images of agricultural fields.
Existing methods of automated plantation monitoring are mostly based on satellite images.
We propose an automated system for plantation health monitoring using drone images.
arXiv Detail & Related papers (2025-02-12T09:21:16Z)
- Lessons from Deploying CropFollow++: Under-Canopy Agricultural Navigation with Keypoints [4.825377557319356]
We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints.
Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation.
arXiv Detail & Related papers (2024-04-26T22:46:17Z)
- Multi-model fusion for Aerial Vision and Dialog Navigation based on human attention aids [69.98258892165767]
We present an approach to an aerial navigation task based on conversation history for a 2023 ICCV challenge.
We propose an effective method of fusion training of Human Attention Aided Transformer model (HAA-Transformer) and Human Attention Aided LSTM (HAA-LSTM) models.
arXiv Detail & Related papers (2023-08-27T10:32:52Z)
- Fast Traversability Estimation for Wild Visual Navigation [17.015268056925745]
We propose Wild Visual Navigation (WVN), an online self-supervised learning system for traversability estimation.
The system is able to continuously adapt from a short human demonstration in the field.
We demonstrate the advantages of our approach with experiments and ablation studies in challenging environments in forests, parks, and grasslands.
arXiv Detail & Related papers (2023-05-15T10:19:30Z)
- Ultra-low Power Deep Learning-based Monocular Relative Localization Onboard Nano-quadrotors [64.68349896377629]
This work presents a novel autonomous end-to-end system that addresses the monocular relative localization, through deep neural networks (DNNs), of two peer nano-drones.
To cope with the ultra-constrained nano-drone platform, we propose a vertically-integrated framework, including dataset augmentation, quantization, and system optimizations.
Experimental results show that our DNN can precisely localize a 10 cm target nano-drone at distances of up to 2 m using only low-resolution monochrome images.
arXiv Detail & Related papers (2023-03-03T14:14:08Z)
- How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle Traversability [7.305104984234086]
Estimating terrain traversability in off-road environments requires reasoning about complex interaction dynamics between the robot and these terrains.
We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback.
arXiv Detail & Related papers (2022-09-22T05:18:35Z)
- VPAIR -- Aerial Visual Place Recognition and Localization in Large-scale Outdoor Environments [49.82314641876602]
We present a new dataset named VPAIR.
The dataset was recorded on board a light aircraft flying at an altitude of more than 300 meters above ground.
The dataset covers a trajectory more than one hundred kilometers long over various types of challenging landscapes.
arXiv Detail & Related papers (2022-05-23T18:50:08Z)
- Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z)
- Towards Autonomous Crop-Agnostic Visual Navigation in Arable Fields [2.6323812778809907]
We introduce a vision-based navigation scheme which is able to reliably guide the robot through row-crop fields.
With the help of a novel crop-row detection and a novel crop-row switching technique, our navigation scheme can be deployed in a wide range of fields.
arXiv Detail & Related papers (2021-09-24T12:54:42Z)
- Depth Sensing Beyond LiDAR Range [84.19507822574568]
We propose a novel three-camera system that utilizes small field-of-view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z)
- Active Perception with A Monocular Camera for Multiscopic Vision [50.370074098619185]
We design a multiscopic vision system that utilizes a low-cost monocular RGB camera to acquire accurate depth estimation for robotic applications.
Unlike multi-view stereo with images captured at unconstrained camera poses, the proposed system actively controls a robot arm with a mounted camera to capture a sequence of images in horizontally or vertically aligned positions with the same parallax.
arXiv Detail & Related papers (2020-01-22T08:46:45Z)
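
The multiscopic-vision entry above relies on depth from controlled parallax: once the arm translates the camera by a known baseline B, matching a pixel across two frames yields a disparity d, and depth follows as Z = f * B / d. The Python sketch below is a minimal, hypothetical illustration of that geometry using a brute-force SAD block matcher; it is not the paper's algorithm, and the function names and synthetic test are assumptions.

import numpy as np

def block_match_disparity(left, right, patch=5, max_disp=32):
    """Brute-force SAD block matching along horizontal scanlines.
    Assumes the second view is translated to the right by the baseline, so a
    feature at column x in `left` appears near column x - d in `right`."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            errs = [np.abs(ref - right[y - half:y + half + 1,
                                       x - d - half:x - d + half + 1]).sum()
                    for d in range(max_disp)]
            disp[y, x] = float(np.argmin(errs))
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Z = f * B / d; zero-disparity pixels are masked as invalid (NaN)."""
    with np.errstate(divide="ignore"):
        z = focal_px * baseline_m / disp
    z[disp == 0] = np.nan
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((40, 80)).astype(np.float32)
    right = np.roll(left, -4, axis=1)  # synthetic second view with a 4-pixel disparity
    disp = block_match_disparity(left, right)
    depth = depth_from_disparity(disp, focal_px=400.0, baseline_m=0.05)
    print("median depth [m]:", np.nanmedian(depth))  # 400 * 0.05 / 4 = 5.0 m

The paper's system captures a sequence of aligned views under active arm control; the sketch only shows the two-view triangulation such setups build on.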