Lessons from Deploying CropFollow++: Under-Canopy Agricultural Navigation with Keypoints
- URL: http://arxiv.org/abs/2404.17718v1
- Date: Fri, 26 Apr 2024 22:46:17 GMT
- Title: Lessons from Deploying CropFollow++: Under-Canopy Agricultural Navigation with Keypoints
- Authors: Arun N. Sivakumar, Mateus V. Gasparino, Michael McGuire, Vitor A. H. Higuti, M. Ugur Akcal, Girish Chowdhary
- Abstract summary: We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints.
Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation.
- Score: 4.825377557319356
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints. Autonomous under-canopy navigation is challenging due to the tight spacing between the crop rows ($\sim 0.75$ m), degradation in RTK-GPS accuracy due to multipath error, and noise in LiDAR measurements from excessive clutter. Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation. We deployed CropFollow++ on multiple under-canopy cover-crop planting robots at large scale (25 km in total) in varied field conditions, and we discuss the key lessons learned from these deployments.
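The perception module of CropFollow++ predicts semantic keypoints from the onboard camera, from which heading and lateral offset within the row can be estimated and passed to a controller. The snippet below is a minimal sketch of that keypoint-to-steering step, assuming illustrative keypoint definitions (vanishing point plus left/right row bases) and a simple proportional control law; the exact keypoint parameterization and gains here are assumptions, and the paper itself feeds such estimates into a model-predictive controller rather than this toy law.

```python
import numpy as np

def keypoints_to_steering(vp, left_base, right_base, img_w,
                          k_heading=1.0, k_lateral=0.5):
    """Turn three semantic keypoints (pixel coordinates) into a steering command.

    vp         -- (x, y) vanishing point of the crop row ahead
    left_base  -- (x, y) where the left crop row meets the bottom of the image
    right_base -- (x, y) where the right crop row meets the bottom of the image

    The keypoint definitions and gains are illustrative assumptions.
    """
    cx = img_w / 2.0

    # Heading error: horizontal offset of the vanishing point from the image
    # centre, normalised to [-1, 1]; positive means the row recedes to the right.
    heading_err = (vp[0] - cx) / cx

    # Lateral error: offset of the robot from the midpoint of the two rows,
    # normalised by the row spacing in pixels.
    row_mid = 0.5 * (left_base[0] + right_base[0])
    row_width = max(right_base[0] - left_base[0], 1e-6)
    lateral_err = (row_mid - cx) / row_width

    # Simple proportional law for illustration; CropFollow++ feeds such
    # estimates into a model-predictive controller instead.
    angular_vel = -(k_heading * heading_err + k_lateral * lateral_err)
    return float(np.clip(angular_vel, -1.0, 1.0))


# Example: vanishing point slightly right of centre, rows roughly symmetric.
cmd = keypoints_to_steering(vp=(340, 180), left_base=(80, 480),
                            right_base=(560, 480), img_w=640)
print(f"angular velocity command: {cmd:+.3f}")
```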
Related papers
- MetaCropFollow: Few-Shot Adaptation with Meta-Learning for Under-Canopy Navigation [4.923031976899536]
Keypoint-based visual navigation has been shown to perform well in under-canopy environments.
We train a base-learner that can quickly adapt to new conditions, enabling more robust navigation in low-data regimes.
arXiv Detail & Related papers (2024-11-21T12:58:09Z) - AdaCropFollow: Self-Supervised Online Adaptation for Visual Under-Canopy Navigation [31.214318150001947]
Under-canopy agricultural robots can enable various applications such as precise monitoring, spraying, weeding, and plant manipulation.
We propose a self-supervised online adaptation method for adapting the semantic keypoint representation using a visual foundation model, a geometric prior, and pseudo-labeling (a minimal sketch of this idea appears after this list).
This can enable fully autonomous row-following in under-canopy robots across fields and crops without requiring human intervention.
arXiv Detail & Related papers (2024-10-16T09:52:38Z) - RoboSense: Large-scale Dataset and Benchmark for Egocentric Robot Perception and Navigation in Crowded and Unstructured Environments [62.5830455357187]
We set up an egocentric multi-sensor data collection platform based on three main types of sensors (camera, LiDAR, and fisheye).
A large-scale multimodal dataset is constructed, named RoboSense, to facilitate egocentric robot perception.
arXiv Detail & Related papers (2024-08-28T03:17:40Z) - Hyp2Nav: Hyperbolic Planning and Curiosity for Crowd Navigation [58.574464340559466]
We advocate for hyperbolic learning to enable crowd navigation and we introduce Hyp2Nav.
Hyp2Nav leverages the intrinsic properties of hyperbolic geometry to better encode the hierarchical nature of decision-making processes in navigation tasks.
We propose a hyperbolic policy model and a hyperbolic curiosity module that yield effective social navigation and the best success rates and returns across multiple simulation settings.
arXiv Detail & Related papers (2024-07-18T14:40:33Z) - Control Transformer: Robot Navigation in Unknown Environments through PRM-Guided Return-Conditioned Sequence Modeling [0.0]
We propose Control Transformer that models return-conditioned sequences from low-level policies guided by a sampling-based Probabilistic Roadmap planner.
We show that Control Transformer can successfully navigate through mazes and transfer to unknown environments.
arXiv Detail & Related papers (2022-11-11T18:44:41Z) - Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z) - Polyline Based Generative Navigable Space Segmentation for Autonomous Visual Navigation [57.3062528453841]
We propose a representation-learning-based framework to enable robots to learn the navigable space segmentation in an unsupervised manner.
We show that the proposed PSV-Nets can learn the visual navigable space with high accuracy, even without a single label.
arXiv Detail & Related papers (2021-10-29T19:50:48Z) - Towards Autonomous Crop-Agnostic Visual Navigation in Arable Fields [2.6323812778809907]
We introduce a vision-based navigation scheme that reliably guides the robot through row-crop fields.
With the help of novel crop-row detection and crop-row switching techniques, our navigation scheme can be deployed in a wide range of fields.
arXiv Detail & Related papers (2021-09-24T12:54:42Z) - Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense Forest Canopy [48.51396198176273]
We propose an integrated system that can perform large-scale autonomous flights and real-time semantic mapping in challenging under-canopy environments.
We detect and model tree trunks and ground planes from LiDAR data, which are associated across scans and used to constrain robot poses as well as tree trunk models.
A drift-compensation mechanism is designed to minimize the odometry drift using semantic SLAM outputs in real time, while maintaining planner optimality and controller stability.
arXiv Detail & Related papers (2021-09-14T07:24:53Z) - Learned Visual Navigation for Under-Canopy Agricultural Robots [9.863749490361338]
We describe a system for visually guided autonomous navigation of under-canopy farm robots.
Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average.
arXiv Detail & Related papers (2021-07-06T17:59:02Z) - BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
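The AdaCropFollow entry above describes self-supervised online adaptation of the semantic keypoint representation using a visual foundation model, a geometric prior, and pseudo-labeling. Below is a minimal, self-contained sketch of that general teacher-student pseudo-labeling loop with an illustrative geometric plausibility check; the toy KeypointNet, the specific prior, and the EMA teacher update are assumptions made for illustration, not the authors' implementation.

```python
import copy
import torch
import torch.nn.functional as F

# Hypothetical keypoint regressor: predicts 3 (x, y) keypoints in normalised
# image coordinates from an RGB frame. The architecture is a stand-in.
class KeypointNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, 5, stride=4), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
            torch.nn.Linear(16, 6),
        )

    def forward(self, x):
        return self.backbone(x).view(-1, 3, 2)  # (B, 3 keypoints, xy)


def geometric_prior_ok(kps, max_vp_offset=0.3):
    """Cheap plausibility check: the vanishing point (keypoint 0) should sit
    near the horizontal centre and above the two row-base keypoints."""
    vp, left, right = kps[0], kps[1], kps[2]
    return bool(abs(vp[0] - 0.5) < max_vp_offset) and bool(vp[1] < left[1]) and bool(vp[1] < right[1])


student = KeypointNet()
teacher = copy.deepcopy(student)           # slow-moving teacher copy
opt = torch.optim.SGD(student.parameters(), lr=1e-3)
ema_decay = 0.99

for step in range(5):                      # stand-in for the incoming video stream
    frame = torch.rand(1, 3, 224, 224)

    with torch.no_grad():
        pseudo = teacher(frame)            # pseudo-label from the teacher

    if not geometric_prior_ok(pseudo[0]):
        continue                           # reject implausible pseudo-labels

    loss = F.mse_loss(student(frame), pseudo)
    opt.zero_grad()
    loss.backward()
    opt.step()

    # EMA update keeps the teacher a smoothed copy of the student.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(ema_decay).add_(s, alpha=1 - ema_decay)
```

In a real deployment the pseudo-labels would come from a stronger source (for example, a visual foundation model segmenting the crop rows) rather than from the student's own moving average; the loop structure, however, is the same.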
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.