Lessons from Deploying CropFollow++: Under-Canopy Agricultural Navigation with Keypoints
- URL: http://arxiv.org/abs/2404.17718v1
- Date: Fri, 26 Apr 2024 22:46:17 GMT
- Title: Lessons from Deploying CropFollow++: Under-Canopy Agricultural Navigation with Keypoints
- Authors: Arun N. Sivakumar, Mateus V. Gasparino, Michael McGuire, Vitor A. H. Higuti, M. Ugur Akcal, Girish Chowdhary,
- Abstract summary: We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints.
Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation.
- Score: 4.825377557319356
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints. Autonomous under-canopy navigation is challenging due to the tight spacing between the crop rows ($\sim 0.75$ m), degradation in RTK-GPS accuracy due to multipath error, and noise in LiDAR measurements from the excessive clutter. Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation. We deployed CropFollow++ in multiple under-canopy cover crop planting robots on a large scale (25 km in total) in various field conditions, and we discuss the key lessons learned from these deployments.
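To make the keypoint-based approach concrete, the following is a minimal, hypothetical sketch of how semantic keypoints (a vanishing point where the two crop rows meet, plus the points where each row line crosses the bottom of the image) could be turned into a steering command for row following. The function name, keypoint layout, and gains are illustrative assumptions; the paper describes CropFollow++'s actual architecture and controller.

```python
def steering_from_keypoints(vanishing_pt, left_base, right_base,
                            image_width=640, k_heading=1.0, k_offset=0.5):
    """Return a normalized steering command in [-1, 1].

    vanishing_pt: (x, y) pixel where the two crop-row lines meet.
    left_base, right_base: (x, y) pixels where the row lines hit the
    bottom edge of the image.
    """
    cx = image_width / 2.0
    # Heading error: horizontal offset of the vanishing point from center.
    heading_err = (vanishing_pt[0] - cx) / cx
    # Lateral error: offset of the row midpoint at the image bottom.
    row_center = (left_base[0] + right_base[0]) / 2.0
    lateral_err = (row_center - cx) / cx
    # Weighted sum of the two errors, clamped to the actuator range.
    cmd = k_heading * heading_err + k_offset * lateral_err
    return max(-1.0, min(1.0, cmd))
```

When the robot is centered and aligned with the row, both errors vanish and the command is zero; a vanishing point shifted right of center produces a positive (rightward) correction.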
Related papers
- Hyp2Nav: Hyperbolic Planning and Curiosity for Crowd Navigation [58.574464340559466]
We advocate for hyperbolic learning to enable crowd navigation and we introduce Hyp2Nav.
Hyp2Nav leverages the intrinsic properties of hyperbolic geometry to better encode the hierarchical nature of decision-making processes in navigation tasks.
We propose a hyperbolic policy model and a hyperbolic curiosity module that result in effective social navigation, the best success rates, and the best returns across multiple simulation settings.
arXiv Detail & Related papers (2024-07-18T14:40:33Z) - Control Transformer: Robot Navigation in Unknown Environments through PRM-Guided Return-Conditioned Sequence Modeling [0.0]
We propose Control Transformer that models return-conditioned sequences from low-level policies guided by a sampling-based Probabilistic Roadmap planner.
We show that Control Transformer can successfully navigate through mazes and transfer to unknown environments.
arXiv Detail & Related papers (2022-11-11T18:44:41Z) - How Does It Feel? Self-Supervised Costmap Learning for Off-Road Vehicle Traversability [7.305104984234086]
Estimating terrain traversability in off-road environments requires reasoning about the complex interaction dynamics between the robot and the terrain.
We propose a method that learns to predict traversability costmaps by combining exteroceptive environmental information with proprioceptive terrain interaction feedback.
arXiv Detail & Related papers (2022-09-22T05:18:35Z) - ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z) - Coupling Vision and Proprioception for Navigation of Legged Robots [65.59559699815512]
We exploit the complementary strengths of vision and proprioception to achieve point goal navigation in a legged robot.
We show superior performance compared to wheeled-robot (LoCoBot) baselines.
We also show the real-world deployment of our system on a quadruped robot with onboard sensors and compute.
arXiv Detail & Related papers (2021-12-03T18:59:59Z) - Polyline Based Generative Navigable Space Segmentation for Autonomous Visual Navigation [57.3062528453841]
We propose a representation-learning-based framework to enable robots to learn the navigable space segmentation in an unsupervised manner.
We show that the proposed PSV-Nets can learn the visual navigable space with high accuracy, even without a single label.
arXiv Detail & Related papers (2021-10-29T19:50:48Z) - Towards Autonomous Crop-Agnostic Visual Navigation in Arable Fields [2.6323812778809907]
We introduce a vision-based navigation scheme which is able to reliably guide the robot through row-crop fields.
With the help of a novel crop-row detection method and a novel crop-row switching technique, our navigation scheme can be deployed in a wide range of fields.
arXiv Detail & Related papers (2021-09-24T12:54:42Z) - Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense Forest Canopy [48.51396198176273]
We propose an integrated system that can perform large-scale autonomous flights and real-time semantic mapping in challenging under-canopy environments.
We detect and model tree trunks and ground planes from LiDAR data, which are associated across scans and used to constrain robot poses as well as tree trunk models.
A drift-compensation mechanism is designed to minimize the odometry drift using semantic SLAM outputs in real time, while maintaining planner optimality and controller stability.
arXiv Detail & Related papers (2021-09-14T07:24:53Z) - Navigational Path-Planning For All-Terrain Autonomous Agricultural Robot [0.0]
This report compares novel algorithms for the autonomous navigation of farmlands.
A high-resolution grid-map representation, specific to Indian environments, is taken into consideration.
The results demonstrated the applicability of the algorithms for autonomous field navigation and their feasibility for robotic path planning.
arXiv Detail & Related papers (2021-09-05T07:29:13Z) - Learned Visual Navigation for Under-Canopy Agricultural Robots [9.863749490361338]
We describe a system for visually guided autonomous navigation of under-canopy farm robots.
Our system, CropFollow, is able to autonomously drive 485 meters per intervention on average.
arXiv Detail & Related papers (2021-07-06T17:59:02Z) - BADGR: An Autonomous Self-Supervised Learning-Based Navigation System [158.6392333480079]
BADGR is an end-to-end learning-based mobile robot navigation system.
It can be trained with self-supervised off-policy data gathered in real-world environments.
BADGR can navigate in real-world urban and off-road environments with geometrically distracting obstacles.
arXiv Detail & Related papers (2020-02-13T18:40:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.