Embedded Vision for Self-Driving on Forest Roads
- URL: http://arxiv.org/abs/2105.13754v1
- Date: Thu, 27 May 2021 09:05:08 GMT
- Title: Embedded Vision for Self-Driving on Forest Roads
- Authors: Sorin Grigorescu, Mihai Zaha, Bogdan Trasnea and Cosmin Ginerica
- Abstract summary: AMTU is a robotic system designed to autonomously navigate off-road terrain and inspect whether any deforestation or damage has occurred along the tracked route.
AMTU's core component is its embedded vision module, optimized for real-time environment perception.
We show experimental results on the test track of our research facility.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forest roads in Romania are unique natural wildlife sites used for recreation
by countless tourists. In order to protect and maintain these roads, we propose
RovisLab AMTU (Autonomous Mobile Test Unit), which is a robotic system designed
to autonomously navigate off-road terrain and inspect whether any deforestation
or damage has occurred along the tracked route. AMTU's core component is its embedded
vision module, optimized for real-time environment perception. For achieving a
high computation speed, we use a learning system to train a multi-task Deep
Neural Network (DNN) for scene and instance segmentation of objects, while the
keypoints required for simultaneous localization and mapping are calculated
using a handcrafted FAST feature detector and the Lucas-Kanade tracking
algorithm. Both the DNN and the handcrafted backbone are run in parallel on the
GPU of an NVIDIA AGX Xavier board. We show experimental results on the test
track of our research facility.
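The keypoint pipeline described in the abstract (a handcrafted FAST detector feeding Lucas-Kanade tracking) can be sketched in miniature. The single-point, single-level Lucas-Kanade solver below is an illustrative NumPy reimplementation under stated assumptions (a synthetic Gaussian-blob image pair), not the authors' GPU code:

```python
import numpy as np

def lucas_kanade_flow(I1, I2, x, y, win=15):
    """Estimate displacement (dx, dy) at point (x, y) via single-level Lucas-Kanade."""
    h = win // 2
    # Spatial gradients of the first frame (np.gradient returns axis-0 = y first)
    # and the temporal gradient between frames.
    Iy, Ix = np.gradient(I1)
    It = I2 - I1
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    # Brightness constancy: Ix*dx + Iy*dy = -It, solved in least squares
    # over the window around the tracked keypoint.
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d  # (dx, dy)

# Synthetic test pair: a smooth Gaussian blob translated by (+1, 0) pixels.
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 50.0)
I1, I2 = blob(32, 32), blob(33, 32)
dx, dy = lucas_kanade_flow(I1, I2, 32, 32)
```

A production tracker would run this per FAST keypoint with image pyramids and iterative refinement, which is what makes the handcrafted branch cheap enough to run alongside the DNN on the embedded GPU.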
Related papers
- Spatial Retrieval Augmented Autonomous Driving [81.39665750557526]
Existing autonomous driving systems rely on onboard sensors for environmental perception.
We propose the spatial retrieval paradigm, introducing offline retrieved geographic images as an additional input.
We will open-source dataset curation code, data, and benchmarks for further study of this new autonomous driving paradigm.
arXiv Detail & Related papers (2025-12-07T14:40:49Z)
- Vision-Based Perception for Autonomous Vehicles in Off-Road Environment Using Deep Learning [0.27412662946127764]
Low-latency intelligent systems are required for autonomous driving on non-uniform terrain in open-pit mines and developing countries.
This work proposes a perception system for autonomous vehicles on unpaved roads and off-road environments, capable of navigating rough terrain without a predefined trail.
We investigated applying deep learning to detect drivable regions without explicit track boundaries, studied algorithm behavior under visibility impairment, and evaluated field tests with real-time semantic segmentation.
arXiv Detail & Related papers (2025-09-20T03:34:07Z)
- Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation [18.236615392921273]
This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from these standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
arXiv Detail & Related papers (2024-08-03T02:57:37Z)
- Geo-locating Road Objects using Inverse Haversine Formula with NVIDIA Driveworks [0.7428236410246181]
This paper introduces a methodology to geolocate road objects using a monocular camera.
We use the Centimeter Positioning Service (CPOS) and the inverse Haversine formula to geo-locate road objects accurately.
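The inverse Haversine (destination-point) computation this entry names can be sketched in pure Python. The spherical Earth model and the function name below are illustrative assumptions, not the paper's NVIDIA Driveworks implementation:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, spherical model

def inverse_haversine(lat_deg, lon_deg, bearing_deg, distance_m):
    """Destination point given a start point, initial bearing, and distance."""
    phi1 = math.radians(lat_deg)
    lam1 = math.radians(lon_deg)
    theta = math.radians(bearing_deg)
    delta = distance_m / EARTH_RADIUS_M  # angular distance travelled
    # Standard great-circle destination formulas on a sphere.
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

# Heading due east from (0, 0) for a quarter of the Earth's circumference
# should land on the equator at longitude 90.
lat, lon = inverse_haversine(0.0, 0.0, 90.0, math.pi / 2 * EARTH_RADIUS_M)
```

In the geo-location setting, the bearing and range to a detected object come from the camera geometry and the CPOS-corrected ego position supplies the start point.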
arXiv Detail & Related papers (2024-01-15T10:38:07Z)
- Neural Implicit Dense Semantic SLAM [83.04331351572277]
We propose a novel RGBD vSLAM algorithm that learns a memory-efficient, dense 3D geometry, and semantic segmentation of an indoor scene in an online manner.
Our pipeline combines classical 3D vision-based tracking and loop closing with neural fields-based mapping.
Our proposed algorithm can greatly enhance scene perception and assist with a range of robot control problems.
arXiv Detail & Related papers (2023-04-27T23:03:52Z)
- Deep Learning Computer Vision Algorithms for Real-time UAVs On-board Camera Image Processing [77.34726150561087]
This paper describes how advanced deep learning based computer vision algorithms are applied to enable real-time on-board sensor processing for small UAVs.
All algorithms have been developed using state-of-the-art image processing methods based on deep neural networks.
arXiv Detail & Related papers (2022-11-02T11:10:42Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- WayFAST: Traversability Predictive Navigation for Field Robots [5.914664791853234]
We present a self-supervised approach for learning to predict traversable paths for wheeled mobile robots.
Our key inspiration is that traction can be estimated for rolling robots using kinodynamic models.
We show that our training pipeline based on online traction estimates is more data-efficient than other methods.
arXiv Detail & Related papers (2022-03-22T22:02:03Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- Dynamic Fusion Module Evolves Drivable Area and Road Anomaly Detection: A Benchmark and Algorithms [16.417299198546168]
Joint detection of drivable areas and road anomalies is very important for mobile robots.
In this paper, we first build a drivable area and road anomaly detection benchmark for ground mobile robots.
We propose a novel module, referred to as the dynamic fusion module (DFM), which can be easily deployed in existing data-fusion networks.
arXiv Detail & Related papers (2021-03-03T14:38:27Z)
- Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device [53.323878851563414]
We propose a compiler-aware unified framework incorporating network enhancement and pruning search with the reinforcement learning techniques.
Specifically, a generator Recurrent Neural Network (RNN) is employed to provide the unified scheme for both network enhancement and pruning search automatically.
The proposed framework achieves real-time 3D object detection on mobile devices with competitive detection performance.
arXiv Detail & Related papers (2020-12-26T19:41:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.