Domain Adaptation for Outdoor Robot Traversability Estimation from RGB
data with Safety-Preserving Loss
- URL: http://arxiv.org/abs/2009.07565v1
- Date: Wed, 16 Sep 2020 09:19:33 GMT
- Title: Domain Adaptation for Outdoor Robot Traversability Estimation from RGB
data with Safety-Preserving Loss
- Authors: Simone Palazzo, Dario C. Guastella, Luciano Cantelli, Paolo Spadaro,
Francesco Rundo, Giovanni Muscato, Daniela Giordano, Concetto Spampinato
- Abstract summary: We present an approach based on deep learning to estimate and anticipate the traversing score of different routes in the field of view of an on-board RGB camera.
We then enhance the model's capabilities by addressing domain shifts through gradient-reversal unsupervised adaptation.
Experimental results show that our approach is able to satisfactorily identify traversable areas and to generalize to unseen locations.
- Score: 12.697106921197701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Being able to estimate the traversability of the area surrounding a mobile
robot is a fundamental task in the design of a navigation algorithm. However,
the task is often complex, since it requires evaluating distances from
obstacles, type and slope of terrain, and dealing with non-obvious
discontinuities in detected distances due to perspective. In this paper, we
present an approach based on deep learning to estimate and anticipate the
traversing score of different routes in the field of view of an on-board RGB
camera. The backbone of the proposed model is based on a state-of-the-art deep
segmentation model, which is fine-tuned on the task of predicting route
traversability. We then enhance the model's capabilities by a) addressing
domain shifts through gradient-reversal unsupervised adaptation, and b)
accounting for the specific safety requirements of a mobile robot, by
encouraging the model to err on the safe side, i.e., penalizing errors that
would cause collisions with obstacles more than those that would cause the
robot to stop in advance. Experimental results show that our approach is able
to satisfactorily identify traversable areas and to generalize to unseen
locations.
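Neither technique is spelled out beyond the abstract here, so the following is a minimal PyTorch sketch of the two ingredients it names: a gradient-reversal layer in the style of Ganin & Lempitsky's unsupervised domain adaptation, and an asymmetric "safety-preserving" regression loss that weights optimistic errors (scoring a route as more traversable than it is) more heavily than conservative ones. The squared-error form and the weighting factor `alpha` are assumptions for illustration, not the authors' exact formulation.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips and scales the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The reversed gradient pushes the shared encoder toward
        # domain-invariant features while the domain classifier trains normally.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    """Insert between the shared encoder and the domain-classifier head."""
    return GradReverse.apply(x, lambd)

def safety_preserving_loss(pred, target, alpha=2.0):
    """Asymmetric MSE: overestimating traversability (err > 0) risks collisions,
    so it is weighted by alpha > 1; stopping early is penalized less.
    alpha is a hypothetical hyperparameter, not a value from the paper."""
    err = pred - target
    weight = torch.where(err > 0, torch.full_like(err, alpha), torch.ones_like(err))
    return (weight * err.pow(2)).mean()
```

In training, labeled source-domain images would contribute both the traversability loss and the domain-classification loss, while unlabeled target-domain images would contribute only the domain loss through `grad_reverse`.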
Related papers
- OOSTraj: Out-of-Sight Trajectory Prediction With Vision-Positioning Denoising [49.86409475232849]
Trajectory prediction is fundamental in computer vision and autonomous driving.
Existing approaches in this field often assume precise and complete observational data.
We present a novel method for out-of-sight trajectory prediction that leverages a vision-positioning technique.
arXiv Detail & Related papers (2024-04-02T18:30:29Z)
- Model Checking for Closed-Loop Robot Reactive Planning [0.0]
We show how model checking can be used to create multistep plans for a differential drive wheeled robot so that it can avoid immediate danger.
Using a small, purpose-built model checking algorithm in situ, we generate plans in real time in a way that reflects the egocentric reactive response of simple biological agents.
arXiv Detail & Related papers (2023-11-16T11:02:29Z)
- JRDB-Traj: A Dataset and Benchmark for Trajectory Forecasting in Crowds [79.00975648564483]
Trajectory forecasting models, employed in fields such as robotics, autonomous vehicles, and navigation, face challenges in real-world scenarios.
This dataset provides comprehensive data, including the locations of all agents, scene images, and point clouds, all from the robot's perspective.
The objective is to predict the future positions of agents relative to the robot using raw sensory input data.
arXiv Detail & Related papers (2023-11-05T18:59:31Z)
- Neural Potential Field for Obstacle-Aware Local Motion Planning [46.42871544295734]
We propose a neural network model that returns a differentiable collision cost based on robot pose, obstacle map, and robot footprint.
Our architecture includes neural image encoders, which transform obstacle maps and robot footprints into embeddings.
Experiments on the Husky UGV mobile robot showed that our approach allows real-time and safe local planning (see the sketch below).
arXiv Detail & Related papers (2023-10-25T05:00:21Z)
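As a rough illustration of the Neural Potential Field idea summarized above (the sketch is hypothetical: layer sizes, the pose encoding, and the Softplus head are assumptions, not the authors' architecture), such a network maps a robot pose plus obstacle-map and footprint embeddings to a differentiable, non-negative collision cost that a planner can backpropagate through:

```python
import torch
import torch.nn as nn

def raster_encoder(embed_dim: int) -> nn.Sequential:
    """Small CNN compressing a 1-channel raster (obstacle map or footprint) to a vector."""
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, embed_dim),
    )

class NeuralPotentialField(nn.Module):
    """Hypothetical sketch of a differentiable collision cost c(pose, map, footprint)."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.map_encoder = raster_encoder(embed_dim)
        self.footprint_encoder = raster_encoder(embed_dim)
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Softplus(),  # keeps the cost non-negative
        )

    def forward(self, pose, obstacle_map, footprint):
        # pose: (B, 3) as (x, y, theta); rasters: (B, 1, H, W)
        z = torch.cat([self.map_encoder(obstacle_map),
                       self.footprint_encoder(footprint), pose], dim=-1)
        return self.head(z)  # (B, 1) cost, differentiable w.r.t. pose
```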
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory prediction for the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- SABER: Data-Driven Motion Planner for Autonomously Navigating Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of future state uncertainty considered in the SMPC finite-time horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal (a sketch of this hierarchy follows below).
arXiv Detail & Related papers (2021-08-03T02:56:21Z)
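The SABER summary above describes a three-level hierarchy; the skeleton below (interfaces and names are hypothetical, chosen to mirror the components named in the summary) shows how one control step would compose them:

```python
from typing import Protocol
import numpy as np

class HighLevelPlanner(Protocol):
    """e.g., the Deep Q-learning agent proposing intermediate target positions."""
    def select_target(self, state: np.ndarray, goal: np.ndarray) -> np.ndarray: ...

class UncertaintyEstimator(Protocol):
    """e.g., the recurrent network estimating future state uncertainty."""
    def predict_covariance(self, history: np.ndarray) -> np.ndarray: ...

class StochasticMPC(Protocol):
    """SMPC solving for dynamics-feasible controls under chance constraints."""
    def solve(self, state: np.ndarray, target: np.ndarray,
              covariance: np.ndarray) -> np.ndarray: ...

def saber_step(state: np.ndarray, history: np.ndarray, goal: np.ndarray,
               planner: HighLevelPlanner, estimator: UncertaintyEstimator,
               controller: StochasticMPC) -> np.ndarray:
    """One hypothetical control step: waypoint -> uncertainty -> control input."""
    target = planner.select_target(state, goal)          # high-level waypoint
    covariance = estimator.predict_covariance(history)   # future-state uncertainty
    return controller.solve(state, target, covariance)   # chance-constrained control
```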
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our methods is validated on complex quadruped robot dynamics and can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)