Robot Localization and Navigation through Predictive Processing using LiDAR
- URL: http://arxiv.org/abs/2109.04139v1
- Date: Thu, 9 Sep 2021 09:58:00 GMT
- Title: Robot Localization and Navigation through Predictive Processing using LiDAR
- Authors: Daniel Burghardt, Pablo Lanillos
- Abstract summary: We show a proof of concept of a predictive-processing-inspired approach to perception, applied to localization and navigation using laser sensors.
We learn a generative model of the laser scans through self-supervised learning and perform both online state estimation and navigation.
Results showed improved state-estimation performance compared to a state-of-the-art particle filter in the absence of odometry.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowing the position of the robot in the world is crucial for navigation.
Nowadays, Bayesian filters, such as Kalman and particle filters, are the standard
approaches in mobile robotics. Recently, end-to-end learning has allowed for
scaling up to high-dimensional inputs and improved generalization. However,
there are still limitations in providing reliable laser navigation. Here we
show a proof of concept of a predictive-processing-inspired approach to
perception, applied to localization and navigation using laser sensors, without
the need for odometry. We learn a generative model of the laser scans through
self-supervised learning and perform both online state estimation and
navigation through stochastic gradient descent on the variational free-energy
bound. We evaluated the algorithm on a mobile robot (TIAGo Base) with a laser
sensor (SICK) in Gazebo. Results showed improved state-estimation performance
compared to a state-of-the-art particle filter in the absence of odometry.
Furthermore, unlike standard Bayesian estimation approaches, our method also
enables the robot to navigate, when provided with a desired goal, by inferring
the actions that minimize the prediction error.
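To make the idea above concrete, here is a minimal sketch (not the authors' code) of the core loop: the pose belief is refined by gradient descent on a variational free-energy bound built from the prediction error of a learned generative model of the laser, and a goal prior drives the motion command. The network architecture, gains, and the proportional velocity rule are illustrative assumptions; the generative model is left untrained here, whereas the paper learns it via self-supervised learning and infers actions by minimizing prediction error rather than with the simple proportional stand-in used below.

```python
# Minimal sketch (assumptions, not the authors' implementation): gradient descent
# on a Gaussian free-energy bound for online pose estimation from laser scans,
# plus a goal-prior-driven velocity command as a stand-in for action inference.
import torch
import torch.nn as nn

N_BEAMS = 64     # hypothetical number of laser beams
STATE_DIM = 3    # planar pose (x, y, yaw)

class LaserGenerativeModel(nn.Module):
    """Predicts the expected laser scan g(x) for a given robot pose x.
    In the paper this model is learned via self-supervised regression on
    (pose, scan) pairs; here it is left untrained for illustration."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_BEAMS),
        )

    def forward(self, x):
        return self.net(x)

def free_energy(g, mu, scan, goal, w_prior=0.1):
    """Free energy under a unit-variance Gaussian likelihood on the scan
    plus a Gaussian prior pulling the belief mu towards the goal pose."""
    pred_err = 0.5 * torch.sum((scan - g(mu)) ** 2)
    prior_err = 0.5 * w_prior * torch.sum((mu - goal) ** 2)
    return pred_err + prior_err

def perception_action_step(g, mu, scan, goal, lr_state=1e-2, k_action=0.5):
    """One predictive-processing update: refine the pose belief by descending
    the free-energy gradient, then emit a velocity command towards the goal."""
    mu = mu.clone().detach().requires_grad_(True)
    F = free_energy(g, mu, scan, goal)
    F.backward()
    with torch.no_grad():
        mu_new = mu - lr_state * mu.grad            # state estimation step
        velocity_cmd = -k_action * (mu_new - goal)  # crude action stand-in
    return mu_new, velocity_cmd

# Usage: run one (or a few) update steps per incoming scan.
g = LaserGenerativeModel()
mu = torch.zeros(STATE_DIM)            # initial pose belief
goal = torch.tensor([2.0, 1.0, 0.0])   # desired pose
scan = torch.rand(N_BEAMS)             # placeholder for a SICK laser scan
mu, cmd = perception_action_step(g, mu, scan, goal)
```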
Related papers
- OptiState: State Estimation of Legged Robots using Gated Networks with Transformer-based Vision and Kalman Filtering [42.817893456964]
State estimation for legged robots is challenging due to their highly dynamic motion and limitations imposed by sensor accuracy.
We propose a hybrid solution that combines proprioception and exteroceptive information for estimating the state of the robot's trunk.
This framework not only furnishes accurate robot state estimates, but can minimize the nonlinear errors that arise from sensor measurements and model simplifications through learning.
arXiv Detail & Related papers (2024-01-30T03:34:25Z) - NoMaD: Goal Masked Diffusion Policies for Navigation and Exploration [57.15811390835294]
This paper describes how we can train a single unified diffusion policy to handle both goal-directed navigation and goal-agnostic exploration.
We show that this unified policy results in better overall performance when navigating to visually indicated goals in novel environments.
Our experiments, conducted on a real-world mobile robot platform, show effective navigation in unseen environments in comparison with five alternative methods.
arXiv Detail & Related papers (2023-10-11T21:07:14Z) - UnLoc: A Universal Localization Method for Autonomous Vehicles using
LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - IR-MCL: Implicit Representation-Based Online Global Localization [31.77645160411745]
In this paper, we address the problem of estimating the robot's pose in an indoor environment using 2D LiDAR data.
We propose a neural occupancy field (NOF) to implicitly represent the scene using a neural network.
We show that we can accurately and efficiently localize a robot using our approach, surpassing the localization performance of state-of-the-art methods.
arXiv Detail & Related papers (2022-10-06T17:59:08Z) - CNN-based Omnidirectional Object Detection for HermesBot Autonomous
Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z) - SABER: Data-Driven Motion Planner for Autonomously Navigating
Heterogeneous Robots [112.2491765424719]
We present an end-to-end online motion planning framework that uses a data-driven approach to navigate a heterogeneous robot team towards a global goal.
We use stochastic model predictive control (SMPC) to calculate control inputs that satisfy robot dynamics, and consider uncertainty during obstacle avoidance with chance constraints.
Recurrent neural networks are used to provide a quick estimate of the future state uncertainty considered in the SMPC finite-time-horizon solution.
A Deep Q-learning agent is employed to serve as a high-level path planner, providing the SMPC with target positions that move the robots towards a desired global goal.
arXiv Detail & Related papers (2021-08-03T02:56:21Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active-learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - Mobile Robot Planner with Low-cost Cameras Using Deep Reinforcement
Learning [0.0]
This study develops a robot mobility policy based on deep reinforcement learning.
In order to bring robots to market, low-cost mass production is also an issue that needs to be addressed.
arXiv Detail & Related papers (2020-12-21T07:30:04Z) - Graph-based Proprioceptive Localization Using a Discrete Heading-Length
Feature Sequence Matching Approach [14.356113113268389]
Proprioceptive localization refers to a new class of robot egocentric localization methods.
These methods are naturally immune to bad weather, poor lighting conditions, or other extreme environmental conditions.
We provide a low-cost fallback solution for localization under challenging environmental conditions.
arXiv Detail & Related papers (2020-05-27T23:10:15Z)