Agile Reactive Navigation for A Non-Holonomic Mobile Robot Using A Pixel
Processor Array
- URL: http://arxiv.org/abs/2009.12796v1
- Date: Sun, 27 Sep 2020 09:11:31 GMT
- Title: Agile Reactive Navigation for A Non-Holonomic Mobile Robot Using A Pixel
Processor Array
- Authors: Yanan Liu, Laurie Bose, Colin Greatwood, Jianing Chen, Rui Fan, Thomas
Richardson, Stephen J. Carey, Piotr Dudek, Walterio Mayol-Cuevas
- Abstract summary: This paper presents an agile reactive navigation strategy for driving a non-holonomic ground vehicle around a preset course of gates in a cluttered environment using a low-cost processor array sensor.
We demonstrate a small ground vehicle running through or avoiding multiple gates at high speed using minimal computational resources.
- Score: 22.789108850681146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents an agile reactive navigation strategy for driving a
non-holonomic ground vehicle around a preset course of gates in a cluttered
environment using a low-cost processor array sensor. This enables machine
vision tasks to be performed directly upon the sensor's image plane, rather
than using a separate general-purpose computer. We demonstrate a small ground
vehicle running through or avoiding multiple gates at high speed using minimal
computational resources. To achieve this, target tracking algorithms are developed for the Pixel Processor Array, and captured images are processed directly on the vision sensor to acquire target information for controlling the ground vehicle. The algorithm can run at up to 2000 fps outdoors and 200 fps at
indoor illumination levels. Conducting image processing at the sensor level
avoids the bottleneck of image transfer encountered in conventional sensors.
The real-time performance and robustness of the on-board image processing are validated through experiments. Experimental results demonstrate the algorithm's ability to enable a ground vehicle to navigate at an average speed of 2.20 m/s while passing through multiple gates and 3.88 m/s in a 'slalom' task, in an environment featuring significant visual clutter.
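The pipeline described above (on-sensor target extraction followed by reactive steering) can be illustrated with a minimal sketch. The snippet below is an illustrative emulation in Python/NumPy, not the authors' implementation: the parallel operations of the pixel processor array are approximated with host-side array operations, and the function names (`extract_target_centroid`, `steering_command`), thresholds, and gains are hypothetical.

```python
import numpy as np

# Illustrative sketch only: emulates, with NumPy, the kind of on-sensor target
# extraction a pixel processor array performs in parallel, followed by a simple
# reactive steering law for a non-holonomic (unicycle-like) vehicle.
# All names, thresholds, and gains are assumptions, not the paper's method.

def extract_target_centroid(frame: np.ndarray, threshold: int = 200):
    """Threshold the image and return the horizontal centroid of bright target
    pixels in normalised coordinates [-1 (left), +1 (right)], or None."""
    mask = frame > threshold                      # binary target segmentation
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    cx = xs.mean()                                # centroid column
    width = frame.shape[1]
    return 2.0 * cx / (width - 1) - 1.0

def steering_command(target_x_norm: float, k_steer: float = 1.2,
                     v_max: float = 2.0) -> tuple[float, float]:
    """Map horizontal target offset to (forward speed, turn rate).
    A non-holonomic vehicle cannot translate sideways, so the lateral
    error is converted into a heading-rate command."""
    omega = -k_steer * target_x_norm              # turn towards the target
    v = v_max * (1.0 - 0.5 * abs(target_x_norm))  # slow down for sharp turns
    return v, omega

if __name__ == "__main__":
    # Synthetic 256x256 frame with a bright "gate" blob right of centre.
    frame = np.zeros((256, 256), dtype=np.uint8)
    frame[100:140, 180:200] = 255
    x = extract_target_centroid(frame)
    if x is not None:
        v, omega = steering_command(x)
        print(f"offset={x:+.2f}, speed={v:.2f} m/s, turn rate={omega:+.2f} rad/s")
```

On the actual hardware the thresholding and centroid extraction would execute on the sensor's processing elements, so only a few bytes of target coordinates leave the chip per frame; this is what removes the image-transfer bottleneck mentioned in the abstract.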
Related papers
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scene in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Design and Flight Demonstration of a Quadrotor for Urban Mapping and Target Tracking Research [0.04712282770819683]
This paper describes the hardware design and flight demonstration of a small quadrotor with imaging sensors for urban mapping, hazard avoidance, and target tracking research.
The vehicle is equipped with five cameras, including two pairs of fisheye stereo cameras that enable a nearly omnidirectional view and a two-axis gimbaled camera.
An onboard NVIDIA Jetson Orin Nano computer running the Robot Operating System software is used for data collection.
arXiv Detail & Related papers (2024-02-20T18:06:00Z)
- Deep Learning Computer Vision Algorithms for Real-time UAVs On-board Camera Image Processing [77.34726150561087]
This paper describes how advanced deep learning based computer vision algorithms are applied to enable real-time on-board sensor processing for small UAVs.
All algorithms have been developed using state-of-the-art image processing methods based on deep neural networks.
arXiv Detail & Related papers (2022-11-02T11:10:42Z)
- A direct time-of-flight image sensor with in-pixel surface detection and dynamic vision [0.0]
3D flash LIDAR is an alternative to traditional scanning LIDAR systems, promising precise depth imaging in a compact form factor.
We present a 64x32 pixel (256x128 SPAD) dToF imager that overcomes these limitations by using pixels with embedded histogramming.
This reduces the size of output data frames considerably, enabling maximum frame rates in the 10 kFPS range or 100 kFPS for direct depth readings.
arXiv Detail & Related papers (2022-09-23T14:38:00Z)
- GoToNet: Fast Monocular Scene Exposure and Exploration [0.6204265638103346]
We present a novel method for real-time environment exploration.
Our method requires only one look (image) to make a good tactical decision.
Two direction predictions, characterized by pixels dubbed the Goto and Lookat pixels, comprise the core of our method.
arXiv Detail & Related papers (2022-06-13T08:28:31Z)
- People Tracking in Panoramic Video for Guiding Robots [2.092922495279074]
A guiding robot aims to effectively bring people to and from specific places within environments that are possibly unknown to them.
During this operation the robot should be able to detect and track the accompanied person, trying never to lose sight of her/him.
A solution to minimize this event is to use an omnidirectional camera: its 360° Field of View (FoV) guarantees that any framed object cannot leave the FoV unless occluded or very far from the sensor.
We propose a set of targeted methods that effectively adapt a standard people detection and tracking pipeline, originally designed for perspective cameras, to panoramic videos.
arXiv Detail & Related papers (2022-06-06T16:44:38Z)
- CNN-based Omnidirectional Object Detection for HermesBot Autonomous Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)