Event Camera Based Real-Time Detection and Tracking of Indoor Ground
Robots
- URL: http://arxiv.org/abs/2102.11916v1
- Date: Tue, 23 Feb 2021 19:50:17 GMT
- Title: Event Camera Based Real-Time Detection and Tracking of Indoor Ground
Robots
- Authors: Himanshu Patel, Craig Iaboni, Deepan Lobo, Ji-won Choi, Pramod
Abichandani
- Abstract summary: This paper presents a real-time method to detect and track multiple mobile ground robots using event cameras.
The method uses density-based spatial clustering of applications with noise (DBSCAN) to detect the robots and a single k-dimensional (k-d) tree to accurately keep track of them as they move in an indoor arena.
- Score: 2.471139321417215
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a real-time method to detect and track multiple mobile
ground robots using event cameras. The method uses density-based spatial
clustering of applications with noise (DBSCAN) to detect the robots and a
single k-dimensional (k-d) tree to accurately keep track of them as they move
in an indoor arena. Robust detections and tracks are maintained in the face of
event camera noise and lack of events (due to robots moving slowly or
stopping). An off-the-shelf RGB camera-based tracking system was used to
provide ground truth. Experiments including up to 4 robots are performed to
study the effect of i) varying DBSCAN parameters, ii) the event accumulation
time, iii) the number of robots in the arena, and iv) the speed of the robots
on the detection and tracking performance. The experimental results showed 100%
detection and tracking fidelity in the face of event camera noise and robots
stopping for tests involving up to 3 robots (and upwards of 93% for 4 robots).
Related papers
- Robot See Robot Do: Imitating Articulated Object Manipulation with Monocular 4D Reconstruction [51.49400490437258]
This work develops a method for imitating articulated object manipulation from a single monocular RGB human demonstration.
We first propose 4D Differentiable Part Models (4D-DPM), a method for recovering 3D part motion from a monocular video.
Given this 4D reconstruction, the robot replicates object trajectories by planning bimanual arm motions that induce the demonstrated object part motion.
We evaluate 4D-DPM's 3D tracking accuracy on ground truth annotated 3D part trajectories and RSRD's physical execution performance on 9 objects across 10 trials each on a bimanual YuMi robot.
arXiv Detail & Related papers (2024-09-26T17:57:16Z) - Exploring 3D Human Pose Estimation and Forecasting from the Robot's Perspective: The HARPER Dataset [52.22758311559]
We introduce HARPER, a novel dataset for 3D body pose estimation and forecasting in dyadic interactions between users and Spot.
The key-novelty is the focus on the robot's perspective, i.e., on the data captured by the robot's sensors.
The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users.
arXiv Detail & Related papers (2024-03-21T14:53:50Z) - Care3D: An Active 3D Object Detection Dataset of Real Robotic-Care
Environments [52.425280825457385]
This paper introduces an annotated dataset of real environments.
The captured environments represent areas which are already in use in the field of robotic health care research.
We also provide ground truth data within one room, for assessing SLAM algorithms running directly on a health care robot.
arXiv Detail & Related papers (2023-10-09T10:35:37Z) - Event-based tracking of human hands [0.6875312133832077]
An event camera detects changes in brightness, effectively measuring motion with low latency, no motion blur, low power consumption, and high dynamic range.
Captured frames are analysed using lightweight algorithms reporting 3D hand position data.
arXiv Detail & Related papers (2023-04-13T13:43:45Z) - External Camera-based Mobile Robot Pose Estimation for Collaborative
Perception with Smart Edge Sensors [22.5939915003931]
We present an approach for estimating a mobile robot's pose w.r.t. the allocentric coordinates of a network of static cameras using multi-view RGB images.
The images are processed online, locally on smart edge sensors by deep neural networks to detect the robot.
With the robot's pose precisely estimated, its observations can be fused into the allocentric scene model.
arXiv Detail & Related papers (2023-03-07T11:03:33Z) - Neural Scene Representation for Locomotion on Structured Terrain [56.48607865960868]
We propose a learning-based method to reconstruct the local terrain for a mobile robot traversing urban environments.
Using a stream of depth measurements from the onboard cameras and the robot's trajectory, the method estimates the topography in the robot's vicinity.
We propose a 3D reconstruction model that faithfully reconstructs the scene, despite the noisy measurements and large amounts of missing data coming from the blind spots of the camera arrangement.
arXiv Detail & Related papers (2022-06-16T10:45:17Z) - CNN-based Omnidirectional Object Detection for HermesBot Autonomous
Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with 6 rolling-shutter cameras on the perimeter providing a 360-degree field of view was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z) - Single-view robot pose and joint angle estimation via render & compare [40.05546237998603]
We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image.
This is an important problem to grant mobile and itinerant autonomous systems the ability to interact with other robots.
arXiv Detail & Related papers (2021-04-19T14:48:29Z) - Deep Reinforcement learning for real autonomous mobile robot navigation
in indoor environments [0.0]
We present our proof of concept for autonomous self-learning robot navigation in an unknown environment for a real robot without a map or planner.
The input for the robot is only the fused data from a 2D laser scanner and an RGB-D camera as well as the orientation to the goal.
The output actions of an Asynchronous Advantage Actor-Critic network (GA3C) are the linear and angular velocities for the robot.
arXiv Detail & Related papers (2020-05-28T09:15:14Z) - Exploration of Reinforcement Learning for Event Camera using Car-like
Robots [10.66048003460524]
We demonstrate the first reinforcement-learning application for robots equipped with an event camera.
Because of the considerably lower latency of the event camera, it is possible to achieve much faster control of robots.
arXiv Detail & Related papers (2020-04-02T03:52:03Z) - Morphology-Agnostic Visual Robotic Control [76.44045983428701]
MAVRIC is an approach that works with minimal prior knowledge of the robot's morphology.
We demonstrate our method on visually-guided 3D point reaching, trajectory following, and robot-to-robot imitation.
arXiv Detail & Related papers (2019-12-31T15:45:10Z)