PUCK: Parallel Surface and Convolution-kernel Tracking for Event-Based Cameras
- URL: http://arxiv.org/abs/2205.07657v1
- Date: Mon, 16 May 2022 13:23:52 GMT
- Title: PUCK: Parallel Surface and Convolution-kernel Tracking for Event-Based Cameras
- Authors: Luna Gava, Marco Monforte, Massimiliano Iacono, Chiara Bartolozzi, Arren Glover
- Abstract summary: Event cameras can guarantee fast visual sensing in dynamic environments, but require a tracking algorithm that can keep up with the high data rate induced by the robot ego-motion.
We introduce a novel tracking method that leverages the Exponential Reduced Ordinal Surface (EROS) data representation to decouple event-by-event processing and tracking.
We propose the task of tracking an air hockey puck sliding on a surface, with the future aim of controlling the iCub robot to reach the target precisely and on time.
- Score: 4.110120522045467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Low latency and accuracy are fundamental requirements when vision is
integrated in robots for high-speed interaction with targets, since they affect
system reliability and stability. In such a scenario, the choice of the sensor
and algorithms is important for the entire control loop. The technology of
event cameras can guarantee fast visual sensing in dynamic environments, but
requires a tracking algorithm that can keep up with the high data rate induced
by the robot ego-motion while maintaining accuracy and robustness to
distractors. In this paper, we introduce a novel tracking method that leverages
the Exponential Reduced Ordinal Surface (EROS) data representation to decouple
event-by-event processing and tracking computation. The latter is performed
using convolution kernels to detect and follow a circular target moving on a
plane. To benchmark state-of-the-art event-based tracking, we propose the task
of tracking an air hockey puck sliding on a surface, with the future aim of
controlling the iCub robot to reach the target precisely and on time.
Experimental results demonstrate that our algorithm achieves the best
compromise between low latency and tracking accuracy both when the robot is
still and when it is moving.
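To make the decoupling concrete, the sketch below separates a per-event EROS update from a periodic convolution-based tracking step. It is a minimal illustration: the k-by-k decay neighbourhood, the decay schedule, and the ring-shaped kernel are assumptions for exposition, not the paper's exact parameters or implementation.

```python
import numpy as np
from scipy.signal import convolve2d

class EROS:
    """Minimal sketch of an Exponential Reduced Ordinal Surface (EROS).

    Assumption: each event decays every pixel in a k x k window around
    it and sets its own pixel to the maximum value, so the surface keeps
    a sharp trace of recent edges without per-pixel timestamps.
    """

    def __init__(self, width, height, k=7, total_decay=0.3):
        self.surface = np.zeros((height, width), dtype=np.float64)
        self.r = k // 2
        # Spread the total decay across the window so larger windows
        # decay each pixel more gently per event (illustrative choice).
        self.d = total_decay ** (1.0 / k)

    def update(self, x, y):
        """Per-event update: O(k^2) work, independent of tracking."""
        y0, x0 = max(0, y - self.r), max(0, x - self.r)
        self.surface[y0:y + self.r + 1, x0:x + self.r + 1] *= self.d
        self.surface[y, x] = 255.0

def ring_kernel(radius, thickness=2):
    """Ring-shaped kernel matching the edge of a circular target."""
    size = 2 * (radius + thickness) + 1
    yy, xx = np.mgrid[:size, :size] - size // 2
    return (np.abs(np.hypot(xx, yy) - radius) <= thickness).astype(float)

def track(surface, radius):
    """Decoupled tracking step: the peak of the kernel response is
    taken as the target centre (a real tracker would search near the
    previous estimate rather than globally)."""
    response = convolve2d(surface, ring_kernel(radius), mode="same")
    return np.unravel_index(np.argmax(response), response.shape)  # (y, x)
```

Here `update(x, y)` would run for every incoming event, while `track(surface, radius)` can run at whatever rate the control loop needs, which is the decoupling of event-by-event processing from tracking computation that the abstract describes.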
Related papers
- Deep Learning-Based Robust Multi-Object Tracking via Fusion of mmWave Radar and Camera Sensors [6.166992288822812]
Multi-Object Tracking plays a critical role in ensuring safer and more efficient navigation through complex traffic scenarios.
This paper presents a novel deep learning-based method that integrates radar and camera data to enhance the accuracy and robustness of Multi-Object Tracking in autonomous driving systems.
arXiv Detail & Related papers (2024-07-10T21:09:09Z)
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for their practical application in real driving scenarios due to their high latency.
In this paper, we explore the use of neural architecture search (NAS) methods to find efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
- Neural Implicit Swept Volume Models for Fast Collision Detection [0.0]
We present an algorithm combining the speed of deep learning-based signed distance computations with the strong accuracy guarantees of geometric collision checkers.
We validate our approach in simulated and real-world robotic experiments, and demonstrate that it is able to speed up a commercial bin picking application.
arXiv Detail & Related papers (2024-02-23T12:06:48Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
- Event Camera-based Visual Odometry for Dynamic Motion Tracking of a Legged Robot Using Adaptive Time Surface [5.341864681049579]
Event cameras offer high temporal resolution and dynamic range, which can eliminate the motion blur that affects RGB images during fast movements.
We introduce an adaptive time surface (ATS) method that addresses the whiteout and blackout issues in conventional time surfaces (a generic time-surface sketch appears after this list).
Lastly, we propose a nonlinear pose optimization formula that simultaneously performs 3D-2D alignment on both RGB-based and event-based maps and images.
arXiv Detail & Related papers (2023-05-15T19:03:45Z)
- EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called the Binary Event History Image (BEHI) to encode event data at low latency (a minimal BEHI sketch also follows the list).
We show that the system is capable of achieving a success rate of 81% in catching balls targeted at different locations, moving at up to 13 m/s, even on compute-constrained embedded platforms.
arXiv Detail & Related papers (2023-04-14T15:23:28Z)
- Fast Trajectory End-Point Prediction with Event Cameras for Reactive Robot Control [4.110120522045467]
In this paper, we propose to exploit the low latency, motion-driven sampling, and data compression properties of event cameras to overcome these issues.
As a use-case, we use a Panda robotic arm to intercept a ball bouncing on a table.
We train the network in simulation to speed up the dataset acquisition and then fine-tune the models on real trajectories.
arXiv Detail & Related papers (2023-02-27T14:14:52Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- CNN-based Omnidirectional Object Detection for HermesBot Autonomous Delivery Robot with Preliminary Frame Classification [53.56290185900837]
We propose an algorithm for optimizing a neural network for object detection using preliminary binary frame classification.
An autonomous mobile robot with six rolling-shutter cameras around its perimeter, providing a 360-degree field of view, was used as the experimental setup.
arXiv Detail & Related papers (2021-10-22T15:05:37Z)
- Benchmarking high-fidelity pedestrian tracking systems for research, real-time monitoring and crowd control [55.41644538483948]
High-fidelity pedestrian tracking in real-life conditions has been an important tool in fundamental crowd dynamics research.
As this technology advances, it is becoming increasingly useful in society at large.
To successfully employ pedestrian tracking techniques in research and technology, it is crucial to validate and benchmark them for accuracy.
We present and discuss a benchmark suite for privacy-respectful pedestrian tracking techniques, as a step towards an open standard in the community.
arXiv Detail & Related papers (2021-08-26T11:45:26Z)
- Neuromorphic Eye-in-Hand Visual Servoing [0.9949801888214528]
Event cameras give human-like vision capabilities with low latency and wide dynamic range.
We present a visual servoing method using an event camera and a switching control strategy to explore, reach and grasp.
Experiments demonstrate the method's effectiveness in tracking and grasping objects of different shapes without re-tuning.
arXiv Detail & Related papers (2020-04-15T23:57:54Z)
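Two of the event representations named in the list above are simple enough to sketch. First, referring back to the Adaptive Time Surface entry: a conventional time surface stores each pixel's most recent event timestamp and decays it exponentially with a single fixed time constant, and that one-size-fits-all constant is what produces whiteout (saturation under dense events) and blackout (emptiness under sparse events). The sketch below is the conventional baseline only, not the paper's adaptive variant, whose adaptation rule is not detailed above.

```python
import numpy as np

def conventional_time_surface(last_ts, t_now, tau=0.05):
    """Exponentially decaying time surface (the fixed-tau baseline).

    last_ts : per-pixel timestamp (seconds) of each pixel's most recent
              event; pixels that never fired hold -np.inf and map to 0.
    tau     : fixed decay constant; its one-size-fits-all value is what
              the ATS paper's adaptive variant replaces (rule not shown).
    """
    return np.exp(-(t_now - last_ts) / tau)
```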
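Second, referring back to the EV-Catcher entry: a Binary Event History Image can be read as a window of events collapsed into a single bit per pixel, which is what makes it cheap enough for compute-constrained platforms. How EV-Catcher windows and resets the image is not specified above, so the batch-per-window form here is an assumption.

```python
import numpy as np

def behi(events, width, height):
    """Binary Event History Image: 1 where at least one event landed
    during the window, 0 elsewhere (window policy is an assumption).

    events : iterable of (x, y) pixel coordinates within one window.
    """
    img = np.zeros((height, width), dtype=np.uint8)
    for x, y in events:
        img[y, x] = 1
    return img
```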