Neuromorphic Eye-in-Hand Visual Servoing
- URL: http://arxiv.org/abs/2004.07398v1
- Date: Wed, 15 Apr 2020 23:57:54 GMT
- Title: Neuromorphic Eye-in-Hand Visual Servoing
- Authors: Rajkumar Muthusamy, Abdulla Ayyad, Mohamad Halwani, Yahya Zweiri,
Dongming Gan and Lakmal Seneviratne
- Abstract summary: Event cameras give human-like vision capabilities with low latency and wide dynamic range.
We present a visual servoing method using an event camera and a switching control strategy to explore, reach and grasp.
Experiments prove the effectiveness of the method to track and grasp objects of different shapes without the need for re-tuning.
- Score: 0.9949801888214528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robotic vision plays a major role in applications ranging from factory
automation to service robots. However, traditional frame-based cameras limit
continuous visual feedback because of their low sampling rate and the redundant
data they produce during real-time image processing, especially in
high-speed tasks. Event cameras provide human-like vision capabilities, such as
observing dynamic changes asynchronously at a high temporal resolution
($1\,\mu s$), with low latency and a wide dynamic range.
In this paper, we present a visual servoing method that uses an event camera and
a switching control strategy to explore, reach, and grasp in a manipulation
task. We devise three surface layers of active events to directly process the
stream of events generated by relative motion. A purely event-based approach is
adopted to extract corner features, localize them robustly using heat maps, and
generate virtual features for tracking and alignment. Based on this visual
feedback, the motion of the robot is controlled so that the upcoming event
features converge to the desired events in spatio-temporal space. The
controller switches its strategy according to the sequence of operations to
establish a stable grasp. The event-based visual servoing (EBVS) method is
validated experimentally on a commercial robot manipulator in an eye-in-hand
configuration. Experiments prove the effectiveness of the EBVS method in
tracking and grasping objects of different shapes without the need for re-tuning.
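As a rough illustration of the pipeline described above (corner events accumulated into a heat map, a virtual feature extracted from it, and a switching explore/reach/grasp controller), the Python sketch below uses hypothetical names, gains, and sensor resolution; it is not the authors' implementation.

```python
# Rough sketch of an event-driven servo loop: accumulate corner events into a
# heat map, extract a virtual feature, and switch between explore/reach/grasp.
# All names, gains, and the sensor resolution are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 260, 346        # assumed event-camera resolution
GAIN = 0.5             # illustrative proportional servo gain

def corner_heat_map(corner_events, sigma=2.0):
    """Accumulate detected corner events and blur them into a heat map so a
    robust peak can be localized despite sensor noise."""
    heat = np.zeros((H, W), dtype=np.float32)
    for x, y, t, polarity in corner_events:
        heat[y, x] += 1.0
    return gaussian_filter(heat, sigma)

def virtual_feature(heat):
    """Centroid of the heat map, used as a single virtual image feature."""
    total = heat.sum()
    if total < 1e-6:
        return None
    ys, xs = np.mgrid[0:H, 0:W]
    return np.array([(xs * heat).sum(), (ys * heat).sum()]) / total

def servo_step(feature, desired, mode):
    """Switching control: scan while no feature is visible, then drive the
    image-plane error to zero, and finally hand over to the grasp phase."""
    if mode == "explore" or feature is None:
        return np.array([0.01, 0.0]), "explore"   # constant exploratory motion
    error = desired - feature
    if np.linalg.norm(error) < 2.0:               # pixel tolerance
        return np.zeros(2), "grasp"
    return GAIN * error, "reach"
```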
Related papers
- E-Motion: Future Motion Simulation via Event Sequence Diffusion [86.80533612211502]
Event-based sensors may offer a unique opportunity to predict future motion with a level of detail and precision that was previously unachievable.
We propose to integrate the strong learning capacity of the video diffusion model with the rich motion information of an event camera as a motion simulation framework.
Our findings suggest a promising direction for future research in enhancing the interpretative power and predictive accuracy of computer vision systems.
arXiv Detail & Related papers (2024-10-11T09:19:23Z) - Motion Segmentation for Neuromorphic Aerial Surveillance [42.04157319642197]
Event cameras offer superior temporal resolution, superior dynamic range, and minimal power requirements.
Unlike traditional frame-based sensors that capture redundant information at fixed intervals, event cameras asynchronously record pixel-level brightness changes.
We introduce a novel motion segmentation method that leverages self-supervised vision transformers on both event data and optical flow information.
arXiv Detail & Related papers (2024-05-24T04:36:13Z) - EventTransAct: A video transformer-based framework for Event-camera
based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
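A minimal sketch of what an event-contrastive objective such as $\mathcal{L}_{EC}$ could look like, written here as an InfoNCE-style loss over embeddings of two augmented views of the same event clip; the paper's exact formulation and augmentations may differ.

```python
# Hedged sketch of a contrastive loss over event-frame embeddings (assumed
# InfoNCE form, not necessarily the paper's exact L_EC).
import torch
import torch.nn.functional as F

def event_contrastive_loss(z_a, z_b, temperature=0.1):
    """z_a, z_b: (batch, dim) embeddings of two augmented views of the same
    event clips. Matching pairs (z_a[i], z_b[i]) are pulled together, all
    other pairs in the batch are pushed apart."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                  # (batch, batch)
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)
```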
arXiv Detail & Related papers (2023-08-25T23:51:07Z) - PUCK: Parallel Surface and Convolution-kernel Tracking for Event-Based
Cameras [4.110120522045467]
Event cameras can guarantee fast visual sensing in dynamic environments, but they require a tracking algorithm that can keep up with the high data rate induced by the robot's ego-motion.
We introduce a novel tracking method that leverages the Exponential Reduced Ordinal Surface (EROS) data representation to decouple event-by-event processing and tracking.
We propose the task of tracking the air hockey puck sliding on a surface, with the future aim of controlling the iCub robot to reach the target precisely and on time.
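A rough sketch of an EROS-style surface update as we read the idea (the published decay rule, kernel size, and constants may differ): each event decays only a local neighbourhood and saturates its own pixel, so the surface can be updated event by event while a convolution-kernel tracker samples it asynchronously at its own rate.

```python
# Sketch of an EROS-like event surface; parameters and decay rule are
# illustrative assumptions, not the published implementation.
import numpy as np

class ErosSurface:
    def __init__(self, height, width, k=7, decay=0.3):
        self.surface = np.zeros((height, width), dtype=np.float32)
        self.k = k
        # per-event multiplicative decay applied to the k x k neighbourhood
        self.gamma = decay ** (1.0 / k)

    def update(self, x, y):
        """Process one event: decay the local neighbourhood, then set the
        event pixel to the maximum value. No global time-based decay is
        applied, so the surface depends on event count rather than clock time."""
        r = self.k // 2
        y0, y1 = max(0, y - r), min(self.surface.shape[0], y + r + 1)
        x0, x1 = max(0, x - r), min(self.surface.shape[1], x + r + 1)
        self.surface[y0:y1, x0:x1] *= self.gamma
        self.surface[y, x] = 1.0
```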
arXiv Detail & Related papers (2022-05-16T13:23:52Z) - Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z) - C^3Net: End-to-End deep learning for efficient real-time visual active
camera control [4.09920839425892]
The need for automated real-time visual systems in applications such as smart camera surveillance, smart environments, and drones necessitates the improvement of methods for visual active monitoring and control.
In this paper, a deep Convolutional Camera Controller Neural Network is proposed that goes directly from visual information to camera movement.
It is trained end-to-end without bounding box annotations to control a camera and follow multiple targets from raw pixel values.
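For illustration only, a minimal convolutional network that maps raw frames directly to normalized pan/tilt commands, in the spirit of such an end-to-end camera controller; the architecture, layer sizes, and output convention are assumptions, not the paper's network.

```python
# Illustrative end-to-end CNN controller: raw pixels in, camera commands out.
import torch
import torch.nn as nn

class CameraControllerNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)          # pan and tilt velocities

    def forward(self, frame):                  # frame: (B, 3, H, W) raw pixels
        x = self.features(frame).flatten(1)
        return torch.tanh(self.head(x))        # normalized camera commands
```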
arXiv Detail & Related papers (2021-07-28T09:31:46Z) - Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in scenes acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z) - Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill
Primitives [89.34229413345541]
We propose a conditioning scheme that avoids common pitfalls by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
arXiv Detail & Related papers (2020-03-19T15:04:37Z) - End-to-end Learning of Object Motion Estimation from Retinal Events for
Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression.
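A hedged sketch of how a Time-Surface with Linear Time Decay (TSLTD) frame could be built from raw events; the paper's exact normalization and windowing may differ.

```python
# Sketch of a TSLTD-style frame: each pixel stores the linearly decayed
# timestamp of its most recent event within a time window (assumed form).
import numpy as np

def tsltd_frame(events, t_ref, window, height, width):
    """events: iterable of (x, y, t, polarity) with t <= t_ref.
    A pixel is 1.0 for an event at t_ref and decays linearly to 0.0 at
    t_ref - window; older events contribute nothing."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, t, polarity in events:
        value = max(0.0, 1.0 - (t_ref - t) / window)
        frame[y, x] = max(frame[y, x], value)   # keep the most recent contribution
    return frame
```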
arXiv Detail & Related papers (2020-02-14T08:19:50Z) - Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for
Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods based on conventional or event cameras, as well as with two variants of ETD.
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.