Asynchronous Corner Tracking Algorithm based on Lifetime of Events for
DAVIS Cameras
- URL: http://arxiv.org/abs/2010.15510v1
- Date: Thu, 29 Oct 2020 12:02:40 GMT
- Title: Asynchronous Corner Tracking Algorithm based on Lifetime of Events for
DAVIS Cameras
- Authors: Sherif A.S. Mohamed, Jawad N. Yasin, Mohammad-Hashem Haghbayan,
Antonio Miele, Jukka Heikkonen, Hannu Tenhunen, and Juha Plosila
- Abstract summary: Event cameras, i.e., Dynamic and Active-pixel Vision Sensor (DAVIS) cameras, capture intensity changes in the scene and generate a stream of events in an asynchronous fashion.
The output rate of such cameras can reach up to 10 million events per second in highly dynamic environments.
A novel asynchronous corner tracking method is proposed that uses both events and intensity images captured by a DAVIS camera.
- Score: 0.9988653233188148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras, i.e., Dynamic and Active-pixel Vision Sensor (DAVIS)
cameras, capture intensity changes in the scene and generate a stream of events
in an asynchronous fashion. The output rate of such cameras can reach up to 10
million events per second in highly dynamic environments. DAVIS cameras use
novel vision sensors that mimic human eyes. Their attractive attributes, such as
a high output rate, High Dynamic Range (HDR), and high pixel bandwidth, make
them an ideal solution for applications that require high-frequency tracking.
Moreover, applications that operate in challenging lighting scenarios can
exploit the HDR of event cameras, i.e., 140 dB compared to the 60 dB of
traditional cameras. In this paper, a novel asynchronous corner tracking method
is proposed
that uses both events and intensity images captured by a DAVIS camera. The
Harris algorithm is used to extract features, i.e., frame-corners from
keyframes, i.e., intensity images. Afterward, a matching algorithm is used to
extract event-corners from the stream of events. Events are solely used to
perform asynchronous tracking until the next keyframe is captured. Neighboring
events, within a window size of 5x5 pixels around the event-corner, are used to
calculate the velocity and direction of extracted event-corners by fitting a
2D plane using a randomized Hough transform algorithm. Experimental evaluation
showed that our approach is able to update the location of the extracted
corners up to 100 times during the blind time of traditional cameras, i.e.,
between two consecutive intensity images.
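
The pipeline described in the abstract can be sketched in a few functions. Below is a minimal, illustrative Python sketch, assuming NumPy/OpenCV and an event array of (x, y, t) rows; the RANSAC-style sampling loop is a stand-in for the paper's randomized Hough transform, and the plane-gradient-to-velocity relation follows the standard surface-of-active-events formulation rather than the authors' exact implementation. All function names and thresholds are hypothetical.

```python
import numpy as np
import cv2

def detect_frame_corners(keyframe_gray, max_corners=100):
    """Harris corners ('frame-corners') on an intensity keyframe."""
    pts = cv2.goodFeaturesToTrack(keyframe_gray, max_corners,
                                  qualityLevel=0.01, minDistance=5,
                                  useHarrisDetector=True, k=0.04)
    return pts.reshape(-1, 2) if pts is not None else np.empty((0, 2))

def events_in_window(events, corner, half=2):
    """Events (x, y, t) inside the 5x5-pixel window around a corner."""
    near = (np.abs(events[:, 0] - corner[0]) <= half) & \
           (np.abs(events[:, 1] - corner[1]) <= half)
    return events[near]

def fit_plane_randomized(events, iters=100, tol=1e-3, rng=None):
    """Fit the local time surface t = a*x + b*y + c by repeatedly
    sampling 3-event minimal sets and keeping the plane with the most
    inliers (a RANSAC-style stand-in for the randomized Hough
    transform; tol is in the units of t, assumed seconds)."""
    if len(events) < 3:
        return None
    rng = rng or np.random.default_rng()
    best, best_score = None, -1
    for _ in range(iters):
        s = events[rng.choice(len(events), 3, replace=False)]
        A = np.column_stack([s[:, 0], s[:, 1], np.ones(3)])
        try:
            abc = np.linalg.solve(A, s[:, 2])
        except np.linalg.LinAlgError:
            continue  # degenerate sample (collinear pixels)
        resid = np.abs(events[:, :2] @ abc[:2] + abc[2] - events[:, 2])
        score = int((resid < tol).sum())
        if score > best_score:
            best, best_score = abc, score
    return best

def velocity_from_plane(abc):
    """Corner speed and direction from the plane gradient (a, b):
    direction is along the gradient, speed is 1/||(a, b)||
    (pixels per time unit of t)."""
    g = np.hypot(abc[0], abc[1])
    if g == 0.0:
        return 0.0, np.zeros(2)
    return 1.0 / g, np.array([abc[0], abc[1]]) / g
```

Between keyframes, a tracked corner's location can then be advanced by speed x direction x elapsed time for each batch of events, which is what allows the method to update corner positions many times within the blind time between two intensity images.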
Related papers
- BlinkTrack: Feature Tracking over 100 FPS via Events and Images [50.98675227695814]
We propose a novel framework, BlinkTrack, which integrates event data with RGB images for high-frequency feature tracking.
Our method extends the traditional Kalman filter into a learning-based framework, utilizing differentiable Kalman filters in both event and image branches.
Experimental results indicate that BlinkTrack significantly outperforms existing event-based methods.
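As a rough illustration of the filtering backbone such trackers build on, here is a minimal constant-velocity Kalman filter for one 2D feature in Python; BlinkTrack replaces the fixed noise models with learned, differentiable components, which this sketch does not attempt, and all names and noise values here are assumptions.

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for a 2D feature track.
    State: [x, y, vx, vy]. Only illustrates the predict/update cycle;
    the learned, differentiable parts of BlinkTrack are omitted."""

    def __init__(self, xy, q=1e-2, r=1.0):
        self.x = np.array([xy[0], xy[1], 0.0, 0.0])
        self.P = np.eye(4)
        self.Q = q * np.eye(4)                       # process noise (assumed)
        self.R = r * np.eye(2)                       # measurement noise (assumed)
        self.H = np.hstack([np.eye(2), np.zeros((2, 2))])

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                       # x += vx*dt, y += vy*dt
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q
        return self.x[:2]

    def update(self, z):
        """Fuse a position measurement from either the event or image branch."""
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```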
arXiv Detail & Related papers (2024-09-26T15:54:18Z)
- Temporal-Mapping Photography for Event Cameras [5.838762448259289]
Event cameras capture brightness changes as a continuous stream of "events" rather than traditional intensity frames.
We realize event-to-intensity conversion, reconstructing dense intensity images from a stationary event camera in static scenes.
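For context, a generic event-integration baseline for event-to-intensity conversion looks like the following Python sketch (not the paper's temporal-mapping method): since each event signals a fixed log-intensity change, summing signed polarities per pixel recovers intensity up to an unknown offset. The contrast threshold and function name are assumed.

```python
import numpy as np

def integrate_events(events, height, width, contrast=0.2):
    """Accumulate per-pixel polarity to reconstruct log-intensity up
    to an unknown per-pixel offset: each event means log(I) changed by
    +/- contrast. events: iterable of (x, y, t, polarity) with
    polarity in {-1, +1}; contrast=0.2 is an assumed threshold."""
    log_img = np.zeros((height, width))
    for x, y, _, p in events:
        log_img[int(y), int(x)] += contrast * p
    # Exponentiate and normalize for display; absolute scale is unknown.
    img = np.exp(log_img)
    return (img - img.min()) / (img.max() - img.min() + 1e-9)
```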
arXiv Detail & Related papers (2024-03-11T05:29:46Z)
- EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- E$^2$(GO)MOTION: Motion Augmented Event Stream for Egocentric Action Recognition [21.199869051111367]
Event cameras capture pixel-level intensity changes in the form of "events".
N-EPIC-Kitchens is the first event-based camera extension of the large-scale EPIC-Kitchens dataset.
We show that event data provides a comparable performance to RGB and optical flow, yet without any additional flow computation at deploy time.
arXiv Detail & Related papers (2021-12-07T09:43:08Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent) to address the lack of a realistic and large-scale dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
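Transforming event flows into event images typically means binning a time slice of events into a frame that frame-based trackers can consume; the following generic Python sketch (not VisEvent's exact conversion) illustrates one common signed-histogram variant.

```python
import numpy as np

def events_to_image(events, height, width, t0, t1):
    """Bin events from the time slice [t0, t1) into a signed 2D
    histogram: positive and negative polarities accumulate with
    opposite signs, then the result is scaled to 8-bit so that 128
    means 'no events'. events: iterable of (x, y, t, polarity)."""
    img = np.zeros((height, width), dtype=np.float32)
    for x, y, t, p in events:
        if t0 <= t < t1:
            img[int(y), int(x)] += p          # p in {-1, +1}
    img = 128 + 127 * img / (np.abs(img).max() + 1e-9)
    return img.astype(np.uint8)
```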
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
- SE-Harris and eSUSAN: Asynchronous Event-Based Corner Detection Using Megapixel Resolution CeleX-V Camera [9.314068908300285]
Event cameras generate an asynchronous event stream of per-pixel intensity changes with precise timestamps.
We propose a corner detection algorithm, eSUSAN, inspired by the conventional SUSAN (smallest univalue segment assimilating nucleus) algorithm for corner detection.
We also propose the SE-Harris corner detector, which uses adaptive normalization based on exponential decay to quickly construct a local surface of active events.
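A sketch of the exponential-decay idea: each pixel contributes according to the time since its last event, so recent activity dominates the local surface, and a standard Harris response can be computed on that map. The decay constant and Harris parameters below are assumptions, not values from the paper.

```python
import numpy as np
import cv2

def se_harris_response(last_event_time, t_now, tau=0.05, k=0.04):
    """Harris response on an exponentially decayed surface of active
    events. A pixel that last fired at t_s contributes
    exp(-(t_now - t_s)/tau), so the map adapts to the event rate
    without a fixed accumulation window. last_event_time: per-pixel
    timestamp of the most recent event (initialize to -np.inf where
    no event has occurred, which decays to 0)."""
    surface = np.exp(-(t_now - last_event_time) / tau).astype(np.float32)
    return cv2.cornerHarris(surface, blockSize=3, ksize=3, k=k)
```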
arXiv Detail & Related papers (2021-05-02T14:06:28Z)
- Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction [51.072733683919246]
We introduce Recurrent Asynchronous Multimodal (RAM) networks to handle asynchronous and irregular data from multiple sensors.
Inspired by traditional RNNs, RAM networks maintain a hidden state that is updated asynchronously and can be queried at any time to generate a prediction.
We show an improvement over state-of-the-art methods by up to 30% in terms of mean depth absolute error.
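A toy sketch of the asynchronous hidden-state idea (not the authors' learned architecture): whichever modality produces data updates a shared state in timestamp order, and the state can be decoded at any query time. All names and the GRU-style blend are stand-ins for the learned recurrent update.

```python
import numpy as np

class AsyncHiddenState:
    """Toy version of the RAM-network idea: a hidden state updated
    asynchronously by whichever sensor produces data (events or
    frames), and queryable at any time for a prediction."""

    def __init__(self, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        self.h = np.zeros(dim)
        # One randomly initialized encoder per modality (stand-ins
        # for the learned event/image feature extractors).
        self.enc = {m: rng.normal(scale=0.1, size=(dim, dim))
                    for m in ("events", "frame")}

    def update(self, modality, features):
        """Called whenever a measurement arrives, in timestamp order."""
        z = np.tanh(self.enc[modality] @ features)   # encode input
        gate = 1.0 / (1.0 + np.exp(-z))              # GRU-like gate
        self.h = (1 - gate) * self.h + gate * z      # blend into state

    def query(self):
        """Read out the state at an arbitrary time (identity decode
        here; the real network uses a learned decoder for depth)."""
        return self.h
```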
arXiv Detail & Related papers (2021-02-18T13:24:35Z)
- An Asynchronous Kalman Filter for Hybrid Event Cameras [13.600773150848543]
Event cameras are ideally suited to capture HDR visual information without blur.
Conventional image sensors measure the absolute intensity of slowly changing scenes effectively but perform poorly on high-dynamic-range or quickly changing scenes.
We present an event-based video reconstruction pipeline for High Dynamic Range scenarios.
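A minimal per-pixel sketch of how such a hybrid filter can be organized, under assumed noise models and names (not the authors' exact formulation): events propagate the log-intensity state between frames, and each captured frame acts as a measurement update.

```python
import numpy as np

class PixelIntensityKF:
    """Per-pixel scalar Kalman filter sketch for hybrid reconstruction:
    events drive the prediction step, frames the correction step.
    contrast, q, and r are assumed values."""

    def __init__(self, shape, contrast=0.2, q=1e-4, r=1e-2):
        self.logI = np.zeros(shape)   # log-intensity state
        self.P = np.ones(shape)       # per-pixel variance
        self.c, self.q, self.r = contrast, q, r

    def propagate_event(self, x, y, polarity):
        """Prediction step driven by one event: state jumps by +/- c,
        and uncertainty grows by the process noise q."""
        self.logI[y, x] += self.c * polarity
        self.P[y, x] += self.q

    def correct_frame(self, frame_log):
        """Update step with a full intensity frame (log domain)."""
        K = self.P / (self.P + self.r)
        self.logI += K * (frame_log - self.logI)
        self.P *= (1 - K)
```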
arXiv Detail & Related papers (2020-12-10T11:24:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.