Detection of Binary Square Fiducial Markers Using an Event Camera
- URL: http://arxiv.org/abs/2012.06516v3
- Date: Mon, 15 Mar 2021 21:23:51 GMT
- Title: Detection of Binary Square Fiducial Markers Using an Event Camera
- Authors: Hamid Sarmadi, Rafael Muñoz-Salinas, Miguel A. Olivares-Mendez, Rafael Medina-Carnicer
- Abstract summary: Event cameras are a new type of image sensor that outputs changes in light intensity (events) instead of absolute intensity values.
We propose a method to detect and decode binary square markers using an event camera.
- Score: 1.0781866671930855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are a new type of image sensor that outputs changes in
light intensity (events) instead of absolute intensity values. They have a very
high temporal resolution and a high dynamic range. In this paper, we propose a
method to detect and decode binary square markers using an event camera. We
detect the edges of the markers by detecting line segments in an image created
from the events in the current packet. The line segments are combined to form
marker candidates, and the bit value of each marker cell is decoded using the
events on its borders. To the best of our knowledge, no other approach detects
square binary markers directly from an event camera in real time using only the
CPU. Experimental results show that our proposal performs far better than the
RGB ArUco marker detector, and the proposed method achieves real-time
performance on a single CPU thread.
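The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the packet format, the event-count threshold for decoding a cell bit, and all helper names are assumptions.

```python
import numpy as np

def events_to_image(events, height, width):
    """Accumulate a packet of events into a binary edge image.

    `events` is an iterable of (x, y, t, polarity) tuples; any pixel that
    fired at least one event in the packet is marked as an edge pixel.
    """
    img = np.zeros((height, width), dtype=np.uint8)
    for x, y, _t, _p in events:
        img[y, x] = 1
    return img

def decode_cell(events_on_border, threshold=5):
    """Decode one marker-cell bit from the events on its border.

    A cell border that produced enough events is assumed to sit on a
    black/white transition; the threshold value is purely illustrative.
    """
    return 1 if len(events_on_border) >= threshold else 0

# A toy packet: four events along the top edge of a marker candidate.
packet = [(10, 5, 0.001, 1), (11, 5, 0.002, -1),
          (12, 5, 0.003, 1), (13, 5, 0.004, 1)]
frame = events_to_image(packet, height=32, width=32)
# Line segments would then be fit to `frame` (e.g. with an LSD- or
# Hough-style detector) and combined into quadrilateral marker candidates.
```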
Related papers
- Learning to Make Keypoints Sub-Pixel Accurate [80.55676599677824]
This work addresses the challenge of sub-pixel accuracy in detecting 2D local features.
We propose a novel network that enhances any detector with sub-pixel precision by learning an offset vector for detected features.
arXiv Detail & Related papers (2024-07-16T12:39:56Z)
- Graph-based Asynchronous Event Processing for Rapid Object Recognition [59.112755601918074]
Event cameras capture an asynchronous event stream in which each event encodes the pixel location, trigger time, and polarity of the brightness change.
We introduce a novel graph-based framework for event cameras, namely SlideGCN.
Our approach can efficiently process data event by event, unlocking the low-latency nature of event data while still maintaining the graph's structure internally.
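The per-event encoding mentioned above can be illustrated with a structured array. This is only a sketch: the field names and the naive radius search standing in for SlideGCN's graph construction are assumptions.

```python
import numpy as np

# Each event carries a pixel location (x, y), trigger time t, and polarity p.
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.float64), ("p", np.int8)])

stream = np.array([(120, 64, 0.0010, 1),
                   (121, 64, 0.0012, -1),
                   (119, 65, 0.0015, 1)], dtype=event_dtype)

def neighbors(stream, i, radius=2, dt=0.001):
    """Indices of events within a spatio-temporal radius of event i
    (a naive stand-in for the radius search used to build an event graph)."""
    e = stream[i]
    dx = stream["x"].astype(int) - int(e["x"])
    dy = stream["y"].astype(int) - int(e["y"])
    close = (dx * dx + dy * dy <= radius * radius) \
        & (np.abs(stream["t"] - e["t"]) <= dt)
    close[i] = False  # an event is not its own neighbor
    return np.flatnonzero(close)
```

Processing the stream event by event then amounts to updating only the graph edges returned by such a neighborhood query, rather than rebuilding the graph per frame.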
arXiv Detail & Related papers (2023-08-28T08:59:57Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities compared to standard action recognition in RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture brightness change of every pixel in an asynchronous manner.
Event streams are divided into grids in the x-y-t coordinates for both positive and negative polarity, producing a set of pillars as 3D tensor representation.
Long memory is encoded in the hidden state of adaptive convLSTMs while short memory is modeled by computing spatial-temporal correlation between event pillars.
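The pillar representation described above can be sketched by histogramming events into an x-y-t grid per polarity. The bin sizes and tensor layout here are assumptions for illustration only.

```python
import numpy as np

def events_to_pillars(events, height, width, t_bins, t_max):
    """Histogram events into a (2, t_bins, height, width) tensor:
    one x-y-t grid for positive and one for negative polarity."""
    grid = np.zeros((2, t_bins, height, width), dtype=np.float32)
    for x, y, t, p in events:
        tb = min(int(t / t_max * t_bins), t_bins - 1)  # temporal bin index
        grid[0 if p > 0 else 1, tb, y, x] += 1.0
    return grid

# Three toy events: two positive, one negative, spread over the time window.
packet = [(3, 2, 0.1, 1), (3, 2, 0.9, -1), (4, 2, 0.5, 1)]
pillars = events_to_pillars(packet, height=8, width=8, t_bins=4, t_max=1.0)
```

Each spatial column of such a tensor is one "pillar"; the convLSTM-based long/short memory aggregation of the paper then operates on these pillars.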
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
- How Many Events do You Need? Event-based Visual Place Recognition Using Sparse But Varying Pixels [29.6328152991222]
One of the potential applications of event camera research lies in visual place recognition for robot localization.
We show that the absolute difference in the number of events at those pixel locations accumulated into event frames can be sufficient for the place recognition task.
We evaluate our proposed approach on the Brisbane-Event-VPR dataset in an outdoor driving scenario, as well as the newly contributed indoor QCR-Event-VPR dataset.
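The sparse-pixel comparison described above can be sketched as follows. This is a simplification: how the method actually selects the sparse pixel set and matches places is not reproduced here, and these helper names are assumptions.

```python
import numpy as np

def event_count_frame(events, height, width):
    """Accumulate the number of events at each (x, y) pixel into a frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y in events:
        frame[y, x] += 1
    return frame

def place_distance(frame_a, frame_b, pixels):
    """Sum of absolute differences of event counts at a sparse set of
    (y, x) pixel locations; smaller means the places look more similar."""
    ys, xs = zip(*pixels)
    return int(np.abs(frame_a[ys, xs] - frame_b[ys, xs]).sum())

query = event_count_frame([(1, 1), (1, 1), (2, 3)], 4, 4)
ref = event_count_frame([(1, 1), (2, 3)], 4, 4)
d = place_distance(query, ref, pixels=[(1, 1), (3, 2)])
```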
arXiv Detail & Related papers (2022-06-28T00:24:12Z)
- Application of Ghost-DeblurGAN to Fiducial Marker Detection [1.1470070927586016]
This paper develops a lightweight generative adversarial network, named Ghost-DeblurGAN, for real-time motion deblurring.
A new large-scale dataset, YorkTag, is proposed, providing pairs of sharp/blurred images containing fiducial markers.
With the proposed model trained and tested on YorkTag, it is demonstrated that Ghost-DeblurGAN significantly improves marker detection when applied to motion-blurred images alongside fiducial marker systems.
arXiv Detail & Related papers (2021-09-08T00:59:10Z)
- Bridging the Gap between Events and Frames through Unsupervised Domain Adaptation [57.22705137545853]
We propose a task transfer method that allows models to be trained directly with labeled images and unlabeled event data.
We leverage the generative event model to split event features into content and motion features.
Our approach unlocks the vast amount of existing image datasets for the training of event-based neural networks.
arXiv Detail & Related papers (2021-09-06T17:31:37Z)
- ELSED: Enhanced Line SEgment Drawing [2.470815298095903]
ELSED is the fastest line segment detector in the literature.
The proposed algorithm not only runs on devices with very low-end hardware, but can also be parametrized to favor the detection of shorter or longer segments.
arXiv Detail & Related papers (2021-08-06T14:33:57Z)
- DeepTag: A General Framework for Fiducial Marker Design and Detection [1.2180122937388957]
We propose a general deep learning based framework, DeepTag, for fiducial marker design and detection.
DeepTag supports detection of a wide variety of existing marker families and makes it possible to design new marker families with customized local patterns.
Experiments show that DeepTag well supports different marker families and greatly outperforms the existing methods in terms of both detection robustness and pose accuracy.
arXiv Detail & Related papers (2021-05-28T10:54:59Z)
- SE-Harris and eSUSAN: Asynchronous Event-Based Corner Detection Using Megapixel Resolution CeleX-V Camera [9.314068908300285]
Event cameras generate an asynchronous event stream of per-pixel intensity changes with precise timestamps.
We propose a corner detection algorithm, eSUSAN, inspired by the conventional SUSAN (smallest univalue segment assimilating nucleus) algorithm for corner detection.
We also propose the SE-Harris corner detector, which uses adaptive normalization based on exponential decay to quickly construct a local surface of active events.
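The exponentially decaying surface of active events mentioned above can be sketched as a time surface. This is an illustrative version only; the decay constant and the adaptive normalization used in SE-Harris are assumptions.

```python
import numpy as np

def time_surface(last_ts, t_now, tau=0.05):
    """Surface of active events: each pixel's value decays exponentially
    with the time elapsed since its most recent event.

    `last_ts` holds the last event timestamp per pixel (-inf if none yet)."""
    return np.where(np.isneginf(last_ts), 0.0,
                    np.exp(-(t_now - last_ts) / tau))

last_ts = np.full((4, 4), -np.inf)
last_ts[1, 2] = 0.10   # a just-fired event
last_ts[3, 0] = 0.00   # an older event, already partly decayed
surf = time_surface(last_ts, t_now=0.10)
```

A corner detector such as SE-Harris would then run its response function on `surf` instead of on an intensity image, so recent events dominate the local structure.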
arXiv Detail & Related papers (2021-05-02T14:06:28Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to the state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.