Event-based vision on FPGAs -- a survey
- URL: http://arxiv.org/abs/2407.08356v1
- Date: Thu, 11 Jul 2024 10:07:44 GMT
- Title: Event-based vision on FPGAs -- a survey
- Authors: Tomasz Kryjak
- Abstract summary: Field Programmable Gate Arrays (FPGAs) have enabled fast data processing (even in real time) and energy efficiency.
This paper gives an overview of the most important works in which FPGAs have been used in different contexts to process event data.
It covers applications in the following areas: filtering, stereovision, optical flow, acceleration of AI-based algorithms for object classification, detection and tracking, and applications in robotics and inspection systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years there has been a growing interest in event cameras, i.e. vision sensors that record changes in illumination independently for each pixel. This type of operation ensures that acquisition is possible in very adverse lighting conditions, both in low light and high dynamic range, and reduces average power consumption. In addition, the independent operation of each pixel results in low latency, which is desirable for robotic solutions. Nowadays, Field Programmable Gate Arrays (FPGAs), along with general-purpose processors (GPPs/CPUs) and programmable graphics processing units (GPUs), are popular architectures for implementing and accelerating computing tasks. In particular, their usefulness in the embedded vision domain has been repeatedly demonstrated over the past 30 years, where they have enabled fast data processing (even in real time) and energy efficiency. Hence, the combination of event cameras and reconfigurable devices seems to be a good solution, especially in the context of energy-efficient real-time embedded systems. This paper gives an overview of the most important works in which FPGAs have been used in different contexts to process event data. It covers applications in the following areas: filtering, stereovision, optical flow, acceleration of AI-based algorithms (including spiking neural networks) for object classification, detection and tracking, and applications in robotics and inspection systems. Current trends and challenges for such systems are also discussed.
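Event filtering, the first application area listed above, is easy to illustrate in software. Below is a minimal sketch of a timestamp-map ("background activity") filter, a common FPGA-friendly denoising approach for event streams; the event-tuple layout, function name, and dt_max threshold are illustrative assumptions, not a design taken from the survey.

```python
# Minimal sketch: denoising an event stream with a timestamp-map
# ("background activity") filter. Events are assumed to be
# (x, y, t, polarity) tuples; names and thresholds are illustrative.
from typing import List, Tuple

Event = Tuple[int, int, float, int]  # (x, y, timestamp [us], polarity)

def filter_background_activity(events: List[Event],
                               width: int, height: int,
                               dt_max: float = 5000.0) -> List[Event]:
    """Keep an event only if a neighbouring pixel fired within dt_max."""
    last_ts = [[-1e18] * width for _ in range(height)]
    kept = []
    for x, y, t, p in events:
        # Check the 8-neighbourhood for recent activity.
        supported = any(
            0 <= x + dx < width and 0 <= y + dy < height
            and t - last_ts[y + dy][x + dx] <= dt_max
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        if supported:
            kept.append((x, y, t, p))
        last_ts[y][x] = t  # always update, so isolated noise decays away
    return kept
```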
Related papers
- HAPM -- Hardware Aware Pruning Method for CNN hardware accelerators in resource constrained devices [44.99833362998488]
The present work proposes a generic hardware architecture ready to be implemented on FPGA devices.
The inference speed of the design is evaluated over different resource-constrained FPGA devices.
We demonstrate that our hardware-aware pruning algorithm achieves a remarkable 45% improvement in inference time compared to a network pruned using the standard algorithm (see the sketch after this entry).
arXiv Detail & Related papers (2024-08-26T07:27:12Z)
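A minimal sketch of what a hardware-aware pruning criterion can look like: channel importance is traded off against an estimated per-channel latency on the target device, so channels that cost much FPGA time but contribute little are pruned first. The scoring rule, data layout, and latency model are illustrative assumptions; HAPM's actual criterion may differ.

```python
# Minimal sketch of hardware-aware channel ranking for pruning.
# layers example: {"conv1": (w1, 0.8), "conv2": (w2, 2.4)}, where the
# second element is an assumed latency cost per channel on the device.
import numpy as np

def global_pruning_plan(layers: dict, n_prune: int):
    """layers maps name -> (weight, latency_per_channel); weight has shape
    (out_channels, ...). Channels whose L1 importance is low relative to
    the latency they cost on the target FPGA are pruned first."""
    candidates = []
    for name, (w, lat) in layers.items():
        importance = np.abs(w).reshape(w.shape[0], -1).sum(axis=1)
        for ch, imp in enumerate(importance):
            candidates.append((imp / lat, name, ch))  # benefit per unit latency
    candidates.sort(key=lambda c: c[0])               # cheapest-to-lose first
    return [(name, ch) for _, name, ch in candidates[:n_prune]]
```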
- FAPNet: An Effective Frequency Adaptive Point-based Eye Tracker [0.6554326244334868]
Event cameras are an alternative to traditional cameras in the realm of eye tracking.
Existing event-based eye tracking networks neglect the pivotal sparse and fine-grained temporal information in events.
In this paper, we utilize a point cloud as the event representation to harness the high temporal resolution and sparse characteristics of events in eye-tracking tasks (see the sketch after this entry).
arXiv Detail & Related papers (2024-06-05T12:08:01Z)
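A minimal sketch of the point-cloud event representation mentioned above: a window of events is subsampled and treated as a set of (x, y, t, polarity) points with normalized coordinates. The sampling strategy and normalization are illustrative assumptions, not FAPNet's exact preprocessing.

```python
# Minimal sketch: turning a window of events into a fixed-size point cloud.
import numpy as np

def events_to_point_cloud(events: np.ndarray, width: int, height: int,
                          n_points: int = 1024) -> np.ndarray:
    """events: (N, 4) array of (x, y, t, polarity) rows.
    Returns an (n_points, 4) cloud with x, y, t scaled to [0, 1]."""
    idx = np.random.choice(len(events), n_points,
                           replace=len(events) < n_points)
    pts = events[idx].astype(np.float64)
    pts[:, 0] /= width                                  # x -> [0, 1]
    pts[:, 1] /= height                                 # y -> [0, 1]
    t0, t1 = pts[:, 2].min(), pts[:, 2].max()
    pts[:, 2] = (pts[:, 2] - t0) / max(t1 - t0, 1e-9)  # t -> [0, 1]
    return pts
```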
- SATAY: A Streaming Architecture Toolflow for Accelerating YOLO Models on FPGA Devices [48.47320494918925]
This work tackles the challenges of deploying state-of-the-art object detection models onto FPGA devices for ultra-low-latency applications.
We employ a streaming architecture design for our YOLO accelerators, implementing the complete model on-chip in a deeply pipelined fashion.
We introduce novel hardware components to support the operations of YOLO models in a dataflow manner, and off-chip memory buffering to address the limited on-chip memory resources.
arXiv Detail & Related papers (2023-09-04T13:15:01Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations (see the sketch after this entry).
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
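A minimal sketch of a generic InfoNCE-style contrastive loss over clip embeddings, illustrating the pattern behind an event-contrastive objective: two augmented views of the same event clip are pulled together while other clips in the batch are pushed apart. The paper's exact $\mathcal{L}_{EC}$ formulation is not reproduced here.

```python
# Minimal sketch of an InfoNCE-style contrastive loss over embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.07):
    """z1, z2: (B, D) embeddings of two augmented views of the same clips."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau               # (B, B) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)  # diagonal entries are positives
```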
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
This paper reviews event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
It groups event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep-learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling scheme that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes (see the sketch after this entry).
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
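A minimal sketch of early-exit inference, the mechanism the scheduler above reasons about: a sample leaves the network at the first exit head whose confidence clears a threshold. Fluid Batching's preemptive, exit-aware batching logic is not reproduced; stage and head modules are assumed PyTorch callables and batch size 1 is assumed for clarity.

```python
# Minimal sketch of early-exit inference with confidence thresholding.
import torch

def early_exit_forward(x, stages, exit_heads, threshold: float = 0.9):
    """stages[i] is a backbone block, exit_heads[i] its classifier head;
    x is a single sample (batch size 1 assumed)."""
    for i, (stage, head) in enumerate(zip(stages, exit_heads)):
        x = stage(x)
        probs = torch.softmax(head(x), dim=1)
        conf, pred = probs.max(dim=1)
        if conf.item() >= threshold:          # confident enough: leave early
            return pred.item(), i             # prediction and exit index used
    return pred.item(), len(stages) - 1       # fell through to the last exit
```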
- Eventor: An Efficient Event-Based Monocular Multi-View Stereo Accelerator on FPGA Platform [11.962626341154609]
Event cameras are bio-inspired vision sensors that asynchronously represent pixel-level brightness changes as event streams.
Event-based Multi-View Stereo (EMVS) is a technique that exploits event streams to estimate semi-dense 3D structure from a known camera trajectory.
In this paper, Eventor is proposed as a fast and efficient EMVS accelerator that realizes the most critical and time-consuming stages in hardware (see the sketch after this entry).
arXiv Detail & Related papers (2022-03-29T11:13:36Z)
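A minimal sketch of the core EMVS idea referenced above: each event casts a ray through the scene using the known camera pose, votes are accumulated in a voxel grid (a disparity space image), and well-supported voxels indicate semi-dense structure. The camera model and grid indexing here are simplified assumptions, not Eventor's hardware design.

```python
# Minimal sketch of EMVS-style ray voting into a voxel grid.
import numpy as np

def accumulate_dsi(events_px, K_inv, poses, depths, grid_shape, grid_to_index):
    """events_px: iterable of ((u, v), pose_idx); K_inv: 3x3 inverse camera
    intrinsics; poses: list of 4x4 camera-to-world matrices; depths: candidate
    depths along each ray; grid_to_index: maps a world point to a voxel index
    tuple, or None when the point falls outside the grid."""
    dsi = np.zeros(grid_shape, dtype=np.uint32)
    for (u, v), pi in events_px:
        ray_cam = K_inv @ np.array([u, v, 1.0])   # viewing ray, camera frame
        R, t = poses[pi][:3, :3], poses[pi][:3, 3]
        for d in depths:                          # sample points along the ray
            p_world = R @ (ray_cam * d) + t
            idx = grid_to_index(p_world)
            if idx is not None:
                dsi[idx] += 1                     # one vote per event-ray
    return dsi  # local maxima of the vote volume give semi-dense structure
```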
- Optical flow-based branch segmentation for complex orchard environments [73.11023209243326]
We train a neural network system in simulation only, using simulated RGB data and optical flow.
The resulting neural network is able to perform foreground segmentation of branches in a busy orchard environment without additional real-world training or any special setup or equipment beyond a standard camera.
Our results show that our system is highly accurate and, when compared to a network using manually labeled RGBD data, achieves significantly more consistent and robust performance across environments that differ from the training set.
arXiv Detail & Related papers (2022-02-26T03:38:20Z)
- hARMS: A Hardware Acceleration Architecture for Real-Time Event-Based Optical Flow [0.0]
Event-based vision sensors produce asynchronous event streams with high temporal resolution based on changes in the visual scene.
Existing solutions for calculating optical flow from event data fail to capture the true direction of motion due to the aperture problem.
We present a hardware realization of the fARMS algorithm, allowing real-time computation of true flow on low-power embedded platforms (see the sketch after this entry).
arXiv Detail & Related papers (2021-12-13T16:27:17Z)
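A minimal sketch of local plane fitting on event timestamps, which yields "normal flow", the only flow component observable locally because of the aperture problem mentioned above. fARMS additionally aggregates normal flow across regions to recover the true flow; that step is not reproduced here.

```python
# Minimal sketch: normal flow from a plane fit to local event timestamps.
import numpy as np

def normal_flow_from_patch(xs, ys, ts):
    """Fit t = a*x + b*y + c to events in a patch (least squares).
    The timestamp gradient (a, b) has units s/px, so the observable speed
    along the gradient direction is 1/|(a, b)| px/s."""
    A = np.stack([xs, ys, np.ones_like(xs)], axis=1).astype(np.float64)
    (a, b, _), *_ = np.linalg.lstsq(A, ts.astype(np.float64), rcond=None)
    g2 = a * a + b * b
    if g2 < 1e-12:
        return np.zeros(2)          # flat patch: no observable motion
    return np.array([a, b]) / g2    # normal-flow vector, px/s
```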
- Meta-UDA: Unsupervised Domain Adaptive Thermal Object Detection using Meta-Learning [64.92447072894055]
Infrared (IR) cameras are robust under adverse illumination and lighting conditions.
We propose an algorithm-agnostic meta-learning framework to improve existing unsupervised domain adaptation (UDA) methods.
We produce a state-of-the-art thermal detector for the KAIST and DSIAC datasets.
arXiv Detail & Related papers (2021-10-07T02:28:18Z)
- Enabling energy efficient machine learning on a Ultra-Low-Power vision sensor for IoT [3.136861161060886]
This paper presents the development, analysis, and embedded implementation of a real-time detection, classification and tracking pipeline.
The power consumption during inference, which takes 8 ms, is 7.5 mW, i.e. 7.5 mW × 8 ms = 60 µJ of energy per inference.
arXiv Detail & Related papers (2021-02-02T06:39:36Z)