Eventor: An Efficient Event-Based Monocular Multi-View Stereo
Accelerator on FPGA Platform
- URL: http://arxiv.org/abs/2203.15439v1
- Date: Tue, 29 Mar 2022 11:13:36 GMT
- Authors: Mingjun Li, Jianlei Yang, Yingjie Qi, Meng Dong, Yuhao Yang, Runze
Liu, Weitao Pan, Bei Yu, Weisheng Zhao
- Abstract summary: Event cameras are bio-inspired vision sensors that asynchronously represent pixel-level brightness changes as event streams.
EMVS is a technique that exploits the event streams to estimate semi-dense 3D structure with known trajectory.
In this paper, Eventor is proposed as a fast and efficient EMVS accelerator by realizing the most critical and time-consuming stages.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are bio-inspired vision sensors that asynchronously represent
pixel-level brightness changes as event streams. Event-based monocular
multi-view stereo (EMVS) is a technique that exploits the event streams to
estimate semi-dense 3D structure with known trajectory. It is a critical task
for event-based monocular SLAM. However, the required intensive computation
workloads make it challenging for real-time deployment on embedded platforms.
In this paper, Eventor is proposed as a fast and efficient EMVS accelerator that
realizes the most critical and time-consuming stages, namely event
back-projection and volumetric ray-counting, on FPGA. Highly parallel and
fully pipelined processing elements are specially designed on FPGA and
integrated with the embedded ARM core as a heterogeneous system to improve
throughput and reduce the memory footprint. Meanwhile, the EMVS algorithm is
reformulated in a more hardware-friendly manner through rescheduling,
approximate computing and hybrid data quantization. Evaluation results on the
DAVIS dataset show that Eventor achieves up to $24\times$ improvement in
energy efficiency compared with an Intel i5 CPU platform.
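For intuition, the two accelerated stages (event back-projection and volumetric ray-counting) can be sketched in plain software. This is a minimal, illustrative sketch, not Eventor's implementation: the intrinsics, sensor resolution, depth planes and identity-rotation poses below are assumptions made for the example.

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx, fy, cx, cy) -- illustrative values only.
FX, FY, CX, CY = 200.0, 200.0, 120.0, 90.0
W, H = 240, 180                        # DAVIS-like sensor resolution
DEPTHS = np.linspace(0.5, 5.0, 50)     # candidate depth planes (metres)

def back_project(x, y, z):
    """Back-project pixel (x, y) onto the plane at depth z (camera frame)."""
    return np.array([(x - CX) / FX * z, (y - CY) / FY * z, z])

def ray_count(events, cam_positions):
    """Accumulate a Disparity Space Image (DSI): for every event, count how
    many viewing rays pass through each voxel of a reference volume.
    `events` is an (N, 2) array of pixel coordinates; `cam_positions` gives
    the (known) camera translation at each event. Rotation is assumed to be
    identity here for brevity -- a real EMVS pipeline uses the full pose."""
    dsi = np.zeros((len(DEPTHS), H, W), dtype=np.uint32)
    for (x, y), c in zip(events, cam_positions):
        for k, z in enumerate(DEPTHS):
            # 3D point on depth plane z, shifted into the reference frame.
            p = back_project(x, y, z) + c
            # Re-project into the reference view and cast a vote.
            u = int(round(p[0] / p[2] * FX + CX))
            v = int(round(p[1] / p[2] * FY + CY))
            if 0 <= u < W and 0 <= v < H:
                dsi[k, v, u] += 1
    return dsi

# Two events seen from slightly different positions: their viewing rays
# intersect near the true depth, so the vote count peaks there.
events = np.array([[120, 90], [118, 90]])
cam_pos = np.array([[0.0, 0.0, 0.0], [0.02, 0.0, 0.0]])
dsi = ray_count(events, cam_pos)
k, v, u = np.unravel_index(np.argmax(dsi), dsi.shape)
print("peak votes:", dsi.max(), "at pixel", (u, v))
```

Local maxima of the DSI then yield the semi-dense depth map; the inner voting loop is the data-parallel, memory-bound part that the paper maps to pipelined FPGA processing elements.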
Related papers
- Event-based vision on FPGAs -- a survey
Field-programmable gate arrays (FPGAs) have enabled fast, even real-time, data processing with high energy efficiency.
This paper gives an overview of the most important works, where FPGAs have been used in different contexts to process event data.
It covers applications in the following areas: filtering, stereovision, optical flow, acceleration of AI-based algorithms for object classification, detection and tracking, and applications in robotics and inspection systems.
arXiv Detail & Related papers (2024-07-11T10:07:44Z)
- SWAT: Scalable and Efficient Window Attention-based Transformers Acceleration on FPGAs
Sliding-window-based static sparse attention mitigates the quadratic cost of self-attention over long inputs by limiting the attention scope of each token.
We propose a dataflow-aware FPGA-based accelerator design, SWAT, that efficiently leverages the sparsity to achieve scalable performance for long input.
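The static sparsity pattern that such accelerators exploit can be illustrated with a banded attention mask. This is a generic sketch of sliding-window attention, not SWAT's actual dataflow; the token count and window size are arbitrary assumptions.

```python
import numpy as np

def sliding_window_mask(n_tokens, window):
    """Boolean mask where token i may attend only to tokens j with
    |i - j| <= window // 2 -- a static, hardware-friendly sparsity
    pattern, since the nonzero band is known at design time."""
    idx = np.arange(n_tokens)
    return np.abs(idx[:, None] - idx[None, :]) <= window // 2

mask = sliding_window_mask(8, 4)
dense_pairs = 8 * 8
sparse_pairs = int(mask.sum())
print(f"attended pairs: {sparse_pairs}/{dense_pairs}")
```

Because the band is fixed, the attended-pair count grows linearly with sequence length instead of quadratically, which is what makes long inputs tractable on FPGA.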
arXiv Detail & Related papers (2024-05-27T10:25:08Z)
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts
M$^3$ViT is the latest multi-task ViT model that introduces mixture-of-experts (MoE).
MoE achieves better accuracy and over 80% computation reduction, but leaves challenges for efficient deployment on FPGA.
Our work, dubbed Edge-MoE, solves the challenges to introduce the first end-to-end FPGA accelerator for multi-task ViT with a collection of architectural innovations.
arXiv Detail & Related papers (2023-05-30T02:24:03Z)
- HARFLOW3D: A Latency-Oriented 3D-CNN Accelerator Toolflow for HAR on FPGA Devices
This study introduces a novel streaming-architecture-based toolflow for mapping 3D Convolutional Neural Networks onto FPGAs.
The HARFLOW3D toolflow takes as input a 3D CNN in ONNX format and a description of the FPGA characteristics.
The ability of the toolflow to support a broad range of models and devices is shown through a number of experiments on various 3D CNN and FPGA system pairs.
arXiv Detail & Related papers (2023-03-30T08:25:27Z)
- A FPGA-based architecture for real-time cluster finding in the LHCb silicon pixel detector
This article describes a custom VHDL firmware implementation of a two-dimensional cluster-finder architecture for reconstructing hit positions in the new VELO detector.
The pre-processing allows the first level of the software trigger to accept an 11% higher rate of events.
It additionally allows the raw pixel data to be dropped at the readout level, thus saving approximately 14% of the DAQ bandwidth.
arXiv Detail & Related papers (2023-02-08T10:08:34Z)
- RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer
We propose RTFormer, an efficient dual-resolution transformer for real-time semantic segmentation.
It achieves better trade-off between performance and efficiency than CNN-based models.
Experiments on mainstream benchmarks demonstrate the effectiveness of our proposed RTFormer.
arXiv Detail & Related papers (2022-10-13T16:03:53Z)
- Asynchronous Optimisation for Event-based Visual Odometry
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- VAQF: Fully Automatic Software-hardware Co-design Framework for Low-bit Vision Transformer
We propose VAQF, a framework that builds inference accelerators on FPGA platforms for quantized Vision Transformers (ViTs).
Given the model structure and the desired frame rate, VAQF will automatically output the required quantization precision for activations.
This is the first time quantization has been incorporated into ViT acceleration on FPGAs.
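The precision choice such a flow makes can be illustrated with uniform symmetric quantization of activations. This is a generic textbook sketch, not VAQF's actual search procedure; the bit-width and activation values below are made up for the example.

```python
import numpy as np

def quantize(x, n_bits):
    """Uniform symmetric quantization of activations to n_bits.
    Returns integer codes and the scale needed to dequantize."""
    scale = np.max(np.abs(x)) / (2 ** (n_bits - 1) - 1)
    q = np.round(x / scale).astype(np.int32)
    return q, scale

acts = np.array([0.12, -0.50, 0.33, 0.99])   # toy activation tensor
q, s = quantize(acts, 8)                     # 8-bit codes in [-127, 127]
recon = q * s                                # dequantized values
max_err = float(np.max(np.abs(recon - acts)))
print("max reconstruction error:", max_err)
```

Lowering `n_bits` shrinks the accelerator's multipliers and buffers but raises the reconstruction error; a co-design flow searches for the smallest precision that still meets the target accuracy and frame rate.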
arXiv Detail & Related papers (2022-01-17T20:27:52Z)
- hARMS: A Hardware Acceleration Architecture for Real-Time Event-Based Optical Flow
Event-based vision sensors produce asynchronous event streams with high temporal resolution based on changes in the visual scene.
Existing solutions for calculating optical flow from event data fail to capture the true direction of motion due to the aperture problem.
We present a hardware realization of the fARMS algorithm allowing for real-time computation of true flow on low-power, embedded platforms.
arXiv Detail & Related papers (2021-12-13T16:27:17Z)
- iELAS: An ELAS-Based Energy-Efficient Accelerator for Real-Time Stereo Matching on FPGA Platform
We propose an energy-efficient architecture for real-time ELAS-based stereo matching on FPGA platform.
Our FPGA realization achieves up to 38.4x and 3.32x frame rate improvement, and up to 27.1x and 1.13x energy efficiency improvement, respectively.
arXiv Detail & Related papers (2021-04-11T21:22:54Z)
- Event-based Asynchronous Sparse Convolutional Networks
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
arXiv Detail & Related papers (2020-03-20T08:39:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.