A Simple and Effective Point-based Network for Event Camera 6-DOFs Pose Relocalization
- URL: http://arxiv.org/abs/2403.19412v1
- Date: Thu, 28 Mar 2024 13:36:00 GMT
- Title: A Simple and Effective Point-based Network for Event Camera 6-DOFs Pose Relocalization
- Authors: Hongwei Ren, Jiadong Zhu, Yue Zhou, Haotian Fu, Yulong Huang, Bojun Cheng
- Abstract summary: Event cameras exhibit remarkable attributes such as high dynamic range, asynchronicity, and low latency.
These cameras implicitly capture movement and depth information in events, making them appealing sensors for Camera Pose Relocalization (CPR) tasks.
Existing CPR networks based on events neglect the pivotal fine-grained temporal information in events, resulting in unsatisfactory performance.
We introduce PEPNet, a simple and effective point-based network designed to regress six degrees of freedom (6-DOFs) event camera poses.
- Score: 6.691696783214036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras exhibit remarkable attributes such as high dynamic range, asynchronicity, and low latency, making them highly suitable for vision tasks that involve high-speed motion in challenging lighting conditions. These cameras implicitly capture movement and depth information in events, making them appealing sensors for Camera Pose Relocalization (CPR) tasks. Nevertheless, existing CPR networks based on events neglect the pivotal fine-grained temporal information in events, resulting in unsatisfactory performance. Moreover, the energy-efficient features are further compromised by the use of excessively complex models, hindering efficient deployment on edge devices. In this paper, we introduce PEPNet, a simple and effective point-based network designed to regress six degrees of freedom (6-DOFs) event camera poses. We rethink the relationship between the event camera and CPR tasks, leveraging the raw Point Cloud directly as network input to harness the high-temporal resolution and inherent sparsity of events. PEPNet is adept at abstracting the spatial and implicit temporal features through hierarchical structure and explicit temporal features by Attentive Bi-directional Long Short-Term Memory (A-Bi-LSTM). By employing a carefully crafted lightweight design, PEPNet delivers state-of-the-art (SOTA) performance on both indoor and outdoor datasets with meager computational resources. Specifically, PEPNet attains a significant 38% and 33% performance improvement on the random split IJRR and M3ED datasets, respectively. Moreover, the lightweight design version PEPNet$_{tiny}$ accomplishes results comparable to the SOTA while employing a mere 0.5% of the parameters.
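The abstract specifies the pipeline only at a high level, but its shape can be illustrated. Below is a minimal PyTorch sketch, not the authors' implementation: raw (x, y, t, p) events enter as a point cloud, a per-point MLP with coarse temporal grouping stands in for the hierarchical abstraction, and an attention-pooled Bi-LSTM (the A-Bi-LSTM idea) feeds separate translation and quaternion heads. All layer sizes, the grouping scheme, and the head design are assumptions.

```python
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    """Bi-LSTM whose per-step outputs are pooled with learned attention
    weights (a sketch of the 'Attentive' part named in the abstract)."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)  # one attention logit per step

    def forward(self, x):                      # x: (B, T, dim)
        h, _ = self.lstm(x)                    # (B, T, 2*hidden)
        w = torch.softmax(self.score(h), dim=1)
        return (w * h).sum(dim=1)              # attention-pooled summary

class PEPNetSketch(nn.Module):
    """Hypothetical reconstruction: per-event MLP -> temporal grouping ->
    A-Bi-LSTM -> separate translation / rotation heads."""
    def __init__(self, hidden=128, groups=32):
        super().__init__()
        self.groups = groups
        self.point_mlp = nn.Sequential(        # lifts raw (x, y, t, p) events
            nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, hidden))
        self.temporal = ABiLSTM(hidden, hidden)
        self.trans_head = nn.Linear(2 * hidden, 3)  # x, y, z
        self.rot_head = nn.Linear(2 * hidden, 4)    # unit quaternion

    def forward(self, events):                 # events: (B, N, 4), t-sorted
        B, N, _ = events.shape
        f = self.point_mlp(events)             # (B, N, hidden)
        # Stand-in for hierarchical abstraction: max-pool N events into
        # `groups` consecutive temporal bins.
        f = f.view(B, self.groups, N // self.groups, -1).amax(dim=2)
        g = self.temporal(f)                   # (B, 2*hidden)
        q = self.rot_head(g)
        return self.trans_head(g), q / q.norm(dim=-1, keepdim=True)

pose_t, pose_q = PEPNetSketch()(torch.rand(2, 1024, 4))  # smoke test
```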
Related papers
- Labits: Layered Bidirectional Time Surfaces Representation for Event Camera-based Continuous Dense Trajectory Estimation [1.3416369506987165]
Event cameras capture dynamic scenes with high temporal resolution and low latency.
We introduce Labits: Layered Bidirectional Time Surfaces, a simple yet elegant representation designed to retain all these features.
Our approach achieves an impressive 49% reduction in trajectory end-point error (TEPE) compared to the previous state-of-the-art on the MultiFlow dataset.
arXiv Detail & Related papers (2024-12-12T01:11:50Z)
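As context for the Labits entry above, here is a sketch of the classic exponentially decayed time surface, the per-pixel "most recent event" map that layered bidirectional time surfaces generalize. The decay constant is illustrative and the layering/bidirectional semantics of Labits are not reproduced here.

```python
import numpy as np

def time_surface(xs, ys, ts, shape, t_ref, tau=0.03):
    """Classic decayed time surface: each pixel holds an exponentially
    decayed 'freshness' of its most recent event before t_ref."""
    last = np.full(shape, -np.inf)        # timestamp of last event per pixel
    for x, y, t in zip(xs, ys, ts):
        if t <= t_ref:
            last[y, x] = max(last[y, x], t)
    return np.exp(-(t_ref - last) / tau)  # in (0, 1]; 0 where no event

# toy events: (x, y, t)
xs, ys, ts = np.array([1, 2, 1]), np.array([0, 1, 0]), np.array([0.01, 0.02, 0.025])
print(time_surface(xs, ys, ts, shape=(4, 4), t_ref=0.03))
```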
- EEPNet: Efficient Edge Pixel-based Matching Network for Cross-Modal Dynamic Registration between LiDAR and Camera [6.817117737186402]
Multisensor fusion is essential for autonomous vehicles to accurately perceive, analyze, and plan their trajectories within complex environments.
Current methods for registering LiDAR point clouds with images face significant challenges due to the inherent modality differences between point clouds and images, as well as computational overhead.
We propose EEPNet, an advanced network that leverages modality maps obtained from point cloud projections to enhance registration accuracy.
arXiv Detail & Related papers (2024-09-28T10:28:28Z)
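For the EEPNet entry above, a generic sketch of projecting a LiDAR point cloud into the camera image plane to form a sparse depth map; this is only the common projection step such registration pipelines share, not EEPNet's actual modality-map construction. The intrinsics K and extrinsics T are assumed given.

```python
import numpy as np

def project_to_map(points, K, T, hw):
    """Project LiDAR points (N, 3) into the camera plane as a sparse
    depth map. K: (3, 3) intrinsics; T: (4, 4) LiDAR-to-camera extrinsics."""
    H, W = hw
    pts_h = np.c_[points, np.ones(len(points))]       # homogeneous (N, 4)
    cam = (T @ pts_h.T).T[:, :3]                      # into camera frame
    cam = cam[cam[:, 2] > 0]                          # keep points in front
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)         # perspective divide
    depth = np.zeros((H, W))
    ok = (0 <= uv[:, 0]) & (uv[:, 0] < W) & (0 <= uv[:, 1]) & (uv[:, 1] < H)
    depth[uv[ok, 1], uv[ok, 0]] = cam[ok, 2]          # last-writer wins
    return depth

depth = project_to_map(np.random.rand(100, 3) * 10, np.eye(3), np.eye(4), (64, 64))
```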
- FAPNet: An Effective Frequency Adaptive Point-based Eye Tracker [0.6554326244334868]
Event cameras are an alternative to traditional cameras in the realm of eye tracking.
Existing event-based eye tracking networks neglect the pivotal sparse and fine-grained temporal information in events.
In this paper, we utilize Point Cloud as the event representation to harness the high temporal resolution and sparse characteristics of events in eye tracking tasks.
arXiv Detail & Related papers (2024-06-05T12:08:01Z)
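A minimal sketch of the representation the FAPNet entry above describes: raw events treated directly as a 3D point cloud in (x, y, t). Normalization to the unit cube and random fixed-size sampling are assumptions; the paper may sample differently.

```python
import numpy as np

def events_to_point_cloud(events, n_points=512, sensor_hw=(260, 346)):
    """Treat raw events (N, >=3 columns: x, y, t) as a 3D point cloud:
    normalize all axes to [0, 1] and sample a fixed-size set so a
    point-based network can consume it."""
    H, W = sensor_hw
    xyt = events[:, :3].astype(float)     # copy; caller's array untouched
    xyt[:, 0] /= W - 1
    xyt[:, 1] /= H - 1
    t0, t1 = xyt[:, 2].min(), xyt[:, 2].max()
    xyt[:, 2] = (xyt[:, 2] - t0) / max(t1 - t0, 1e-9)
    idx = np.random.choice(len(xyt), n_points, replace=len(xyt) < n_points)
    return xyt[idx]                       # (n_points, 3) in the unit cube
```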
- SpikePoint: An Efficient Point-based Spiking Neural Network for Event Cameras Action Recognition [11.178792888084692]
Spiking Neural Networks (SNNs) have gained significant attention due to their remarkable efficiency and fault tolerance.
We propose SpikePoint, a novel end-to-end point-based SNN architecture.
SpikePoint excels at processing sparse event cloud data, effectively extracting both global and local features.
arXiv Detail & Related papers (2023-10-11T04:38:21Z)
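For the SpikePoint entry above, a sketch of leaky integrate-and-fire (LIF) dynamics, the standard building block of such SNNs; SpikePoint's exact neuron model, reset rule, and surrogate gradient may differ.

```python
import torch

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Leaky integrate-and-fire dynamics over a spike train.
    inputs: (T, B, D) input currents; returns binary spikes (T, B, D)."""
    v = torch.zeros_like(inputs[0])       # membrane potential
    spikes = []
    for x in inputs:                      # iterate over time steps
        v = v + (x - v) / tau             # leaky integration
        s = (v >= v_th).float()           # fire when threshold is crossed
        v = v * (1.0 - s)                 # hard reset after a spike
        spikes.append(s)
    return torch.stack(spikes)

print(lif_forward(torch.rand(10, 1, 4)).mean())  # average firing rate
```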
- Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z)
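Useful background for the Robust e-NeRF entry above: the idealized event generation model that event-driven reconstruction methods build their supervision on. A pixel emits an event whenever its log intensity drifts by a contrast threshold C; the threshold value and toy signal below are illustrative.

```python
import numpy as np

def simulate_events(log_I, ts, C=0.2):
    """Idealized event generation model for one pixel: emit an event of
    polarity +/-1 whenever log intensity moves by C from the reference.
    log_I: (T,) log-intensity samples at times ts."""
    events, ref = [], log_I[0]
    for t, l in zip(ts[1:], log_I[1:]):
        while abs(l - ref) >= C:          # may emit several events per step
            pol = 1 if l > ref else -1
            ref += pol * C                # advance reference by one threshold
            events.append((t, pol))
    return events

ts = np.linspace(0, 1, 100)
print(simulate_events(np.log(1.5 + np.sin(4 * ts)), ts)[:5])
```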
- EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency.
We show that the system achieves a success rate of 81% in catching balls targeted at different locations, at velocities of up to 13 m/s, even on compute-constrained embedded platforms.
arXiv Detail & Related papers (2023-04-14T15:23:28Z)
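A sketch of one plausible reading of the Binary Event History Image (BEHI) from the EV-Catcher entry above: a window of events collapses to a single binary occupancy image, discarding polarity and timestamps for low-latency encoding. Exact details (window length, any decay) follow the paper, not this sketch.

```python
import numpy as np

def behi(xs, ys, shape):
    """Binary event history image: a pixel is 1 if it received at least
    one event in the time window, else 0."""
    img = np.zeros(shape, dtype=np.uint8)
    img[ys, xs] = 1                       # duplicates collapse to 1
    return img

print(behi(np.array([3, 3, 7]), np.array([1, 1, 5]), shape=(8, 8)).sum())  # -> 2
```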
- Recurrent Vision Transformers for Object Detection with Event Cameras [62.27246562304705]
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras.
RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection.
Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision.
arXiv Detail & Related papers (2022-12-11T20:28:59Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- PIDNet: An Efficient Network for Dynamic Pedestrian Intrusion Detection [22.316826418265666]
Vision-based dynamic pedestrian intrusion detection (PID), judging from a moving camera whether pedestrians intrude into an area-of-interest (AoI), is an important task in mobile surveillance.
We propose a novel and efficient multi-task deep neural network, PIDNet, to solve this problem.
PIDNet is mainly designed by considering two factors: accurately segmenting the dynamically changing AoIs from a video frame captured by the moving camera and quickly detecting pedestrians from the generated AoI-contained areas.
arXiv Detail & Related papers (2020-09-01T09:34:43Z)
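The intrusion judgment in the PIDNet entry above ultimately combines the segmented AoI with the detected pedestrians; one simple combination rule (an assumption, not the paper's) is to test whether a detected box's foot point lies inside the AoI mask:

```python
import numpy as np

def intrudes(aoi_mask, box):
    """Toy intrusion test: does the bottom-center of a pedestrian box fall
    inside the segmented AoI? aoi_mask: (H, W) bool; box: (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    u, v = int((x1 + x2) / 2), int(y2)    # foot point of the pedestrian
    H, W = aoi_mask.shape
    return bool(aoi_mask[min(v, H - 1), min(u, W - 1)])

mask = np.zeros((100, 100), dtype=bool); mask[60:, :] = True  # AoI = lower band
print(intrudes(mask, (40, 20, 60, 80)))   # foot at (50, 80) -> True
```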
- Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes [98.65457534223539]
We propose a real-time high-performance DCNN-based method for robust semantic segmentation of urban street scenes.
The proposed method achieves 73.6% and 68.0% mean Intersection over Union (mIoU) at inference speeds of 51.0 fps and 39.3 fps, respectively.
arXiv Detail & Related papers (2020-03-11T08:45:53Z)
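For reference, the mIoU metric quoted in the segmentation entry above is per-class intersection over union averaged across classes; a minimal sketch:

```python
import numpy as np

def mean_iou(pred, gt, n_classes):
    """Mean Intersection over Union: per-class IoU = TP / (TP + FP + FN),
    averaged over classes present in either prediction or ground truth."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                     # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 1], [1, 2]]); gt = np.array([[0, 1], [2, 2]])
print(f"mIoU = {mean_iou(pred, gt, 3):.3f}")
```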
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.