Event-Based Visual Odometry on Non-Holonomic Ground Vehicles
- URL: http://arxiv.org/abs/2401.09331v1
- Date: Wed, 17 Jan 2024 16:52:20 GMT
- Title: Event-Based Visual Odometry on Non-Holonomic Ground Vehicles
- Authors: Wanting Xu, Si'ao Zhang, Li Cui, Xin Peng, Laurent Kneip
- Abstract summary: Event-based visual odometry is shown to be reliable and robust in challenging illumination scenarios.
Our algorithm achieves accurate estimates of the vehicle's rotational velocity and thus results that are comparable to the delta rotations obtained by frame-based sensors under normal conditions.
- Score: 20.847519645153337
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the promise of superior performance under challenging conditions,
event-based motion estimation remains a hard problem owing to the difficulty of
extracting and tracking stable features from event streams. In order to
robustify the estimation, it is generally believed that fusion with other
sensors is a requirement. In this work, we demonstrate reliable, purely
event-based visual odometry on planar ground vehicles by employing the
constrained non-holonomic motion model of Ackermann steering platforms. We
extend single feature n-linearities for regular frame-based cameras to the case
of quasi time-continuous event-tracks, and achieve a polynomial form via
variable degree Taylor expansions. Robust averaging over multiple event tracks
is simply achieved via histogram voting. As demonstrated on both simulated and
real data, our algorithm achieves accurate and robust estimates of the
vehicle's instantaneous rotational velocity, and thus results that are
comparable to the delta rotations obtained by frame-based sensors under normal
conditions. We furthermore significantly outperform the more traditional
alternatives in challenging illumination scenarios. The code is available at
https://github.com/gowanting/NHEVO.
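The robust-averaging step lends itself to a compact sketch. Below, each event track is assumed to have already produced one estimate of the vehicle's instantaneous yaw rate (in the paper these come from the polynomial single-track constraints); histogram voting then returns the mode of the estimates rather than their mean, which suppresses outlier tracks. Function names and the bin width are illustrative and not taken from the released code.

```python
import numpy as np

def histogram_vote(per_track_omegas, bin_width=0.002):
    """Robust averaging of per-track yaw-rate estimates by histogram voting.

    Each event track contributes one estimate of the vehicle's instantaneous
    rotational velocity (rad/s); taking the mode of the binned estimates,
    rather than their mean, suppresses outlier tracks.
    """
    omegas = np.asarray(per_track_omegas, dtype=float)
    span = omegas.max() - omegas.min()
    n_bins = max(1, int(np.ceil(span / bin_width)))
    counts, edges = np.histogram(omegas, bins=n_bins)
    k = int(np.argmax(counts))
    # Refine by averaging only the estimates that landed in the winning bin.
    in_bin = (omegas >= edges[k]) & (omegas <= edges[k + 1])
    return float(omegas[in_bin].mean())

# Hypothetical usage: 200 consistent tracks near 0.1 rad/s plus 40 outliers.
rng = np.random.default_rng(0)
estimates = np.concatenate([rng.normal(0.1, 0.005, 200), rng.uniform(-1.0, 1.0, 40)])
print(histogram_vote(estimates))  # close to 0.1
```

Taking the mode instead of the mean means a minority of grossly wrong tracks cannot bias the result, at the cost of picking a bin width matched to the expected yaw-rate noise.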
Related papers
- AsynEIO: Asynchronous Monocular Event-Inertial Odometry Using Gaussian Process Regression [7.892365588256595]
We introduce a monocular event-inertial odometry method called AsynEIO, designed to fuse asynchronous event and inertial data.
We show that AsynEIO outperforms existing methods, especially in high-speed and low-illumination scenarios.
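Gaussian-process regression is what lets asynchronous event and inertial data be evaluated on a common, continuous time axis. The snippet below is a generic GP interpolator with an RBF kernel, included only to illustrate that continuous-time idea; it is not AsynEIO's actual formulation, and all names and hyperparameters are hypothetical.

```python
import numpy as np

def gp_interpolate(t_train, y_train, t_query, ell=0.05, sigma_n=1e-2):
    """Posterior mean of a GP with an RBF kernel: evaluates an
    asynchronously sampled signal (e.g., one gyro axis) at arbitrary
    query times, such as event timestamps."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(t_train, t_train) + sigma_n ** 2 * np.eye(len(t_train))
    return k(t_query, t_train) @ np.linalg.solve(K, y_train)

# Hypothetical usage: a gyro axis sampled at 100 Hz, queried at event times.
t_imu = np.linspace(0.0, 0.1, 11)
gyro = np.sin(20.0 * t_imu)
print(gp_interpolate(t_imu, gyro, np.array([0.013, 0.042])))
```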
arXiv Detail & Related papers (2024-11-19T02:39:57Z)
- EVIT: Event-based Visual-Inertial Tracking in Semi-Dense Maps Using Windowed Nonlinear Optimization [19.915476815328294]
Event cameras are interesting visual exteroceptive sensors that react to brightness changes rather than integrating absolute image intensities.
This paper proposes the addition of inertial signals in order to robustify the estimation.
Our evaluation focuses on a diverse set of real world sequences and comprises a comparison of our proposed method against a purely event-based alternative running at different rates.
arXiv Detail & Related papers (2024-08-02T16:24:55Z)
- Event-Aided Time-to-Collision Estimation for Autonomous Driving [28.13397992839372]
We present a novel method that estimates the time to collision using a neuromorphic event-based camera.
The proposed algorithm follows a two-step approach for efficient and accurate geometric model fitting on event data.
Experiments on both synthetic and real data demonstrate the effectiveness of the proposed method.
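The summary does not spell out the geometric model, so the sketch below falls back on the classical relation tau = s / (ds/dt) between time to collision and the rate of image expansion; it illustrates the quantity being estimated, not the authors' two-step fitting on event data, and all names are hypothetical.

```python
import numpy as np

def ttc_from_expansion(sizes, times):
    """Time to collision from image expansion: for constant approach speed
    toward a frontal surface, tau = s / s_dot, where s is the apparent
    object size. s_dot is taken here as a least-squares slope of s(t)."""
    sizes = np.asarray(sizes, dtype=float)
    times = np.asarray(times, dtype=float)
    s_dot, _ = np.polyfit(times, sizes, 1)  # slope of size over time
    return sizes[-1] / s_dot

# Hypothetical usage: an object whose image doubles in size within 0.5 s.
print(ttc_from_expansion([100.0, 120.0, 150.0, 200.0], [0.0, 0.2, 0.35, 0.5]))
```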
arXiv Detail & Related papers (2024-07-10T02:37:36Z)
- XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a novel driving view synthesis dataset and benchmark specifically designed for autonomous driving simulations.
The dataset is unique as it includes testing images captured by deviating from the training trajectory by 1-4 meters.
We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multi-camera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z)
- NAVSIM: Data-Driven Non-Reactive Autonomous Vehicle Simulation and Benchmarking [65.24988062003096]
We present NAVSIM, a framework for benchmarking vision-based driving policies.
Our simulation is non-reactive, i.e., the evaluated policy and environment do not influence each other.
NAVSIM enabled a new competition held at CVPR 2024, where 143 teams submitted 463 entries, resulting in several new insights.
arXiv Detail & Related papers (2024-06-21T17:59:02Z)
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, making better use of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- Tight Fusion of Events and Inertial Measurements for Direct Velocity Estimation [20.002238735553792]
We propose a novel solution to tight visual-inertial fusion directly at the level of first-order kinematics by employing a dynamic vision sensor instead of a normal camera.
We demonstrate how velocity estimates in highly dynamic situations can be obtained over short time intervals.
Experiments on both simulated and real data demonstrate that the proposed tight event-inertial fusion leads to continuous and reliable velocity estimation.
arXiv Detail & Related papers (2024-01-17T15:56:57Z)
- Asynchronous Blob Tracker for Event Cameras [5.64242497932732]
Event-based cameras are popular for tracking fast-moving objects due to their high temporal resolution, low latency, and high dynamic range.
We propose a novel algorithm for tracking blobs using raw events asynchronously in real time.
Our algorithm achieves highly accurate blob tracking, velocity estimation, and shape estimation even under challenging lighting conditions.
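As a rough illustration of frame-free tracking, the toy tracker below refreshes a blob's centroid and velocity with an exponential forgetting factor on every incoming event, so there is no fixed update rate anywhere in the loop. It is a generic sketch under that assumption, not the paper's estimator.

```python
import numpy as np

class AsyncBlob:
    """Toy frame-free blob tracker: state is refreshed on every event.

    Centroid and velocity are exponential moving averages updated per
    incoming event, with no fixed frame rate anywhere in the loop."""
    def __init__(self, x, y, alpha=0.05):
        self.pos = np.array([x, y], dtype=float)
        self.vel = np.zeros(2)
        self.t = 0.0
        self.alpha = alpha  # per-event forgetting factor

    def update(self, x, y, t):
        dt = max(t - self.t, 1e-9)
        new_pos = self.pos + self.alpha * (np.array([x, y]) - self.pos)
        # Velocity as a moving average of the centroid's own motion.
        self.vel = (1 - self.alpha) * self.vel + self.alpha * (new_pos - self.pos) / dt
        self.pos, self.t = new_pos, t

# Hypothetical usage: events from a blob drifting right at 100 px/s.
blob = AsyncBlob(10.0, 10.0)
for k in range(1, 501):
    blob.update(10.0 + 0.1 * k, 10.0, k * 1e-3)  # one event per millisecond
print(blob.pos, blob.vel)  # centroid near (58, 10), velocity near (100, 0)
```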
arXiv Detail & Related papers (2023-07-20T05:15:03Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely exploited both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, that handles missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit multi-modality features from related sensors and also learn from historical time-series dynamics to reconstruct the data under extreme missingness.
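A toy rendering of the feature-axis neighbour idea: impute a missing sensor reading from the sensors that historically track it most closely. DynImp learns such relations within a model; the hand-crafted nearest-neighbour rule and all names below are illustrative stand-ins.

```python
import numpy as np

def impute_feature_neighbors(X, k=2):
    """Fill NaNs in a (time x sensor) matrix from nearest sensors.

    Sensor-to-sensor distance is the mean absolute difference over time
    steps where both are observed; a missing reading is replaced by the
    average of its k most similar sensors at the same time step."""
    X = np.asarray(X, dtype=float).copy()
    D = X.shape[1]
    dist = np.full((D, D), np.inf)
    for i in range(D):
        for j in range(D):
            both = ~np.isnan(X[:, i]) & ~np.isnan(X[:, j])
            if i != j and both.any():
                dist[i, j] = np.mean(np.abs(X[both, i] - X[both, j]))
    for t, j in zip(*np.where(np.isnan(X))):
        neighbours = np.argsort(dist[j])[:k]
        vals = X[t, neighbours]
        vals = vals[~np.isnan(vals)]
        if vals.size:
            X[t, j] = vals.mean()
    return X

# Hypothetical usage: sensor 1 shadows sensor 0; its dropout at t=1 is
# filled from sensor 0 rather than the unrelated sensor 2.
X = np.array([[1.0, 1.1, 5.0], [2.0, np.nan, 5.0], [3.0, 3.1, 5.0]])
print(impute_feature_neighbors(X, k=1))
```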
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras report brightness changes as a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
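To make the feed-forward versus recurrent distinction concrete, here is a minimal ConvGRU-style cell that carries hidden scene state across successive event tensors, which is what lets a recurrent network integrate evidence over time instead of judging each slice in isolation. The 5-channel event representation and all layer sizes are assumptions; this is a generic sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RecurrentDepthNet(nn.Module):
    """Minimal ConvGRU-style cell with a log-depth prediction head.
    Hidden state h accumulates scene evidence across event tensors."""
    def __init__(self, in_ch=5, hid_ch=16):
        super().__init__()
        self.zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
        self.head = nn.Conv2d(hid_ch, 1, 3, padding=1)

    def forward(self, events, h):
        z, r = torch.sigmoid(self.zr(torch.cat([events, h], 1))).chunk(2, 1)
        h_cand = torch.tanh(self.cand(torch.cat([events, r * h], 1)))
        h = (1 - z) * h + z * h_cand  # gated update of the hidden state
        return self.head(h), h

# Hypothetical usage: four successive event tensors from one stream.
net = RecurrentDepthNet()
h = torch.zeros(1, 16, 32, 32)
for _ in range(4):
    depth, h = net(torch.randn(1, 5, 32, 32), h)
print(depth.shape)  # torch.Size([1, 1, 32, 32])
```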
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.