A 5-Point Minimal Solver for Event Camera Relative Motion Estimation
- URL: http://arxiv.org/abs/2309.17054v1
- Date: Fri, 29 Sep 2023 08:30:18 GMT
- Title: A 5-Point Minimal Solver for Event Camera Relative Motion Estimation
- Authors: Ling Gao and Hang Su and Daniel Gehrig and Marco Cannici and Davide
Scaramuzza and Laurent Kneip
- Abstract summary: We introduce a novel minimal 5-point solver that estimates line parameters and linear camera velocity projections, which can be fused into a single, averaged linear velocity when considering multiple lines.
Our method consistently achieves a 100% success rate in estimating linear velocity where existing closed-form solvers only achieve between 23% and 70%.
- Score: 47.45081895021988
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event-based cameras are ideal for line-based motion estimation, since they
predominantly respond to edges in the scene. However, accurately determining
the camera displacement based on events continues to be an open problem. This
is because line feature extraction and dynamics estimation are tightly coupled
when using event cameras, and no precise model is currently available for
describing the complex structures generated by lines in the space-time volume
of events. We solve this problem by deriving the correct non-linear
parametrization of such manifolds, which we term eventails, and demonstrate its
application to event-based linear motion estimation, with known rotation from
an Inertial Measurement Unit. Using this parametrization, we introduce a novel
minimal 5-point solver that jointly estimates line parameters and linear camera
velocity projections, which can be fused into a single, averaged linear
velocity when considering multiple lines. We demonstrate on both synthetic and
real data that our solver generates more stable relative motion estimates than
other methods while capturing more inliers than clustering based on
spatio-temporal planes. In particular, our method consistently achieves a 100%
success rate in estimating linear velocity where existing closed-form solvers
only achieve between 23% and 70%. The proposed eventails contribute to a better
understanding of spatio-temporal event-generated geometries and we thus believe
it will become a core building block of future event-based motion estimation
algorithms.
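The abstract states that each line yields a projection of the linear camera velocity, and that multiple such projections are fused into a single averaged velocity. A minimal sketch of that fusion step, under the assumption that each line i constrains the unknown velocity v through a scalar projection d_i . v = p_i onto a known unit direction d_i (the function name and interface are illustrative, not the paper's actual API):

```python
import numpy as np

def fuse_velocity_projections(directions, projections):
    """Fuse per-line linear-velocity projections into one velocity estimate.

    Assumption (illustrative, not taken from the paper's implementation):
    each of the N lines constrains the unknown camera velocity v through a
    scalar projection d_i . v = p_i onto a known unit direction d_i.
    Stacking the N constraints gives an overdetermined linear system
    D v = p, solved here in the least-squares sense.
    """
    D = np.asarray(directions, dtype=float)   # (N, 3) unit direction vectors
    p = np.asarray(projections, dtype=float)  # (N,)   measured projections
    v, *_ = np.linalg.lstsq(D, p, rcond=None)
    return v

# Synthetic check: recover a known velocity from noiseless projections
# of five random line directions (at least three independent directions
# are needed to determine the 3-vector v).
rng = np.random.default_rng(0)
v_true = np.array([0.3, -0.1, 1.2])
dirs = rng.normal(size=(5, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
projs = dirs @ v_true
v_est = fuse_velocity_projections(dirs, projs)
```

With noisy projections, the same least-squares stack naturally averages the per-line measurements, which matches the abstract's description of fusing multiple lines into one averaged linear velocity.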
Related papers
- ESVO2: Direct Visual-Inertial Odometry with Stereo Event Cameras [33.81592783496106]
Event-based visual odometry aims at solving tracking and mapping sub-problems in parallel.
We build an event-based stereo visual-inertial odometry system on top of our previous direct pipeline Event-based Stereo Visual Odometry.
arXiv Detail & Related papers (2024-10-12T05:35:27Z)
- Event-Aided Time-to-Collision Estimation for Autonomous Driving [28.13397992839372]
We present a novel method that estimates the time to collision using a neuromorphic event-based camera.
The proposed algorithm consists of a two-step approach for efficient and accurate geometric model fitting on event data.
Experiments on both synthetic and real data demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-07-10T02:37:36Z)
- An N-Point Linear Solver for Line and Motion Estimation with Event Cameras [45.67822962085412]
Event cameras respond primarily to edges, formed by strong gradients, which are well suited for motion estimation.
Recent work has shown that events generated by single lines satisfy a novel constraint which describes a manifold in space-time volume.
We show that, with suitable line parametrization, this system of constraints is actually linear in the unknowns.
arXiv Detail & Related papers (2024-04-01T00:47:02Z)
- Tight Fusion of Events and Inertial Measurements for Direct Velocity Estimation [20.002238735553792]
We propose a novel solution to tight visual-inertial fusion directly at the level of first-order kinematics by employing a dynamic vision sensor instead of a normal camera.
We demonstrate how velocity estimates in highly dynamic situations can be obtained over short time intervals.
Experiments on both simulated and real data demonstrate that the proposed tight event-inertial fusion leads to continuous and reliable velocity estimation.
arXiv Detail & Related papers (2024-01-17T15:56:57Z)
- Vanishing Point Estimation in Uncalibrated Images with Prior Gravity Direction [82.72686460985297]
We tackle the problem of estimating a Manhattan frame.
We derive two new 2-line solvers, one of which does not suffer from singularities affecting existing solvers.
We also design a new non-minimal method, running on an arbitrary number of lines, to boost the performance in local optimization.
arXiv Detail & Related papers (2023-08-21T13:03:25Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO)
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Continuous Event-Line Constraint for Closed-Form Velocity Initialization [0.0]
Event cameras trigger events asynchronously and independently upon a sufficient change of the logarithmic brightness level.
We propose the continuous event-line constraint, which relies on a constant-velocity motion assumption as well as trifocal geometry in order to express a relationship between line observations given by event clusters as well as first-order camera dynamics.
arXiv Detail & Related papers (2021-09-09T14:39:56Z)
- TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency and robustness of conventional monocular SLAM systems.
Our approach can be up to 10x faster with comparable accuracy against the state of the art on public benchmarks.
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.