Event-based Photometric Bundle Adjustment
- URL: http://arxiv.org/abs/2412.14111v1
- Date: Wed, 18 Dec 2024 17:58:16 GMT
- Title: Event-based Photometric Bundle Adjustment
- Authors: Shuang Guo, Guillermo Gallego
- Abstract summary: Event-based Photometric Bundle Adjustment (EPBA) is the first event-only photometric bundle adjustment method.
EPBA is effective in decreasing the photometric error (by up to 90%).
Experiments on modern high-resolution event cameras show the applicability of EPBA to panoramic imaging.
- Score: 12.504055397619727
- Abstract: We tackle the problem of bundle adjustment (i.e., simultaneous refinement of camera poses and scene map) for a purely rotating event camera. Starting from first principles, we formulate the problem as a classical non-linear least squares optimization. The photometric error is defined using the event generation model directly in the camera rotations and the semi-dense scene brightness that triggers the events. We leverage the sparsity of event data to design a tractable Levenberg-Marquardt solver that handles the very large number of variables involved. To the best of our knowledge, our method, which we call Event-based Photometric Bundle Adjustment (EPBA), is the first event-only photometric bundle adjustment method that works on the brightness map directly and exploits the space-time characteristics of event data, without having to convert events into image-like representations. Comprehensive experiments on both synthetic and real-world datasets demonstrate EPBA's effectiveness in decreasing the photometric error (by up to 90%), yielding results of unparalleled quality. The refined maps reveal details that were hidden using prior state-of-the-art rotation-only estimation methods. The experiments on modern high-resolution event cameras show the applicability of EPBA to panoramic imaging in various scenarios (without map initialization, at multiple resolutions, and in combination with other methods, such as IMU dead reckoning or previous event-based rotation estimation methods). We make the source code publicly available. https://github.com/tub-rip/epba
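As a rough illustration of the formulation described above (and not the authors' implementation, which is available at the repository linked in the abstract), the following Python sketch refines a panoramic brightness map from synthetic events by nonlinear least squares, declaring the per-event sparsity of the Jacobian so the solve stays tractable. The yaw-only motion, 1D map, map-only refinement (poses held fixed), and all constants are simplifying assumptions, and SciPy's trust-region solver stands in for the paper's tailored Levenberg-Marquardt solver.

```python
# Minimal sketch of event-only photometric bundle adjustment on a toy problem.
# NOT the EPBA implementation: we assume yaw-only rotation, a 1D panoramic
# log-brightness map, known event angles, and map-only refinement so the
# example stays short. N_MAP and N_EV are illustrative values.
import numpy as np
from scipy.optimize import least_squares
from scipy.sparse import lil_matrix

N_MAP = 64     # brightness-map bins covering 360 degrees
N_EV = 2000    # number of synthetic events

def interp_indices(theta):
    """Linear-interpolation bins and weights on the circular 1D map."""
    u = (theta % (2 * np.pi)) / (2 * np.pi) * N_MAP
    i0 = np.floor(u).astype(int) % N_MAP
    return i0, (i0 + 1) % N_MAP, u - np.floor(u)

def sample_map(m, theta):
    i0, i1, a = interp_indices(theta)
    return (1 - a) * m[i0] + a * m[i1]

def residuals(m, th_prev, th_curr, dL_meas):
    """Photometric error per event: map increment along the event's angular
    displacement minus the measured brightness increment."""
    r = sample_map(m, th_curr) - sample_map(m, th_prev) - dL_meas
    return np.append(r, m.mean())  # extra residual fixes the gauge (map offset)

# Synthetic ground-truth map and events. In real data the measured increment
# would be p * C (polarity times contrast threshold) by the event generation
# model; here it is synthesized from the ground-truth map plus noise.
rng = np.random.default_rng(0)
grid = np.linspace(0, 2 * np.pi, N_MAP, endpoint=False)
gt = np.sin(grid) + 0.3 * np.sin(3 * grid)
th_prev = rng.uniform(0, 2 * np.pi, N_EV)
th_curr = th_prev + 0.05                       # small inter-event rotation
dL = sample_map(gt, th_curr) - sample_map(gt, th_prev)
dL += 0.01 * rng.standard_normal(N_EV)         # sensor noise

# Each residual touches only ~4 map bins; declaring that sparsity keeps the
# solver tractable, mirroring the sparsity argument in the abstract.
S = lil_matrix((N_EV + 1, N_MAP), dtype=int)
rows = np.arange(N_EV)
for th in (th_prev, th_curr):
    i0, i1, _ = interp_indices(th)
    S[rows, i0] = 1
    S[rows, i1] = 1
S[N_EV, :] = 1                                 # gauge residual touches all bins

sol = least_squares(residuals, np.zeros(N_MAP), jac_sparsity=S,
                    args=(th_prev, th_curr, dL), method="trf")
print("map RMSE:", np.sqrt(np.mean((sol.x - (gt - gt.mean())) ** 2)))
```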
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Event-based Mosaicing Bundle Adjustment [12.504055397619727]
We tackle the problem of mosaicing bundle adjustment (i.e., simultaneous refinement of camera orientations and scene map) for a purely rotating event camera.
We show that this BA optimization has an exploitable block-diagonal sparsity structure, so that the problem can be solved efficiently.
We evaluate our method, called EMBA, on both synthetic and real-world datasets to show its effectiveness.
arXiv Detail & Related papers (2024-09-11T15:53:01Z)
- CMax-SLAM: Event-based Rotational-Motion Bundle Adjustment and SLAM System using Contrast Maximization [14.771885020122062]
Event cameras are bio-inspired visual sensors that capture pixel-wise intensity changes and output asynchronous event streams.
This paper considers the problem of rotational motion estimation using event cameras.
Several event-based rotation estimation methods have been developed in the past decade, but their performance has not been evaluated and compared under unified criteria.
arXiv Detail & Related papers (2024-03-12T23:05:10Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- Learning Optical Flow from Event Camera with Rendered Dataset [45.4342948504988]
We propose to render a physically correct event-flow dataset using computer graphics models.
In particular, we first create indoor and outdoor 3D scenes in Blender with rich scene-content variations.
arXiv Detail & Related papers (2023-03-20T10:44:32Z)
- Secrets of Event-Based Optical Flow [13.298845944779108]
Event cameras respond to scene dynamics and offer advantages for motion estimation.
We develop a principled method to extend the Contrast Maximization framework to estimate optical flow from events alone (a toy sketch of this objective appears after this list).
Our method ranks first among unsupervised methods on the MVSEC benchmark, and is competitive on the DSEC benchmark.
arXiv Detail & Related papers (2022-07-20T16:40:38Z)
- Event-aided Direct Sparse Odometry [54.602311491827805]
We introduce EDS, a direct monocular visual odometry using events and frames.
Our algorithm leverages the event generation model to track the camera motion in the blind time between frames.
EDS is the first method to perform 6-DOF VO using events and frames with a direct approach.
arXiv Detail & Related papers (2022-04-15T20:40:29Z)
- Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, fourth-order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
arXiv Detail & Related papers (2021-08-31T13:34:28Z)
- The Spatio-Temporal Poisson Point Process: A Simple Model for the Alignment of Event Camera Data [19.73526916714181]
Event cameras provide a natural and data-efficient representation of visual information.
We propose a new model of event data that captures its natural spatio-temporal structure.
We show new state-of-the-art accuracy for rotational velocity estimation on the DAVIS 240C dataset.
arXiv Detail & Related papers (2021-06-13T00:43:27Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Learning-based approaches have recently been applied to event data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
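Two entries above (CMax-SLAM and "Secrets of Event-Based Optical Flow") build on the Contrast Maximization framework. As a toy illustration of that objective (not code from either paper), the sketch below recovers a constant image-plane velocity by warping events with candidate motions and keeping the warp that maximizes the variance of the resulting image of warped events; the 1D sensor, synthetic edges, and search range are all assumptions.

```python
# Toy illustration of the Contrast Maximization (CMax) objective -- not code
# from CMax-SLAM or "Secrets of Event-Based Optical Flow". Assumptions: a 1D
# sensor, constant image-plane velocity, and synthetic edge events.
import numpy as np

rng = np.random.default_rng(1)
V_TRUE = 3.0     # ground-truth velocity in pixels per second (assumed)
N_PIX = 100      # sensor width in pixels

# Synthetic events: scene edges observed at random times; each event's pixel
# drifts with time because the camera moves.
edges = rng.uniform(20, 80, size=8)                  # edge positions (px)
t = rng.uniform(0.0, 1.0, size=4000)                 # event timestamps (s)
x = edges[rng.integers(0, edges.size, size=t.size)] + V_TRUE * t
x += 0.3 * rng.standard_normal(t.size)               # pixel noise

def contrast(v):
    """Warp events back to t=0 with a candidate velocity, accumulate them
    into an image of warped events, and return its variance (CMax score)."""
    img, _ = np.histogram(x - v * t, bins=N_PIX, range=(0, N_PIX))
    return img.var()

# A sharp (high-contrast) image of warped events indicates the correct motion:
# search a plausible velocity range for the maximum-contrast warp.
candidates = np.linspace(-10.0, 10.0, 801)
v_best = candidates[np.argmax([contrast(v) for v in candidates])]
print(f"estimated velocity: {v_best:.2f} px/s (true: {V_TRUE})")
```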
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.