EBAD-Gaussian: Event-driven Bundle Adjusted Deblur Gaussian Splatting
- URL: http://arxiv.org/abs/2504.10012v1
- Date: Mon, 14 Apr 2025 09:17:00 GMT
- Title: EBAD-Gaussian: Event-driven Bundle Adjusted Deblur Gaussian Splatting
- Authors: Yufei Deng, Yuanjian Wang, Rong Xiao, Chenwei Tang, Jizhe Zhou, Jiahao Fan, Deng Xiong, Jiancheng Lv, Huajin Tang
- Abstract summary: Event-driven Bundle Adjusted Deblur Gaussian Splatting (EBAD-Gaussian) reconstructs sharp 3D Gaussians from event streams and severely blurred images. Experiments on synthetic and real-world datasets show that EBAD-Gaussian achieves high-quality 3D scene reconstruction.
- Score: 21.46091843175779
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While 3D Gaussian Splatting (3D-GS) achieves photorealistic novel view synthesis, its performance degrades with motion blur. In scenarios with rapid motion or low-light conditions, existing RGB-based deblurring methods struggle to model camera pose and radiance changes during exposure, reducing reconstruction accuracy. Event cameras, which capture continuous brightness changes during exposure, can effectively assist in modeling motion blur and improving reconstruction quality. We therefore propose Event-driven Bundle Adjusted Deblur Gaussian Splatting (EBAD-Gaussian), which reconstructs sharp 3D Gaussians from event streams and severely blurred images. The method jointly learns the Gaussian parameters while recovering the camera motion trajectory during the exposure time. Specifically, we first construct a blur loss by synthesizing multiple latent sharp images across the exposure time and minimizing the difference between the real and synthesized blurred images. We then use the event stream to supervise the intensity changes between latent sharp images at any time within the exposure period, recovering the dynamic intensity information lost in RGB images. Furthermore, we optimize the latent sharp images at intermediate exposure times based on the event-based double integral (EDI) prior, applying consistency constraints to enhance the detail and texture of the reconstructed images. Extensive experiments on synthetic and real-world datasets show that EBAD-Gaussian achieves high-quality 3D scene reconstruction from blurred images and event streams.
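For intuition, the physical model behind the blur loss and the EDI prior can be written compactly. The following block states both relations in standard, generic notation (not necessarily the paper's own): a blurry image is the time average of the latent sharp images over the exposure, and each latent image follows from a reference sharp image via the exponentiated event integral with contrast threshold c.

```latex
\[
  B \;=\; \frac{1}{T}\int_{0}^{T} I(t)\,dt \;\approx\; \frac{1}{n}\sum_{k=1}^{n} I(t_k),
  \qquad
  I(t) \;=\; I(f)\,\exp\!\Big(c \int_{f}^{t} e(s)\,ds\Big),
\]
```

where \(e(s)\) is the per-pixel event polarity signal and \(I(f)\) is the latent image at a reference time \(f\) inside the exposure. A minimal PyTorch-style sketch of the two resulting supervision terms follows; `render`, `pose_at`, `event_sum`, and the values of `c` and `n` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def deblur_losses(render, gaussians, pose_at, blurry_img, event_sum, c=0.2, n=9):
    """Sketch of the blur and event losses described in the abstract.

    render     : differentiable rasterizer, (gaussians, pose) -> (H, W, 3) image
    gaussians  : learnable 3D Gaussian parameters
    pose_at    : t in [0, 1] -> camera pose interpolated along the exposure trajectory
    blurry_img : captured blurry frame, (H, W, 3)
    event_sum  : per-pixel polarity sum of events over the exposure, (H, W)
    c          : event contrast threshold (sensor-dependent; value assumed here)
    n          : number of latent sharp images sampled within the exposure
    """
    ts = torch.linspace(0.0, 1.0, n)
    latents = torch.stack([render(gaussians, pose_at(float(t))) for t in ts])  # (n, H, W, 3)

    # Blur loss: the synthesized blurry image is the average of the latent
    # sharp renderings along the camera trajectory during the exposure.
    loss_blur = F.l1_loss(latents.mean(dim=0), blurry_img)

    # Event loss: the log-intensity change between the first and last latent
    # images should match the accumulated event polarities scaled by c.
    gray = latents.mean(dim=-1)  # luminance proxy, (n, H, W)
    pred_log_diff = torch.log(gray[-1] + 1e-6) - torch.log(gray[0] + 1e-6)
    loss_event = F.l1_loss(pred_log_diff, c * event_sum)

    return loss_blur + loss_event
```

Because the poses are themselves learnable (the bundle-adjustment part), gradients flow through `pose_at` into the trajectory parameters as well as into the Gaussians.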
Related papers
- SaENeRF: Suppressing Artifacts in Event-based Neural Radiance Fields [12.428456822446947]
Event cameras offer advantages such as low latency, low power consumption, low bandwidth, and high dynamic range.
Reconstructing geometrically consistent and photometrically accurate 3D representations from event data remains fundamentally challenging.
We present SaENeRF, a novel self-supervised framework that effectively suppresses artifacts and enables 3D-consistent, dense, and photorealistic NeRF reconstruction of static scenes solely from event streams.
arXiv Detail & Related papers (2025-04-23T03:33:20Z) - Low-Light Image Enhancement using Event-Based Illumination Estimation [83.81648559951684]
Low-light image enhancement (LLIE) aims to improve the visibility of images captured in poorly lit environments.
This paper opens a new avenue from the perspective of estimating the illumination using "temporal-mapping" events.
We construct a beam-splitter setup and collect the EvLowLight dataset, which includes images, temporal-mapping events, and motion events.
arXiv Detail & Related papers (2025-04-13T00:01:33Z) - D3DR: Lighting-Aware Object Insertion in Gaussian Splatting [48.80431740983095]
We propose a method, dubbed D3DR, for inserting a 3DGS-parametrized object into 3DGS scenes.
We leverage advances in diffusion models, which, trained on real-world data, implicitly understand correct scene lighting.
We demonstrate the method's effectiveness by comparing it to existing approaches.
arXiv Detail & Related papers (2025-03-09T19:48:00Z) - SweepEvGS: Event-Based 3D Gaussian Splatting for Macro and Micro Radiance Field Rendering from a Single Sweep [48.34647667445792]
SweepEvGS is a novel hardware-integrated method that leverages event cameras for robust and accurate novel view synthesis from a single sweep.
We validate the robustness and efficiency of SweepEvGS through experiments in three different imaging settings.
Our results demonstrate that SweepEvGS surpasses existing methods in visual rendering quality, rendering speed, and computational efficiency.
arXiv Detail & Related papers (2024-12-16T09:09:42Z) - E-3DGS: Gaussian Splatting with Exposure and Motion Events [29.042018288378447]
E-3DGS sets a new benchmark for event-based 3D reconstruction with robust performance in challenging conditions.
We introduce EME-3D, a real-world 3D dataset with exposure events, motion events, camera calibration parameters, and sparse point clouds.
arXiv Detail & Related papers (2024-10-22T13:17:20Z) - EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [72.60992807941885]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z) - Deblurring Neural Radiance Fields with Event-driven Bundle Adjustment [23.15130387716121]
We propose Bundle Adjustment for Deblurring Neural Radiance Fields (EBAD-NeRF) to jointly optimize the learnable poses and NeRF parameters.
EBAD-NeRF can recover an accurate camera trajectory during the exposure time and learn a sharper 3D representation than prior works.
arXiv Detail & Related papers (2024-06-20T14:33:51Z) - Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z) - EvaGaussians: Event Stream Assisted Gaussian Splatting from Blurry Images [36.91327728871551]
3D Gaussian Splatting (3D-GS) has demonstrated exceptional capabilities in 3D scene reconstruction and novel view synthesis.
We introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel approach that integrates event streams captured by an event camera to assist in reconstructing high-quality 3D-GS from blurry images.
arXiv Detail & Related papers (2024-05-29T04:59:27Z) - BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting [8.380954205255104]
BAD-Gaussians is a novel approach to handle severe motion-blurred images with inaccurate camera poses.
Our method achieves superior rendering quality compared to previous state-of-the-art deblur neural rendering methods.
arXiv Detail & Related papers (2024-03-18T14:43:04Z) - Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z) - Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)