E2GS: Event Enhanced Gaussian Splatting
- URL: http://arxiv.org/abs/2406.14978v1
- Date: Fri, 21 Jun 2024 08:43:47 GMT
- Title: E2GS: Event Enhanced Gaussian Splatting
- Authors: Hiroyuki Deguchi, Mana Masuda, Takuya Nakabayashi, Hideo Saito
- Abstract summary: Event Enhanced Gaussian Splatting (E2GS) is a novel method that incorporates event data into Gaussian Splatting.
Our E2GS effectively utilizes both blurry images and event data, significantly improving image deblurring and producing high-quality novel view synthesis.
- Score: 9.096805985896625
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event cameras, known for their high dynamic range, absence of motion blur, and low energy usage, have recently found a wide range of applications thanks to these attributes. In the past few years, the field of event-based 3D reconstruction saw remarkable progress, with the Neural Radiance Field (NeRF) based approach demonstrating photorealistic view synthesis results. However, the volume rendering paradigm of NeRF necessitates extensive training and rendering times. In this paper, we introduce Event Enhanced Gaussian Splatting (E2GS), a novel method that incorporates event data into Gaussian Splatting, which has recently made significant advances in the field of novel view synthesis. Our E2GS effectively utilizes both blurry images and event data, significantly improving image deblurring and producing high-quality novel view synthesis. Our comprehensive experiments on both synthetic and real-world datasets demonstrate our E2GS can generate visually appealing renderings while offering faster training and rendering speed (140 FPS). Our code is available at https://github.com/deguchihiroyuki/E2GS.
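The abstract does not spell out how E2GS fuses the two signals. As a hedged illustration of the general idea, the event double integral (EDI) relation treats the blurry frame as the temporal average of latent sharp frames whose brightness ratios can be read off the accumulated event stream. A minimal NumPy sketch (the function name, array layout, and contrast threshold `c` are illustrative assumptions, not E2GS's actual formulation):

```python
import numpy as np

def sharp_from_blur_and_events(blurry, event_frames, c=0.2):
    """Estimate a latent sharp frame from a blurry image plus per-timestep
    event accumulation maps, following the event double integral idea:
    the blurry image is the temporal average of latent frames, and each
    latent frame differs from the reference by exp(c * accumulated events)."""
    cum = np.cumsum(event_frames, axis=0)        # (T, H, W) events since reference
    ratio = np.exp(c * cum)                      # brightness ratio at each timestep
    denom = ratio.mean(axis=0)                   # average ratio over the exposure
    return blurry / np.maximum(denom, 1e-6)      # latent sharp reference frame
```

With no events the blur window is static and the "deblurred" frame equals the input, which is a quick sanity check on the model.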
Related papers
- E-3DGS: Event-Based Novel View Rendering of Large-Scale Scenes Using 3D Gaussian Splatting [23.905254854888863]
We introduce 3D Gaussians for event-based novel view synthesis.
Our method reconstructs large and unbounded scenes with high visual quality.
We contribute the first real and synthetic event datasets tailored for this setting.
arXiv Detail & Related papers (2025-02-15T15:04:10Z)
- BeSplat: Gaussian Splatting from a Single Blurry Image and Event Stream [13.649334929746413]
3D Gaussian Splatting (3DGS) has effectively addressed key challenges, such as long training times and slow rendering speeds.
We demonstrate the recovery of sharp radiance field (Gaussian splats) from a single motion-blurred image and its corresponding event stream.
arXiv Detail & Related papers (2024-12-26T22:35:29Z)
- SweepEvGS: Event-Based 3D Gaussian Splatting for Macro and Micro Radiance Field Rendering from a Single Sweep [48.34647667445792]
SweepEvGS is a novel hardware-integrated method that leverages event cameras for robust and accurate novel view synthesis from a single sweep.
We validate the robustness and efficiency of SweepEvGS through experiments in three different imaging settings.
Our results demonstrate that SweepEvGS surpasses existing methods in visual rendering quality, rendering speed, and computational efficiency.
arXiv Detail & Related papers (2024-12-16T09:09:42Z)
- Superpoint Gaussian Splatting for Real-Time High-Fidelity Dynamic Scene Reconstruction [10.208558194785017]
We propose a novel framework named Superpoint Gaussian Splatting (SP-GS).
Our framework first reconstructs the scene and then clusters Gaussians with similar properties into superpoints.
Empowered by these superpoints, our method manages to extend 3D Gaussian splatting to dynamic scenes with only a slight increase in computational expense.
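As a hedged sketch of the superpoint idea described above (grouping Gaussians with similar properties so one transform per cluster can drive all of its members), here is a plain k-means over per-Gaussian property vectors; the feature choice and all names are assumptions, not SP-GS's actual procedure:

```python
import numpy as np

def kmeans_superpoints(props, k=4, iters=20, seed=0):
    """Group Gaussians into superpoints by k-means over per-Gaussian
    property vectors (e.g. position plus motion features)."""
    rng = np.random.default_rng(seed)
    centers = props[rng.choice(len(props), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each Gaussian to its nearest superpoint center
        d = np.linalg.norm(props[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # move each center to the mean of its assigned Gaussians
        for j in range(k):
            if np.any(labels == j):
                centers[j] = props[labels == j].mean(axis=0)
    return labels, centers
```

Driving each cluster with a single transform is what keeps the per-frame cost nearly flat as the Gaussian count grows.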
arXiv Detail & Related papers (2024-06-06T02:32:41Z)
- Dynamic 3D Gaussian Fields for Urban Areas [60.64840836584623]
We present an efficient neural 3D scene representation for novel-view synthesis (NVS) in large-scale, dynamic urban areas.
We propose 4DGF, a neural scene representation that scales to large-scale dynamic urban areas.
arXiv Detail & Related papers (2024-06-05T12:07:39Z)
- BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting [8.380954205255104]
BAD-Gaussians is a novel approach to handle severe motion-blurred images with inaccurate camera poses.
Our method achieves superior rendering quality compared to previous state-of-the-art deblur neural rendering methods.
arXiv Detail & Related papers (2024-03-18T14:43:04Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
- Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis [28.455719771979876]
We propose Spacetime Gaussian Feature Splatting as a novel dynamic scene representation.
Our method achieves state-of-the-art rendering quality and speed, while retaining compact storage.
arXiv Detail & Related papers (2023-12-28T04:14:55Z)
- FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting [58.41056963451056]
We propose a few-shot view synthesis framework based on 3D Gaussian Splatting.
This framework enables real-time and photo-realistic view synthesis with as few as three training views.
FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets.
arXiv Detail & Related papers (2023-12-01T09:30:02Z)
- 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering [103.32717396287751]
We propose 4D Gaussian Splatting (4D-GS) as a holistic representation for dynamic scenes.
A neural voxel encoding algorithm inspired by HexPlane is proposed to efficiently build features from 4D neural voxels.
Our 4D-GS method achieves real-time rendering at high resolutions: 82 FPS at 800×800 on a 3090 GPU.
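HexPlane-style encodings factorize the 4D (x, y, z, t) grid into six 2D feature planes that are sampled at each point's projection and fused. A minimal sketch of that lookup (nearest-neighbour sampling and product fusion are simplifying assumptions, not 4D-GS's exact design):

```python
import numpy as np

# Six axis-aligned planes factorize the 4D (x, y, z, t) grid:
# three spatial (xy, xz, yz) and three space-time (xt, yt, zt).
PLANES = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3)]

def hexplane_features(coords, planes, res):
    """Look up a feature for each 4D point by sampling the six 2D planes
    at the point's projection and fusing them with an elementwise product.
    coords: (N, 4) in [0, 1); planes: list of (res, res, C) arrays."""
    idx = np.clip((coords * res).astype(int), 0, res - 1)  # nearest cell
    feat = np.ones((len(coords), planes[0].shape[-1]))
    for plane, (a, b) in zip(planes, PLANES):
        feat *= plane[idx[:, a], idx[:, b]]                # fuse by product
    return feat

res, C = 8, 4
rng = np.random.default_rng(1)
planes = [rng.normal(size=(res, res, C)) for _ in PLANES]
pts = rng.uniform(size=(5, 4))                             # (x, y, z, t) samples
f = hexplane_features(pts, planes, res)                    # (5, C) features
```

Storing six small planes instead of a dense 4D voxel grid is what makes the memory cost tractable at high resolution.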
arXiv Detail & Related papers (2023-10-12T17:21:41Z)
- EventNeRF: Neural Radiance Fields from a Single Colour Event Camera [81.19234142730326]
This paper proposes the first approach for 3D-consistent, dense novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z)
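The self-supervision described above can be illustrated with the standard event-generation model: each event encodes a log-intensity step of ±C, so the change in rendered log-brightness between two timestamps should match the signed event count times C. A hedged sketch of such a loss (names and the fixed threshold `C` are assumptions, not EventNeRF's exact objective):

```python
import numpy as np

def event_loss(render_t0, render_t1, event_count, C=0.25, eps=1e-6):
    """Self-supervised event loss: the log-brightness change between two
    rendered views should match the signed per-pixel event count times
    the contrast threshold C."""
    pred = np.log(render_t1 + eps) - np.log(render_t0 + eps)
    target = C * event_count                 # signed events per pixel
    return float(np.mean((pred - target) ** 2))
```

Because the target is built only from events, the radiance field can be optimized without any ground-truth frames.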
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.