Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark
- URL: http://arxiv.org/abs/2507.11931v1
- Date: Wed, 16 Jul 2025 05:54:33 GMT
- Title: Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark
- Authors: Jingqian Wu, Peiqi Duan, Zongqiang Wang, Changwei Wang, Boxin Shi, Edmund Y. Lam
- Abstract summary: We propose Dark-EvGS, the first event-assisted 3D GS framework that enables the reconstruction of bright frames from arbitrary viewpoints. Our method outperforms existing methods on radiance field reconstruction under challenging low-light conditions.
- Score: 51.68144172958247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In low-light environments, conventional cameras often struggle to capture clear multi-view images of objects due to dynamic range limitations and motion blur caused by long exposure. Event cameras, with their high dynamic range and high-speed properties, have the potential to mitigate these issues. Additionally, 3D Gaussian Splatting (GS) enables radiance field reconstruction, facilitating bright frame synthesis from multiple viewpoints in low-light conditions. However, naively applying an event-assisted 3D GS approach still faces challenges because, in low light, events are noisy, frames lack quality, and the color tone may be inconsistent. To address these issues, we propose Dark-EvGS, the first event-assisted 3D GS framework that enables the reconstruction of bright frames from arbitrary viewpoints along the camera trajectory. Triplet-level supervision is proposed to capture holistic knowledge, granular details, and sharp scene rendering. A color tone matching block is proposed to guarantee the color consistency of the rendered frames. Furthermore, we introduce the first real-captured dataset for the event-guided bright frame synthesis task via 3D GS-based radiance field reconstruction. Experiments demonstrate that our method outperforms existing methods on radiance field reconstruction under challenging low-light conditions. The code and sample data are included in the supplementary material.
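The abstract describes event-assisted supervision only at a high level. As a rough illustration of how event data can constrain rendered brightness, the sketch below uses the standard event generation model (an event fires when the log intensity at a pixel changes by a contrast threshold C). The event tuple layout, the threshold value, and the function names are illustrative assumptions, not the Dark-EvGS implementation.

```python
import numpy as np

def accumulate_polarities(events, height, width):
    """Sum signed event polarities per pixel over a time window.

    Assumes `events` is an iterable of (x, y, t, polarity) tuples with
    polarity in {-1, +1}; this layout is an assumption for illustration.
    """
    polarity_map = np.zeros((height, width), dtype=np.float32)
    for x, y, _, p in events:
        polarity_map[int(y), int(x)] += p
    return polarity_map

def predict_log_intensity(log_frame_t0, polarity_map, contrast_threshold=0.2):
    """Standard event generation model:
    log I(t1) ~= log I(t0) + C * (sum of polarities in [t0, t1]).

    The contrast threshold 0.2 is a typical value, not a parameter
    reported by the paper.
    """
    return log_frame_t0 + contrast_threshold * polarity_map
```

In an event-assisted 3D GS pipeline, a predicted log-intensity map of this kind can be compared against the log of a frame rendered from the Gaussians to form an event-consistency loss; the paper's actual triplet-level supervision and color tone matching block are only summarized in the abstract above.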
Related papers
- MV-CoLight: Efficient Object Compositing with Consistent Lighting and Shadow Generation [19.46962637673285]
MV-CoLight is a framework for illumination-consistent object compositing in 2D and 3D scenes. We employ a Hilbert curve-based mapping to align 2D image inputs with 3D Gaussian scene representations seamlessly. Experiments demonstrate state-of-the-art harmonized results across standard benchmarks and our dataset.
arXiv Detail & Related papers (2025-05-27T17:53:02Z)
- EBAD-Gaussian: Event-driven Bundle Adjusted Deblur Gaussian Splatting [21.46091843175779]
Event-driven Bundle Adjusted Deblur Gaussian Splatting (EBAD-Gaussian) reconstructs sharp 3D Gaussians from event streams and severely blurred images. Experiments on synthetic and real-world datasets show that EBAD-Gaussian can achieve high-quality 3D scene reconstruction.
arXiv Detail & Related papers (2025-04-14T09:17:00Z)
- Luminance-GS: Adapting 3D Gaussian Splatting to Challenging Lighting Conditions with View-Adaptive Curve Adjustment [46.60106452798745]
We introduce Luminance-GS, a novel approach to achieving high-quality novel view synthesis results under challenging lighting conditions using 3DGS. By adopting per-view color matrix mapping and view-adaptive curve adjustments (illustrated in the sketch after this list), Luminance-GS achieves state-of-the-art (SOTA) results across various lighting conditions. Compared to previous NeRF- and 3DGS-based baselines, Luminance-GS provides real-time rendering speed with improved reconstruction quality.
arXiv Detail & Related papers (2025-04-02T08:54:57Z)
- E-3DGS: Event-Based Novel View Rendering of Large-Scale Scenes Using 3D Gaussian Splatting [23.905254854888863]
We introduce 3D Gaussians for event-based novel view synthesis. Our method reconstructs large and unbounded scenes with high visual quality. We contribute the first real and synthetic event datasets tailored for this setting.
arXiv Detail & Related papers (2025-02-15T15:04:10Z)
- SweepEvGS: Event-Based 3D Gaussian Splatting for Macro and Micro Radiance Field Rendering from a Single Sweep [48.34647667445792]
SweepEvGS is a novel hardware-integrated method that leverages event cameras for robust and accurate novel view synthesis from a single sweep. We validate the robustness and efficiency of SweepEvGS through experiments in three different imaging settings. Our results demonstrate that SweepEvGS surpasses existing methods in visual rendering quality, rendering speed, and computational efficiency.
arXiv Detail & Related papers (2024-12-16T09:09:42Z)
- E-3DGS: Gaussian Splatting with Exposure and Motion Events [29.042018288378447]
E-3DGS sets a new benchmark for event-based 3D reconstruction with robust performance in challenging conditions. We introduce EME-3D, a real-world 3D dataset with exposure events, motion events, camera calibration parameters, and sparse point clouds.
arXiv Detail & Related papers (2024-10-22T13:17:20Z)
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [72.60992807941885]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution. We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS. We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z)
- Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction [51.87279764576998]
We propose EvRGBHand -- the first approach for 3D hand mesh reconstruction with an event camera and an RGB camera compensating for each other.
EvRGBHand can tackle overexposure and motion blur issues in RGB-based HMR and foreground scarcity and background overflow issues in event-based HMR.
arXiv Detail & Related papers (2024-03-12T06:04:50Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- EventNeRF: Neural Radiance Fields from a Single Colour Event Camera [81.19234142730326]
This paper proposes the first approach for 3D-consistent, dense and novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z)
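As referenced in the Luminance-GS entry above, per-view color matrix mapping can be illustrated with a minimal sketch: a 3x3 matrix is applied to each rendered view to correct its color tone. The function name, matrix, and image layout below are illustrative assumptions, not the Luminance-GS or Dark-EvGS implementation.

```python
import numpy as np

def apply_color_matrix(rendered_rgb, color_matrix):
    """Apply a per-view 3x3 color matrix to an H x W x 3 rendered image.

    A generic illustration of per-view color mapping; the learned matrices
    and view-adaptive curve adjustments of Luminance-GS are not reproduced.
    """
    h, w, _ = rendered_rgb.shape
    corrected = rendered_rgb.reshape(-1, 3) @ color_matrix.T
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)
```

The color tone matching block named in the Dark-EvGS abstract plays a conceptually similar role, keeping the color tone of rendered frames consistent across views.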
This list is automatically generated from the titles and abstracts of the papers in this site.