EventNeuS: 3D Mesh Reconstruction from a Single Event Camera
- URL: http://arxiv.org/abs/2602.03847v1
- Date: Tue, 03 Feb 2026 18:59:57 GMT
- Title: EventNeuS: 3D Mesh Reconstruction from a Single Event Camera
- Authors: Shreyas Sachan, Viktor Rudnev, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik
- Abstract summary: Event cameras offer an alternative to RGB cameras in many scenarios. We present EventNeuS, a self-supervised neural model for learning 3D representations from monocular colour event streams.
- Score: 69.48085586670054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras offer a compelling alternative to RGB cameras in many scenarios. While there are recent works on event-based novel-view synthesis, dense 3D mesh reconstruction remains scarcely explored, and existing event-based techniques are severely limited in their 3D reconstruction accuracy. To address this limitation, we present EventNeuS, a self-supervised neural model for learning 3D representations from monocular colour event streams. Our approach, for the first time, combines 3D signed distance function and density field learning with event-based supervision. Furthermore, we introduce spherical harmonics encodings into our model for enhanced handling of view-dependent effects. EventNeuS outperforms existing approaches by a significant margin, achieving 34% lower Chamfer distance and 31% lower mean absolute error on average compared to the best previous method.
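The abstract reports reconstruction accuracy via the Chamfer distance between the reconstructed and ground-truth surfaces. As a minimal illustrative sketch (not the paper's evaluation code), the symmetric Chamfer distance between two sampled point sets can be computed as:

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point, take the squared distance to its nearest neighbour in
    the other set, then average both directions. Lower is better; this is
    the standard way to score mesh reconstruction accuracy.
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical point sets have zero Chamfer distance.
pts = np.random.default_rng(0).standard_normal((100, 3))
print(chamfer_distance(pts, pts))  # → 0.0
```

In practice the point sets are sampled from the reconstructed mesh and the ground-truth mesh; a brute-force O(NM) pairwise matrix like the one above is replaced by a k-d tree for large samples.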
Related papers
- DeblurSplat: SfM-free 3D Gaussian Splatting with Event Camera for Robust Deblurring [50.21760380168387]
We propose the first Structure-from-Motion (SfM)-free deblurring 3D Gaussian Splatting method via event camera, dubbed DeSplat. We leverage the pretrained capability of the dense stereo module (DUSt3R) to directly obtain accurate initial point clouds from blurred images.
arXiv Detail & Related papers (2025-09-23T11:21:54Z) - EventEgoHands: Event-based Egocentric 3D Hand Mesh Reconstruction [2.3695551082138864]
Reconstructing 3D hand mesh is challenging but an important task for human-computer interaction and AR/VR applications. We propose EventEgoHands, a novel method for event-based 3D hand mesh reconstruction in an egocentric view. Our approach introduces a Hand Module that extracts hand regions, effectively mitigating the influence of dynamic background events.
arXiv Detail & Related papers (2025-05-25T14:36:51Z) - E-3DGS: Event-Based Novel View Rendering of Large-Scale Scenes Using 3D Gaussian Splatting [23.905254854888863]
We introduce 3D Gaussians for event-based novel view synthesis. Our method reconstructs large and unbounded scenes with high visual quality. We contribute the first real and synthetic event datasets tailored for this setting.
arXiv Detail & Related papers (2025-02-15T15:04:10Z) - EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [87.1077910795879]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution. We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS. We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z) - IncEventGS: Pose-Free Gaussian Splatting from a Single Event Camera [6.879406129086464]
IncEventGS is an incremental 3D Gaussian splatting reconstruction algorithm with a single event camera. We exploit the tracking and mapping paradigm of conventional SLAM pipelines for IncEventGS.
arXiv Detail & Related papers (2024-10-10T16:54:23Z) - Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z) - 3D Pose Estimation of Two Interacting Hands from a Monocular Event Camera [59.846927201816776]
This paper introduces the first framework for 3D tracking of two fast-moving and interacting hands from a single monocular event camera.
Our approach tackles the left-right hand ambiguity with a novel semi-supervised feature-wise attention mechanism and integrates an intersection loss to fix hand collisions.
arXiv Detail & Related papers (2023-12-21T18:59:57Z) - EventNeRF: Neural Radiance Fields from a Single Colour Event Camera [81.19234142730326]
This paper proposes the first approach for 3D-consistent, dense and novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z) - EventHands: Real-Time Neural 3D Hand Reconstruction from an Event Stream [80.15360180192175]
3D hand pose estimation from monocular videos is a long-standing and challenging problem.
We address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting to brightness changes.
Our approach has characteristics previously not demonstrated with a single RGB or depth camera.
arXiv Detail & Related papers (2020-12-11T16:45:34Z)
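Several of the papers above build on the same underlying sensor model: an event camera fires a signed event at a pixel whenever the log-intensity there changes by more than a contrast threshold since the last event. A minimal per-pixel sketch of this idealised event generation model (an illustration, not any specific paper's implementation; the function name and threshold value are hypothetical):

```python
def events_from_log_intensity(log_intensity, threshold=0.2):
    """Simulate idealised event generation for one pixel.

    log_intensity: sequence of log-brightness samples over time.
    An event of polarity +1 (or -1) fires each time the signal rises
    (or falls) by `threshold` relative to the reference level, which is
    then updated. Returns a list of (time_index, polarity) tuples.
    """
    events = []
    ref = log_intensity[0]
    for t, x in enumerate(log_intensity[1:], start=1):
        while x - ref >= threshold:   # brightness rose past the threshold
            ref += threshold
            events.append((t, +1))
        while ref - x >= threshold:   # brightness fell past the threshold
            ref -= threshold
            events.append((t, -1))
    return events

# A rise of 0.45 crosses the 0.2 threshold twice; the fall back crosses once.
print(events_from_log_intensity([0.0, 0.45, 0.1]))  # → [(1, 1), (1, 1), (2, -1)]
```

Event-based supervision methods such as those listed above invert this model: they render brightness at two timestamps and penalise disagreement between the predicted log-brightness difference and the accumulated event polarities.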