DEGS: Deformable Event-based 3D Gaussian Splatting from RGB and Event Stream
- URL: http://arxiv.org/abs/2510.07752v1
- Date: Thu, 09 Oct 2025 03:43:27 GMT
- Title: DEGS: Deformable Event-based 3D Gaussian Splatting from RGB and Event Stream
- Authors: Junhao He, Jiaxu Wang, Jia Li, Mingyuan Sun, Qiang Zhang, Jiahang Cao, Ziyi Zhang, Yi Gu, Jingkai Sun, Renjing Xu
- Abstract summary: This paper introduces a novel framework that jointly optimizes Dynamic 3DGS from the two modalities. The key idea is to adopt event motion priors to guide the optimization of the deformation fields. Our method outperforms existing image- and event-based approaches across synthetic and real scenes.
- Score: 46.467919920459536
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing Dynamic 3D Gaussian Splatting (3DGS) from low-framerate RGB videos is challenging because large inter-frame motion enlarges the uncertainty of the solution space: a pixel in the first frame may have many plausible correspondences in the second frame. Event cameras can asynchronously capture rapid visual changes and are robust to motion blur, but they do not provide color information. Intuitively, event trajectories can impose deterministic constraints on large inter-frame motion, so combining low-temporal-resolution images with high-framerate event streams can address this challenge. However, jointly optimizing Dynamic 3DGS from both RGB and event data is difficult due to the significant discrepancy between the two modalities. This paper introduces a novel framework that jointly optimizes Dynamic 3DGS from the two modalities. The key idea is to adopt event motion priors to guide the optimization of the deformation fields. First, we extract the motion priors encoded in event streams with the proposed LoCM unsupervised fine-tuning framework, which adapts an event flow estimator to an unseen scene. Then, we present a geometry-aware data association method to build the event-Gaussian motion correspondence, the primary foundation of the pipeline, accompanied by two useful strategies: motion decomposition and inter-frame pseudo-labels. Extensive experiments show that our method outperforms existing image- and event-based approaches on synthetic and real scenes, proving that it can effectively optimize dynamic 3DGS with the help of event data.
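The abstract describes the pipeline only at a high level, so the following is a minimal PyTorch sketch of the stated key idea: event-derived optical flow acting as a motion prior on a Gaussian deformation field. Every name here (DeformField, project, motion_prior_loss, and the event_flow input that stands in for the output of the LoCM-adapted flow estimator) is a hypothetical illustration under my own assumptions, not the authors' implementation.

```python
# Hedged sketch: supervise a deformation field with event-derived 2D flow.
# All names are illustrative; this is not the DEGS codebase.
import torch

class DeformField(torch.nn.Module):
    """Tiny MLP mapping (canonical 3D position, time) -> displacement."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(4, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),
        )

    def forward(self, xyz, t):
        t_col = t.expand(xyz.shape[0], 1)          # broadcast scalar time
        return self.net(torch.cat([xyz, t_col], dim=-1))

def project(xyz, K):
    """Pinhole projection of 3D points to pixel coordinates."""
    uvw = xyz @ K.T
    return uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6)

def motion_prior_loss(deform, xyz, t0, t1, K, event_flow):
    """Penalize disagreement between Gaussian-induced 2D motion and the
    event-estimated flow sampled at the Gaussians' projected locations."""
    p0 = project(xyz + deform(xyz, t0), K)
    p1 = project(xyz + deform(xyz, t1), K)
    induced_flow = p1 - p0      # 2D motion implied by the deformation field
    return (induced_flow - event_flow).abs().mean()
```

In the paper's terms, event_flow would come from the scene-adapted event flow estimator, and attaching each flow vector to the right Gaussian is exactly what the geometry-aware data association handles; here that association is simply assumed given.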
Related papers
- Geometric-Photometric Event-based 3D Gaussian Ray Tracing [36.13207105469902]
Event cameras offer higher temporal resolution than traditional frame-based cameras. It has been unclear how event-based 3D Gaussian Splatting approaches could leverage the fine-grained temporal information of sparse events. This work proposes a framework to address the trade-off between accuracy and temporal resolution in event-based 3DGS.
arXiv Detail & Related papers (2025-12-21T08:31:02Z) - GS2E: Gaussian Splatting is an Effective Data Generator for Event Stream Generation [32.13436507983477]
We introduce GS2E (Gaussian Splatting to Event), a large-scale synthetic event dataset for high-fidelity event vision tasks. Results on event-based 3D reconstruction demonstrate GS2E's superior generalization capabilities and its practical value.
arXiv Detail & Related papers (2025-05-21T09:15:42Z) - In-2-4D: Inbetweening from Two Single-View Images to 4D Generation [63.68181731564576]
We propose a new problem, In-2-4D, for generative 4D (i.e., 3D + motion) inbetweening from two single-view images. In contrast to video/4D generation from only text or a single image, our interpolative task can leverage more precise motion control to better constrain the generation.
arXiv Detail & Related papers (2025-04-11T09:01:09Z) - DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting [59.91048302471001]
We present DiET-GS, a diffusion prior and event stream-assisted motion deblurring 3DGS. Our framework effectively leverages both blur-free event streams and a diffusion prior in a two-stage training strategy.
arXiv Detail & Related papers (2025-03-31T15:27:07Z) - Event-boosted Deformable 3D Gaussians for Dynamic Scene Reconstruction [50.873820265165975]
We introduce the first approach combining event cameras, which capture high-temporal-resolution, continuous motion data, with deformable 3D-GS for dynamic scene reconstruction. We propose a GS-Threshold Joint Modeling strategy, creating a mutually reinforcing process that greatly improves both 3D reconstruction and threshold modeling. We contribute the first event-inclusive 4D benchmark with synthetic and real-world dynamic scenes, on which our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-11-25T08:23:38Z) - EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [72.60992807941885]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution (a toy sketch of this event generation model appears after this list). We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS. We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z) - Elite-EvGS: Learning Event-based 3D Gaussian Splatting by Distilling Event-to-Video Priors [8.93657924734248]
Event cameras are bio-inspired sensors that output asynchronous and sparse event streams, instead of fixed frames.
We propose a novel event-based 3DGS framework, named Elite-EvGS.
Our key idea is to distill prior knowledge from off-the-shelf event-to-video (E2V) models to effectively reconstruct 3D scenes from events.
arXiv Detail & Related papers (2024-09-20T10:47:52Z) - Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z) - Differentiable Event Stream Simulator for Non-Rigid 3D Tracking [82.56690776283428]
Our differentiable simulator enables non-rigid 3D tracking of deformable objects from event streams.
We show the effectiveness of our approach for various types of non-rigid objects and compare to existing methods for non-rigid 3D tracking.
arXiv Detail & Related papers (2021-04-30T17:58:07Z)
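Several entries above (the threshold modeling in "Event-boosted Deformable 3D Gaussians", the asynchronous intensity changes in "EF-3DGS") rest on the standard contrast-threshold event generation model. Below is a toy NumPy sketch of that model under idealized assumptions (noise-free pixels, event timestamps quantized to frame times); function and variable names are mine, not any paper's API.

```python
# Toy contrast-threshold event model: a pixel emits an event whenever its
# log intensity drifts by the threshold C from its value at the last event.
import numpy as np

def events_from_frames(log_frames, timestamps, C=0.2):
    """Emit (x, y, t, polarity) tuples from a dense log-intensity video."""
    ref = log_frames[0].copy()   # per-pixel log intensity at last event
    events = []
    for frame, t in zip(log_frames[1:], timestamps[1:]):
        diff = frame - ref
        # A pixel may fire several events per frame if it moved by many C's.
        while (np.abs(diff) >= C).any():
            for polarity, mask in ((+1, diff >= C), (-1, diff <= -C)):
                ys, xs = np.nonzero(mask)
                events.extend((x, y, t, polarity) for x, y in zip(xs, ys))
                ref[mask] += polarity * C   # reset reference at fired pixels
            diff = frame - ref
    return events
```

Real event simulators additionally interpolate timestamps between frames and model per-pixel threshold noise; this sketch only conveys why event streams are sparse, asynchronous, and tied to log-intensity change.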