DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting
- URL: http://arxiv.org/abs/2503.24210v1
- Date: Mon, 31 Mar 2025 15:27:07 GMT
- Title: DiET-GS: Diffusion Prior and Event Stream-Assisted Motion Deblurring 3D Gaussian Splatting
- Authors: Seungjun Lee, Gim Hee Lee
- Abstract summary: We present DiET-GS, a diffusion prior and event stream-assisted motion deblurring 3DGS. Our framework effectively leverages both blur-free event streams and diffusion prior in a two-stage training strategy.
- Score: 59.91048302471001
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing sharp 3D representations from blurry multi-view images is a long-standing problem in computer vision. Recent works attempt to enhance high-quality novel view synthesis under motion blur by leveraging event-based cameras, benefiting from their high dynamic range and microsecond temporal resolution. However, they often reach sub-optimal visual quality, either restoring inaccurate color or losing fine-grained details. In this paper, we present DiET-GS, a diffusion prior and event stream-assisted motion deblurring 3DGS. Our framework effectively leverages both blur-free event streams and a diffusion prior in a two-stage training strategy. Specifically, we introduce a novel framework that constrains 3DGS with the event double integral, achieving both accurate color and well-defined details. Additionally, we propose a simple technique that leverages the diffusion prior to further enhance edge details. Qualitative and quantitative results on both synthetic and real-world data demonstrate that DiET-GS produces novel views of significantly better quality than existing baselines. Our project page is https://diet-gs.github.io
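For context on the event double integral (EDI) constraint mentioned in the abstract, here is a minimal sketch of the standard EDI relation from the event-based deblurring literature; the precise constraint used in DiET-GS may differ in its details:

  B = \frac{L(f)}{T} \int_{f - T/2}^{f + T/2} \exp\!\Big( c \int_{f}^{t} e(s)\, ds \Big)\, dt

where B is the observed blurry image, L(f) is the latent sharp image at reference time f, T is the exposure time, c is the event contrast threshold, and e(s) is the per-pixel event polarity signal. The double integral on the right is computable from the event stream alone, so it ties a sharp image rendered by 3DGS to the captured blurry frame without requiring ground-truth sharp supervision.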
Related papers
- E-3DGS: Event-Based Novel View Rendering of Large-Scale Scenes Using 3D Gaussian Splatting [23.905254854888863]
We introduce 3D Gaussians for event-based novel view synthesis. Our method reconstructs large and unbounded scenes with high visual quality. We contribute the first real and synthetic event datasets tailored for this setting.
arXiv Detail & Related papers (2025-02-15T15:04:10Z) - EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [72.60992807941885]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution. We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS. We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z) - Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z) - EvaGaussians: Event Stream Assisted Gaussian Splatting from Blurry Images [36.91327728871551]
3D Gaussian Splatting (3D-GS) has demonstrated exceptional capabilities in 3D scene reconstruction and novel view synthesis.
We introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel approach that integrates event streams captured by an event camera to assist in reconstructing high-quality 3D-GS from blurry images.
arXiv Detail & Related papers (2024-05-29T04:59:27Z) - 3D Pose Estimation of Two Interacting Hands from a Monocular Event Camera [59.846927201816776]
This paper introduces the first framework for 3D tracking of two fast-moving and interacting hands from a single monocular event camera.
Our approach tackles the left-right hand ambiguity with a novel semi-supervised feature-wise attention mechanism and integrates an intersection loss to fix hand collisions.
arXiv Detail & Related papers (2023-12-21T18:59:57Z) - Event-based Continuous Color Video Decompression from Single Frames [36.4263932473053]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image and an event camera stream.
Our approach combines continuous long-range motion modeling with a neural synthesis model, enabling frame prediction at arbitrary times within the events.
arXiv Detail & Related papers (2023-11-30T18:59:23Z) - 3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models [71.25937799010407]
We equip text-guided diffusion models to achieve 3D-consistent generation.
We study 3D local editing and propose a two-step solution.
We extend our model to perform one-shot novel view synthesis.
arXiv Detail & Related papers (2022-11-25T13:50:00Z) - EventNeRF: Neural Radiance Fields from a Single Colour Event Camera [81.19234142730326]
This paper proposes the first approach for 3D-consistent, dense and novel view synthesis using just a single colour event stream as input.
At its core is a neural radiance field trained entirely in a self-supervised manner from events while preserving the original resolution of the colour event channels.
We evaluate our method qualitatively and numerically on several challenging synthetic and real scenes and show that it produces significantly denser and more visually appealing renderings.
arXiv Detail & Related papers (2022-06-23T17:59:53Z) - Differentiable Event Stream Simulator for Non-Rigid 3D Tracking [82.56690776283428]
Our differentiable simulator enables non-rigid 3D tracking of deformable objects from event streams.
We show the effectiveness of our approach for various types of non-rigid objects and compare to existing methods for non-rigid 3D tracking.
arXiv Detail & Related papers (2021-04-30T17:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.