SpikeGS: Reconstruct 3D scene via fast-moving bio-inspired sensors
- URL: http://arxiv.org/abs/2407.03771v2
- Date: Mon, 26 Aug 2024 16:15:57 GMT
- Title: SpikeGS: Reconstruct 3D scene via fast-moving bio-inspired sensors
- Authors: Yijia Guo, Liwen Hu, Lei Ma, Tiejun Huang
- Abstract summary: Spike Gaussian Splatting (SpikeGS) is a framework that integrates spike streams into the 3DGS pipeline to reconstruct 3D scenes via a fast-moving bio-inspired camera.
SpikeGS extracts detailed geometry and texture from high-temporal-resolution but texture-lacking spike streams and reconstructs 3D scenes captured within 1 second.
- Score: 28.68263688378836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) demonstrates unparalleled performance in 3D scene reconstruction. However, 3DGS relies heavily on sharp images, a requirement that is hard to meet in real-world scenarios, especially when the camera moves fast, which severely limits the application of 3DGS. To address these challenges, we propose Spike Gaussian Splatting (SpikeGS), the first framework that integrates spike streams into the 3DGS pipeline to reconstruct 3D scenes via a fast-moving bio-inspired camera. With accumulation rasterization, interval supervision, and a specially designed pipeline, SpikeGS extracts detailed geometry and texture from high-temporal-resolution but texture-lacking spike streams and reconstructs 3D scenes captured within 1 second. Extensive experiments on multiple synthetic and real-world datasets demonstrate the superiority of SpikeGS over existing spike-based and deblurring-based 3D scene reconstruction methods. Code and data will be released soon.
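The abstract does not spell out the reconstruction math, but the two supervision signals it names map onto standard spike-to-intensity estimators: accumulating spike counts over a window, and measuring inter-spike intervals (for an integrate-and-fire pixel, radiance is roughly inversely proportional to the gap between spikes). A minimal NumPy sketch of both estimators follows; array shapes and function names are illustrative, not from the paper.

```python
import numpy as np

def intensity_from_counts(spikes: np.ndarray) -> np.ndarray:
    """Texture-from-accumulation: mean spike count over the window.

    spikes: binary array of shape (T, H, W). Brighter pixels fire more
    often, so the temporal mean approximates normalized radiance.
    """
    return spikes.mean(axis=0)

def intensity_from_intervals(spikes: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Texture-from-intervals: radiance of an integrate-and-fire pixel is
    inversely proportional to the mean gap between consecutive spikes."""
    T, H, W = spikes.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            times = np.flatnonzero(spikes[:, y, x])
            if times.size >= 2:
                out[y, x] = 1.0 / (np.diff(times).mean() + eps)
    return out

# Toy stream: one pixel fires every 2 ticks, another every 8 ticks.
spikes = np.zeros((40, 2, 2))
spikes[::2, 0, 0] = 1   # bright pixel
spikes[::8, 1, 1] = 1   # dim pixel
print(intensity_from_counts(spikes))     # ~0.5 and ~0.125 on the diagonal
print(intensity_from_intervals(spikes))  # ~0.5 and ~0.125 on the diagonal
```

On the toy stream, both estimators recover the same 4:1 brightness ratio between the two pixels.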
Related papers
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically meaningful radiance fields from multi-view images.
3DCS achieves superior performance over 3DGS on benchmarks such as MipNeRF360, Tanks and Temples, and Deep Blending.
Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z)
- SCIGS: 3D Gaussians Splatting from a Snapshot Compressive Image [11.391665055835249]
Snapshot Compressive Imaging (SCI) offers a way to capture information in high-speed dynamic scenes.
Current deep learning-based reconstruction methods struggle to maintain 3D structural consistency within scenes.
We propose SCIGS, a variant of 3DGS, and develop a primitive-level transformation network that utilizes camera pose stamps and Gaussian primitive coordinates as embedding.
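The summary gives only the ingredients (camera pose stamps plus Gaussian primitive coordinates as the embedding), so the following PyTorch sketch is a guess at the general shape of such a primitive-level transformation network, not SCIGS itself; all dimensions and the displacement-only output are assumptions.

```python
import torch
import torch.nn as nn

class PrimitiveTransform(nn.Module):
    """Hypothetical primitive-level transformation network: a learnable
    per-frame pose stamp is concatenated with each Gaussian's center, and
    an MLP predicts a per-primitive displacement for that time step."""

    def __init__(self, num_frames: int, stamp_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.pose_stamps = nn.Embedding(num_frames, stamp_dim)
        self.mlp = nn.Sequential(
            nn.Linear(stamp_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # xyz offset per Gaussian
        )

    def forward(self, frame_idx: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
        # centers: (N, 3) Gaussian means; frame_idx: scalar frame id
        stamp = self.pose_stamps(frame_idx).expand(centers.shape[0], -1)
        return centers + self.mlp(torch.cat([stamp, centers], dim=-1))

net = PrimitiveTransform(num_frames=8)
moved = net(torch.tensor(0), torch.randn(1024, 3))  # -> (1024, 3)
```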
arXiv Detail & Related papers (2024-11-19T12:52:37Z)
- Beyond Gaussians: Fast and High-Fidelity 3D Splatting with Linear Kernels [51.08794269211701]
We introduce 3D Linear Splatting (3DLS), which replaces Gaussian kernels with linear kernels to achieve sharper and more precise results.
3DLS demonstrates state-of-the-art fidelity and accuracy, along with a 30% FPS improvement over baseline 3DGS.
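The summary names only the kernel swap. As a toy illustration of why it sharpens results: a Gaussian falloff never reaches zero, while a linear kernel with compact support cuts off exactly, giving splats hard edges. The cutoff radius below is an arbitrary choice, not the paper's.

```python
import numpy as np

def gaussian_kernel(d2: np.ndarray) -> np.ndarray:
    """Standard 3DGS falloff: smooth, infinite-support exp(-d^2 / 2),
    where d2 is the squared Mahalanobis distance to the splat center."""
    return np.exp(-0.5 * d2)

def linear_kernel(d2: np.ndarray) -> np.ndarray:
    """Linear falloff with compact support: 1 at the center, exactly 0
    beyond a cutoff radius (here 3, an illustrative value)."""
    d = np.sqrt(d2)
    return np.clip(1.0 - d / 3.0, 0.0, 1.0)

d2 = np.linspace(0, 16, 5)
print(gaussian_kernel(d2))  # decays but never reaches exactly zero
print(linear_kernel(d2))    # hits zero at d = 3
```

Compact support also means fewer pixels per splat need evaluating, which is consistent with the reported FPS gain.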
arXiv Detail & Related papers (2024-11-19T11:59:54Z)
- 3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors [13.191199172286508]
Novel-view synthesis aims to generate novel views of a scene from multiple input images or videos.
3DGS-Enhancer is a novel pipeline for enhancing the representation quality of 3DGS.
arXiv Detail & Related papers (2024-10-21T17:59:09Z)
- Elite-EvGS: Learning Event-based 3D Gaussian Splatting by Distilling Event-to-Video Priors [8.93657924734248]
Event cameras are bio-inspired sensors that output asynchronous and sparse event streams, instead of fixed frames.
We propose a novel event-based 3DGS framework, named Elite-EvGS.
Our key idea is to distill the prior knowledge from the off-the-shelf event-to-video (E2V) models to effectively reconstruct 3D scenes from events.
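A sketch of the distillation idea as described: run a frozen, off-the-shelf E2V network over the event stream to synthesize intensity frames, then use those frames as photometric targets for 3DGS. The E2V interface here is assumed (any E2VID-style model would fit this shape).

```python
import torch

def distill_e2v_targets(event_voxels, e2v_model):
    """Run a frozen pretrained event-to-video model over per-view event
    voxel grids to synthesize intensity frames; these frames then act as
    photometric pseudo ground truth for 3DGS training."""
    e2v_model.eval()
    with torch.no_grad():
        frames = [e2v_model(voxels) for voxels in event_voxels]
    return torch.stack(frames)

def photometric_loss(rendered: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # The distilled frames supervise the splatting renders.
    return torch.nn.functional.l1_loss(rendered, target)
```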
arXiv Detail & Related papers (2024-09-20T10:47:52Z)
- SpikeGS: 3D Gaussian Splatting from Spike Streams with High-Speed Camera Motion [46.23575738669567]
Novel View Synthesis plays a crucial role by generating new 2D renderings from multi-view images of 3D scenes.
High-frame-rate dense 3D reconstruction emerges as a vital technique, enabling detailed and accurate modeling of real-world objects or scenes.
Spike cameras, a novel type of neuromorphic sensor, continuously record scenes with an ultra-high temporal resolution.
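For readers unfamiliar with the sensor, the integrate-and-fire model commonly used to describe spike cameras can be simulated in a few lines; the threshold and reset-by-subtraction scheme below follow that textbook formulation, with illustrative constants.

```python
import numpy as np

def simulate_spike_stream(radiance: np.ndarray, steps: int,
                          threshold: float = 1.0) -> np.ndarray:
    """Integrate-and-fire spike camera model: each pixel accumulates
    incoming radiance every tick and emits a binary spike (then subtracts
    the threshold) once its accumulator crosses the firing threshold."""
    acc = np.zeros_like(radiance)
    spikes = np.zeros((steps,) + radiance.shape, dtype=np.uint8)
    for t in range(steps):
        acc += radiance                 # continuous photon accumulation
        fired = acc >= threshold
        spikes[t][fired] = 1
        acc[fired] -= threshold         # reset-by-subtraction keeps residual charge
    return spikes

# A pixel with radiance 0.25 fires every 4 ticks; one with 0.5, every 2.
stream = simulate_spike_stream(np.array([[0.25, 0.5]]), steps=8)
print(stream[:, 0, :].T)  # rows: pixels, columns: ticks
```

Unlike event cameras, which report only brightness changes, this firing scheme encodes absolute radiance, which is what makes dense texture recovery from spike streams possible.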
arXiv Detail & Related papers (2024-07-14T03:19:30Z)
- WildGaussians: 3D Gaussian Splatting in the Wild [80.5209105383932]
We introduce WildGaussians, a novel approach to handle occlusions and appearance changes with 3DGS.
We demonstrate that WildGaussians matches the real-time rendering speed of 3DGS while surpassing both 3DGS and NeRF baselines in handling in-the-wild data.
arXiv Detail & Related papers (2024-07-11T12:41:32Z)
- Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
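A sketch of how such a frame-based term is typically formulated: a motion-blurred photo is modeled as the average of sharp renders along the camera trajectory during the exposure, and the loss compares this synthetic blur to the captured frame. The renderer interface and the L1 loss choice are assumptions, not necessarily Event3DGS's.

```python
import torch

def blur_consistency_loss(render_fn, poses_in_exposure, blurred_frame):
    """Average sharp renders over the exposure window to synthesize a
    blurred image, then penalize its distance to the captured blurry
    frame. `render_fn` stands in for any differentiable splatting
    renderer mapping a camera pose to an image tensor."""
    synthetic_blur = torch.stack(
        [render_fn(pose) for pose in poses_in_exposure]
    ).mean(dim=0)
    return torch.nn.functional.l1_loss(synthetic_blur, blurred_frame)
```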
arXiv Detail & Related papers (2024-06-05T06:06:03Z)
- SAGD: Boundary-Enhanced Segment Anything in 3D Gaussian via Gaussian Decomposition [66.80822249039235]
3D Gaussian Splatting has emerged as an alternative 3D representation for novel view synthesis.
We propose SAGD, a conceptually simple yet effective boundary-enhanced segmentation pipeline for 3D-GS.
Our approach achieves high-quality 3D segmentation without rough boundary issues, which can be easily applied to other scene editing tasks.
arXiv Detail & Related papers (2024-01-31T14:19:03Z)