A Survey on Event-driven 3D Reconstruction: Development under Different Categories
- URL: http://arxiv.org/abs/2503.19753v2
- Date: Wed, 26 Mar 2025 12:34:34 GMT
- Title: A Survey on Event-driven 3D Reconstruction: Development under Different Categories
- Authors: Chuanzhi Xu, Haoxian Zhou, Haodong Chen, Vera Chung, Qiang Qu
- Abstract summary: Event cameras have gained increasing attention for 3D reconstruction due to their high temporal resolution, low latency, and high dynamic range. We provide a comprehensive review of event-driven 3D reconstruction methods, including stereo, monocular, and multimodal systems. Emerging trends, such as neural radiance fields and 3D Gaussian splatting with event data, are also covered.
- Score: 4.459934616871806
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Event cameras have gained increasing attention for 3D reconstruction due to their high temporal resolution, low latency, and high dynamic range. They capture per-pixel brightness changes asynchronously, allowing accurate reconstruction under fast motion and challenging lighting conditions. In this survey, we provide a comprehensive review of event-driven 3D reconstruction methods, including stereo, monocular, and multimodal systems. We further categorize recent developments into geometric, learning-based, and hybrid approaches. Emerging trends, such as neural radiance fields and 3D Gaussian splatting with event data, are also covered. The related works are structured chronologically to illustrate the innovations and progression within the field. To support future research, we also highlight key research gaps and future directions in datasets, experiments, evaluation, event representation, and other areas.
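The abstract's description of event cameras ("per-pixel brightness changes captured asynchronously") follows the idealized event generation model common in this literature: a pixel emits an event (x, y, t, polarity) whenever its log intensity drifts from the reference level set at its last event by a fixed contrast threshold. A minimal single-pixel simulation of that model, with a hypothetical function name and an arbitrary threshold value chosen for illustration:

```python
def generate_events(log_intensities, timestamps, x, y, threshold=0.25):
    """Idealized event generation model for one pixel: an event
    (x, y, t, polarity) fires each time the log intensity deviates
    from the reference level (set at the last event) by `threshold`."""
    events = []
    ref = log_intensities[0]  # reference log intensity at the last event
    for L, t in zip(log_intensities[1:], timestamps[1:]):
        # Emit events until the residual change drops below the
        # threshold, mimicking the sensor's asynchronous behaviour.
        while abs(L - ref) >= threshold:
            polarity = 1 if L > ref else -1
            events.append((x, y, t, polarity))
            ref += polarity * threshold
    return events

# A steadily brightening pixel: log intensity ramps from 0.0 to 1.2,
# crossing the 0.25 threshold four times, so four positive events fire.
logs = [0.0, 0.3, 0.6, 0.9, 1.2]
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
evts = generate_events(logs, ts, x=10, y=20, threshold=0.25)
print(len(evts), [e[3] for e in evts])  # 4 events, all polarity +1
```

Because each pixel tracks its own reference level independently, output is sparse where the scene is static and dense where brightness changes quickly, which is exactly the property the surveyed reconstruction methods exploit under fast motion and high dynamic range.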
Related papers
- Advances in Feed-Forward 3D Reconstruction and View Synthesis: A Survey [154.50661618628433]
3D reconstruction and view synthesis are foundational problems in computer vision, graphics, and immersive technologies such as augmented reality (AR), virtual reality (VR), and digital twins. Recent advances in feed-forward approaches, driven by deep learning, have revolutionized this field by enabling fast and generalizable 3D reconstruction and view synthesis.
arXiv Detail & Related papers (2025-07-19T06:13:25Z)
- 3D Shape Generation: A Survey [0.6445605125467574]
Recent advances in deep learning have transformed the field of 3D shape generation. This survey organizes the discussion around three core components: shape representations, generative modeling approaches, and evaluation protocols. We identify open challenges and outline future research directions that could drive progress in controllable, efficient, and high-quality 3D shape generation.
arXiv Detail & Related papers (2025-06-27T23:06:06Z)
- GTR: Gaussian Splatting Tracking and Reconstruction of Unknown Objects Based on Appearance and Geometric Complexity [49.31257173003408]
We present a novel method for 6-DoF object tracking and high-quality 3D reconstruction from monocular RGBD video. Our approach demonstrates strong capabilities in recovering high-fidelity object meshes, setting a new standard for single-sensor 3D reconstruction in open-world environments.
arXiv Detail & Related papers (2025-05-17T08:46:29Z)
- Advances in Radiance Field for Dynamic Scene: From Neural Field to Gaussian Field [85.12359852781216]
This survey presents a systematic analysis of over 200 papers focused on dynamic scene representation using radiance field. We organize diverse methodological approaches under a unified representational framework, concluding with a critical examination of persistent challenges and promising research directions.
arXiv Detail & Related papers (2025-05-15T07:51:08Z)
- A Survey of 3D Reconstruction with Event Cameras [16.103940503726022]
Event cameras produce sparse yet temporally dense data streams, enabling robust and accurate 3D reconstruction. These capabilities offer substantial promise for transformative applications across various fields, including autonomous driving, robotics, aerial navigation, and immersive virtual reality. This survey aims to serve as an essential reference and provides a clear and motivating roadmap toward advancing the state of the art in event-driven 3D reconstruction.
arXiv Detail & Related papers (2025-05-13T11:04:04Z)
- Dynamic Scene Reconstruction: Recent Advance in Real-time Rendering and Streaming [7.250878248686215]
Representing and rendering dynamic scenes from 2D images is a fundamental yet challenging problem in computer vision and graphics.
This survey provides a comprehensive review of the evolution and advancements in dynamic scene representation and rendering.
We systematically summarize existing approaches, categorize them according to their core principles, compile relevant datasets, compare the performance of various methods on these benchmarks, and explore the challenges and future research directions in this rapidly evolving field.
arXiv Detail & Related papers (2025-03-11T08:29:41Z)
- 3D Representation Methods: A Survey [0.0]
3D representation has experienced significant advancements, driven by the increasing demand for high-fidelity 3D models in various applications.
This review examines the development and current state of 3D representation methods, highlighting their research trajectories, innovations, strengths, and weaknesses.
arXiv Detail & Related papers (2024-10-09T02:01:05Z)
- Event-based Stereo Depth Estimation: A Survey [12.711235562366898]
Stereopsis has widespread appeal in robotics as it is the predominant way by which living beings perceive depth to navigate our 3D world.
Event cameras are novel bio-inspired sensors that detect per-pixel brightness changes asynchronously, with very high temporal resolution and high dynamic range.
The high temporal precision also benefits stereo matching, making disparity (depth) estimation a popular research area for event cameras ever since its inception.
arXiv Detail & Related papers (2024-09-26T09:43:50Z)
- Evaluating Modern Approaches in 3D Scene Reconstruction: NeRF vs Gaussian-Based Methods [4.6836510920448715]
This study explores the capabilities of Neural Radiance Fields (NeRF) and Gaussian-based methods in the context of 3D scene reconstruction.
We assess performance based on tracking accuracy, mapping fidelity, and view synthesis.
Findings reveal that NeRF excels in view synthesis, offering unique capabilities in generating new perspectives from existing data.
arXiv Detail & Related papers (2024-08-08T07:11:57Z)
- Gaussian Splatting: 3D Reconstruction and Novel View Synthesis, a Review [0.08823202672546056]
This review paper focuses on state-of-the-art techniques for 3D reconstruction, including the generation of novel, unseen views.
An overview of recent developments in the Gaussian Splatting method is provided, covering input types, model structures, output representations, and training strategies.
arXiv Detail & Related papers (2024-05-06T12:32:38Z)
- OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation [67.56268991234371]
OV-Uni3DETR achieves state-of-the-art performance across various scenarios, surpassing existing methods by more than 6% on average.
Code and pre-trained models will be released later.
arXiv Detail & Related papers (2024-03-28T17:05:04Z)
- Advances in 3D Generation: A Survey [54.95024616672868]
The field of 3D content generation is developing rapidly, enabling the creation of increasingly high-quality and diverse 3D models.
Specifically, we introduce the 3D representations that serve as the backbone for 3D generation.
We provide a comprehensive overview of the rapidly growing literature on generation methods, categorized by the type of algorithmic paradigms.
arXiv Detail & Related papers (2024-01-31T13:06:48Z)
- A Survey on 3D Gaussian Splatting [51.96747208581275]
3D Gaussian splatting (GS) has emerged as a transformative technique in explicit radiance fields and computer graphics. We provide the first systematic overview of the recent developments and critical contributions in the domain of 3D GS. By enabling unprecedented rendering speed, 3D GS opens up a plethora of applications, ranging from virtual reality to interactive media and beyond.
arXiv Detail & Related papers (2024-01-08T13:42:59Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
This review covers event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Deep Learning for Event-based Vision: A Comprehensive Survey and Benchmarks [55.81577205593956]
Event cameras are bio-inspired sensors that capture the per-pixel intensity changes asynchronously.
Deep learning (DL) has been brought to this emerging field and inspired active research endeavors in mining its potential.
arXiv Detail & Related papers (2023-02-17T14:19:28Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.