Implicit 3D scene reconstruction using deep learning towards efficient collision understanding in autonomous driving
- URL: http://arxiv.org/abs/2506.15806v1
- Date: Wed, 18 Jun 2025 18:42:04 GMT
- Title: Implicit 3D scene reconstruction using deep learning towards efficient collision understanding in autonomous driving
- Authors: Akarshani Ramanayake, Nihal Kodikara
- Abstract summary: This study develops a learning-based 3D scene reconstruction methodology that leverages LiDAR data and a deep neural network to build static Signed Distance Function (SDF) maps. Our preliminary results demonstrate that this method would significantly enhance collision detection performance, particularly in congested and dynamic environments.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In crowded urban environments with dense traffic, current technologies struggle to manage tight navigation, yet surface-level understanding would allow autonomous vehicles to safely assess their proximity to surrounding obstacles. 3D or 2D scene mapping of the surrounding objects is an essential step in addressing this problem. Despite its importance in dense traffic conditions, 3D reconstruction of object shapes with high boundary-level accuracy has not yet been fully addressed in the current literature. The signed distance function represents a shape through parameters that give the distance from any point in space to the closest obstacle surface, making it efficient in terms of storage. In recent studies, researchers have begun to formulate implicit 3D reconstruction problems in the autonomous driving domain, highlighting the possibility of using signed distance functions to map obstacles effectively. This research addresses this gap by developing a learning-based 3D scene reconstruction methodology that leverages LiDAR data and a deep neural network to build static Signed Distance Function (SDF) maps. Unlike traditional polygonal representations, this approach has the potential to map 3D obstacle shapes with more boundary-level detail. Our preliminary results demonstrate that this method would significantly enhance collision detection performance, particularly in congested and dynamic environments.
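To make the core idea concrete, the minimal sketch below shows how a learned SDF map might be queried for collision checking. The paper does not specify its network; the small MLP, layer sizes, and safety margin are illustrative assumptions only.

```python
# A minimal sketch of SDF-based collision checking, not the authors' actual
# network: the MLP below is a generic stand-in that maps a 3D query point to
# the signed distance to the nearest obstacle surface (negative inside an
# obstacle, positive in free space).
import torch
import torch.nn as nn

class SDFNet(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        return self.net(points).squeeze(-1)

def in_collision(sdf: nn.Module, points: torch.Tensor, margin: float = 0.3) -> torch.Tensor:
    # A point is flagged when its predicted distance to the nearest surface
    # falls below a safety margin (metres); the 0.3 m value is hypothetical.
    with torch.no_grad():
        return sdf(points) < margin

# Usage: query candidate trajectory waypoints against the learned static map.
sdf = SDFNet()  # in practice trained on LiDAR-derived distance samples
waypoints = torch.tensor([[2.0, 0.5, 0.0], [10.0, -1.2, 0.0]])
print(in_collision(sdf, waypoints))
```

Because each query is a single forward pass, proximity to all mapped obstacles is obtained without iterating over polygonal surfaces, which is what makes the representation attractive in dense traffic.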
Related papers
- Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation [54.04601077224252]
Embodied scene understanding requires not only comprehending visual-spatial information but also determining where to explore next in the 3D physical world. 3D vision-language learning enables embodied agents to effectively explore and understand their environment. The model's versatility enables navigation using diverse input modalities, including categories, language descriptions, and reference images.
arXiv Detail & Related papers (2025-07-05T14:15:52Z)
- GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control [50.67481583744243]
We introduce GeoDrive, which explicitly integrates robust 3D geometry conditions into driving world models. We propose a dynamic editing module used during training that enhances the renderings by editing vehicle positions. Our method significantly outperforms existing models in both action accuracy and 3D spatial awareness.
arXiv Detail & Related papers (2025-05-28T14:46:51Z)
- 3DTTNet: Multimodal Fusion-Based 3D Traversable Terrain Modeling for Off-Road Environments [10.521569910467072]
Off-road environments pose significant challenges for autonomous ground vehicles. In this paper, traversable area recognition is achieved through semantic scene completion. A novel multimodal method, 3DTTNet, is proposed to generate dense traversable terrain estimations.
arXiv Detail & Related papers (2024-12-11T08:36:36Z)
- Street Gaussians without 3D Object Tracker [86.62329193275916]
Existing methods rely on labor-intensive manual labeling of object poses to reconstruct dynamic objects in canonical space. We propose a stable object tracking module by leveraging associations from 2D deep trackers within a 3D object fusion strategy. We address inevitable tracking errors by further introducing a motion learning strategy in an implicit feature space that autonomously corrects trajectory errors and recovers missed detections.
arXiv Detail & Related papers (2024-12-07T05:49:42Z)
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
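As a rough illustration of the fusion idea (not the paper's scene-specific neural field), a confidence-weighted average over aligned submaps can be sketched on a dense voxel grid:

```python
# Simplified, dense-grid stand-in for confidence-weighted submap fusion; the
# paper uses a scene-specific Neural Signed Distance Field with sparse
# feature grids, so this NumPy version only illustrates the weighting idea.
import numpy as np

def fuse_submaps(sdfs, confs):
    # Weight each voxel's signed-distance estimate by its confidence so that
    # noisy or incomplete observations contribute less to the merged map.
    num = sum(c * s for s, c in zip(sdfs, confs))
    den = sum(confs) + 1e-8  # guard against unobserved (zero-confidence) voxels
    return num / den

# Two noisy, pre-aligned 64^3 submaps with per-voxel confidence scores.
a, b = np.random.randn(64, 64, 64), np.random.randn(64, 64, 64)
ca, cb = np.random.rand(64, 64, 64), np.random.rand(64, 64, 64)
fused = fuse_submaps([a, b], [ca, cb])
```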
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- HeightFormer: A Semantic Alignment Monocular 3D Object Detection Method from Roadside Perspective [11.841338298700421]
We propose a novel 3D object detection framework integrating Spatial Former and Voxel Pooling Former to enhance 2D-to-3D projection based on height estimation.
Experiments were conducted using the Rope3D and DAIR-V2X-I datasets, and the results demonstrated that the proposed algorithm outperforms existing methods in detecting both vehicles and cyclists.
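The geometric core of height-based 2D-to-3D projection can be sketched independently of the paper's Spatial Former and Voxel Pooling Former: with a calibrated roadside camera, a pixel plus an estimated height above the ground plane fixes a unique 3D point by ray-plane intersection. All values below are hypothetical.

```python
# Hedged geometric sketch of height-based 2D-to-3D lifting (not the
# HeightFormer architecture): intersect the viewing ray through a pixel with
# the horizontal plane z = height in a z-up world frame.
import numpy as np

def lift_pixel(u, v, K, R, cam_pos, height):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in the camera frame
    ray_world = R @ ray_cam                             # rotate into the world frame
    s = (height - cam_pos[2]) / ray_world[2]            # scale to reach z = height
    return cam_pos + s * ray_world

K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1.0]])  # toy intrinsics
theta = np.deg2rad(30)  # downward tilt of the roadside camera
R = np.array([[0, -np.sin(theta),  np.cos(theta)],   # camera-to-world rotation
              [-1, 0, 0],
              [0, -np.cos(theta), -np.sin(theta)]])
cam_pos = np.array([0.0, 0.0, 6.0])  # camera mounted 6 m up a roadside pole
print(lift_pixel(960, 540, K, R, cam_pos, height=1.0))  # point 1 m above the road
```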
arXiv Detail & Related papers (2024-10-10T09:37:33Z)
- LaneCPP: Continuous 3D Lane Detection using Physical Priors [45.52331418900137]
LaneCPP uses a continuous 3D lane detection model that leverages physical prior knowledge about lane structure and road geometry.
We show the benefits of our contributions and demonstrate the value of using priors to make 3D lane detection more robust.
arXiv Detail & Related papers (2024-06-12T16:31:06Z)
- Real-time 3D semantic occupancy prediction for autonomous vehicles using memory-efficient sparse convolution [4.204990010424084]
In autonomous vehicles, understanding the surrounding 3D environment of the ego vehicle in real time is essential.
State-of-the-art 3D mapping methods leverage transformers with cross-attention mechanisms to elevate 2D vision-centric camera features into the 3D domain.
This paper introduces an approach that extracts features from front-view 2D camera images and LiDAR scans, then employs a sparse convolution network (Minkowski Engine) for 3D semantic occupancy prediction.
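A minimal sketch of the sparse-convolution part, assuming the MinkowskiEngine library is available; the two-layer head, channel sizes, and class count are placeholders rather than the authors' architecture:

```python
# Minimal sparse-convolution sketch in the spirit of the described pipeline,
# assuming MinkowskiEngine is installed; layer names follow its public API.
import torch
import MinkowskiEngine as ME

class SparseOccHead(torch.nn.Module):
    """Predicts per-voxel semantic occupancy logits from sparse fused features."""
    def __init__(self, in_ch: int = 16, num_classes: int = 20):
        super().__init__()
        self.conv1 = ME.MinkowskiConvolution(in_ch, 64, kernel_size=3, dimension=3)
        self.relu = ME.MinkowskiReLU()
        self.conv2 = ME.MinkowskiConvolution(64, num_classes, kernel_size=3, dimension=3)

    def forward(self, x: ME.SparseTensor) -> ME.SparseTensor:
        return self.conv2(self.relu(self.conv1(x)))

# Voxelised LiDAR coordinates (integer grid indices) carrying fused
# camera/LiDAR features; only occupied voxels are stored and convolved.
coords = ME.utils.batched_coordinates([torch.randint(0, 100, (1000, 3))])
feats = torch.rand(coords.shape[0], 16)
logits = SparseOccHead()(ME.SparseTensor(features=feats, coordinates=coords))
```

Storing and convolving only occupied voxels is what keeps the memory footprint compatible with real-time use.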
arXiv Detail & Related papers (2024-03-13T17:50:59Z)
- 3D Data Augmentation for Driving Scenes on Camera [50.41413053812315]
We propose a 3D data augmentation approach termed Drive-3DAug, aimed at augmenting camera-based driving scenes in 3D space.
We first utilize a Neural Radiance Field (NeRF) to reconstruct 3D models of the background and foreground objects.
Augmented driving scenes can then be obtained by placing the 3D objects, with adapted locations and orientations, in pre-defined valid regions of the backgrounds.
arXiv Detail & Related papers (2023-03-18T05:51:05Z)
- ONCE-3DLanes: Building Monocular 3D Lane Detection [41.46466150783367]
We present ONCE-3DLanes, a real-world autonomous driving dataset with lane layout annotation in 3D space.
By exploiting the explicit relationship between point clouds and image pixels, a dataset annotation pipeline is designed to automatically generate high-quality 3D lane locations.
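The point-to-pixel correspondence such a pipeline relies on can be sketched as follows; the calibration matrices and lane mask are assumed inputs, and this is an illustration rather than the paper's actual pipeline.

```python
# Hedged sketch: project LiDAR points into the image and keep those landing
# on a 2D lane-segmentation mask, yielding 3D lane locations "for free".
import numpy as np

def lidar_points_on_lane(points, K, T_cam_lidar, lane_mask):
    """K (3x3 intrinsics), T_cam_lidar (4x4 extrinsics), and the boolean
    HxW lane_mask are assumed given."""
    pts_h = np.c_[points, np.ones(len(points))]   # Nx4 homogeneous coords
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]        # LiDAR frame -> camera frame
    front = cam[:, 2] > 0                         # keep points ahead of the camera
    points, cam = points[front], cam[front]
    uv = (K @ cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)     # perspective divide to pixels
    h, w = lane_mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    points, uv = points[ok], uv[ok]
    return points[lane_mask[uv[:, 1], uv[:, 0]]]  # points whose pixel is on a lane

# Toy usage with hypothetical calibration and a synthetic lane mask.
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1.0]])
pts = np.random.uniform(-20, 20, (5000, 3))
mask = np.zeros((1080, 1920), dtype=bool)
mask[600:700, :] = True
lane_pts_3d = lidar_points_on_lane(pts, K, np.eye(4), mask)
```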
arXiv Detail & Related papers (2022-04-30T16:35:25Z)
- Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
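A generic neural message-passing step for data association looks roughly like the sketch below; the network sizes and the GRU-based node update are illustrative stand-ins, not the paper's exact design.

```python
# Generic message-passing sketch for track-detection association (a
# simplified stand-in, not the paper's network): nodes exchange messages
# along candidate edges, and an edge classifier scores each match.
import torch
import torch.nn as nn

class AssociationMPN(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())  # edge message
        self.upd = nn.GRUCell(dim, dim)                               # node update
        self.edge_cls = nn.Linear(2 * dim, 1)                         # match score

    def forward(self, x, edges, steps: int = 3):
        src, dst = edges  # candidate track-detection pairs
        for _ in range(steps):
            m = self.msg(torch.cat([x[src], x[dst]], dim=-1))
            agg = torch.zeros_like(x).index_add_(0, dst, m)  # sum incoming messages
            x = self.upd(agg, x)
        return self.edge_cls(torch.cat([x[src], x[dst]], dim=-1)).squeeze(-1)

x = torch.randn(8, 64)                # 8 node embeddings (tracks + detections)
edges = torch.randint(0, 8, (2, 20))  # 20 candidate association edges
scores = AssociationMPN()(x, edges)   # higher score = more likely same object
```

Because the message and update functions are ordinary network layers, the whole association step is fully trainable end to end, which is the property the paper highlights.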
arXiv Detail & Related papers (2021-04-23T17:59:28Z)