LiDAR Based Semantic Perception for Forklifts in Outdoor Environments
- URL: http://arxiv.org/abs/2505.22258v1
- Date: Wed, 28 May 2025 11:45:14 GMT
- Title: LiDAR Based Semantic Perception for Forklifts in Outdoor Environments
- Authors: Benjamin Serfling, Hannes Reichert, Lorenzo Bayerlein, Konrad Doll, Kati Radkhah-Lens
- Abstract summary: We present a novel LiDAR-based semantic segmentation framework tailored for autonomous forklifts operating in complex outdoor environments. Central to our approach is the integration of a dual LiDAR system, which combines forward-facing and downward-angled LiDAR sensors. Using high-resolution 3D point clouds captured from two sensors, our method employs a lightweight yet robust approach that segments the point clouds into safety-critical instance classes.
- Score: 0.31457219084519
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this study, we present a novel LiDAR-based semantic segmentation framework tailored for autonomous forklifts operating in complex outdoor environments. Central to our approach is the integration of a dual LiDAR system, which combines forward-facing and downward-angled LiDAR sensors to enable comprehensive scene understanding, specifically tailored for industrial material handling tasks. The dual configuration improves the detection and segmentation of dynamic and static obstacles with high spatial precision. Using high-resolution 3D point clouds captured from two sensors, our method employs a lightweight yet robust approach that segments the point clouds into safety-critical instance classes such as pedestrians, vehicles, and forklifts, as well as environmental classes such as driveable ground, lanes, and buildings. Experimental validation demonstrates that our approach achieves high segmentation accuracy while satisfying strict runtime requirements, establishing its viability for safety-aware, fully autonomous forklift navigation in dynamic warehouse and yard environments.
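The abstract names the sensor setup and the target classes but not the fusion or network details. As a minimal sketch only, the snippet below shows how two calibrated LiDAR scans might be merged into a common vehicle frame before per-point classification; the function names, the 4x4 extrinsics, the class list, and the placeholder `segmentation_model` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical label set; the paper names these categories but not an exact label map.
CLASSES = ["pedestrian", "vehicle", "forklift", "drivable_ground", "lane", "building"]

def to_vehicle_frame(points_xyz: np.ndarray, extrinsic: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) point cloud into the forklift frame with a 4x4 extrinsic."""
    homo = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])  # (N, 4)
    return (homo @ extrinsic.T)[:, :3]

def fuse_dual_lidar(front_xyz, down_xyz, T_front, T_down):
    """Merge forward-facing and downward-angled scans into one cloud.

    T_front / T_down are assumed 4x4 sensor-to-vehicle calibration matrices.
    """
    return np.vstack([
        to_vehicle_frame(front_xyz, T_front),
        to_vehicle_frame(down_xyz, T_down),
    ])

# Downstream, a lightweight per-point segmentation network (not specified in the
# abstract) would assign one of CLASSES to every fused point, e.g.:
#   labels = segmentation_model(fused)            # (N,) ints indexing CLASSES
#   safety_mask = np.isin(labels, [0, 1, 2])      # pedestrians, vehicles, forklifts
```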
Related papers
- WLTCL: Wide Field-of-View 3-D LiDAR Truck Compartment Automatic Localization System [9.07574138083974]
We propose an innovative wide field-of-view 3-D LiDAR vehicle compartment automatic localization system. For vehicles of various sizes, this system leverages the LiDAR to generate high-density point clouds within an extensive field-of-view range. Our compartment key point positioning algorithm utilizes the geometric features of the compartments to accurately locate the corner points.
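The summary states only that corner points are located from geometric features. As a generic illustration (not WLTCL's actual key-point algorithm), the sketch below estimates the four footprint corners of a roughly rectangular compartment from its top-down projection via a minimum-area bounding rectangle.

```python
import numpy as np
from scipy.spatial import ConvexHull

def compartment_corners(points_xy: np.ndarray) -> np.ndarray:
    """Estimate four corner points of a (roughly rectangular) compartment
    footprint from its top-down projection using a minimum-area bounding box."""
    hull = points_xy[ConvexHull(points_xy).vertices]
    best_area, best_corners = np.inf, None
    for i in range(len(hull)):
        edge = hull[(i + 1) % len(hull)] - hull[i]
        theta = np.arctan2(edge[1], edge[0])
        rot = np.array([[np.cos(-theta), -np.sin(-theta)],
                        [np.sin(-theta),  np.cos(-theta)]])
        r = hull @ rot.T                       # rotate hull so the edge is axis-aligned
        lo, hi = r.min(axis=0), r.max(axis=0)
        area = np.prod(hi - lo)
        if area < best_area:
            box = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                            [hi[0], hi[1]], [lo[0], hi[1]]])
            best_area, best_corners = area, box @ rot   # rotate corners back
    return best_corners
```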
arXiv Detail & Related papers (2025-04-26T09:35:47Z) - Salient Object Detection in Traffic Scene through the TSOD10K Dataset [22.615252113004402]
Traffic Salient Object Detection (TSOD) aims to segment the objects critical to driving safety by combining semantic (e.g., collision risks) and visual saliency. Our research establishes the first foundation for safety-aware saliency analysis in intelligent transportation systems.
arXiv Detail & Related papers (2025-03-21T07:21:24Z) - Semantic Scene Completion Based 3D Traversability Estimation for Off-Road Terrains [10.521569910467072]
Off-road environments present significant challenges for autonomous ground vehicles. Traditional perception algorithms, designed primarily for structured environments, often fail under these conditions. In this paper, ORDformer is proposed to generate dense traversable occupancy predictions from a forward-facing perspective.
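ORDformer predicts traversable occupancy with a learned model; for context only, the sketch below shows a classical slope/step-height heuristic over a dense height map, which is the kind of baseline such learned traversability estimators replace. Thresholds and the window size are my assumptions.

```python
import numpy as np

def traversability_from_heightmap(height, cell=0.2, max_slope_deg=20.0, max_step=0.15):
    """Mark grid cells as traversable using simple slope and step-height thresholds
    (a generic heuristic baseline, not ORDformer's learned prediction)."""
    gy, gx = np.gradient(height, cell)                    # local surface gradients
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    pad = np.pad(height, 1, mode="edge")                  # 3x3 neighbourhood maximum
    neigh_max = np.max([pad[i:i + height.shape[0], j:j + height.shape[1]]
                        for i in range(3) for j in range(3)], axis=0)
    step = neigh_max - height
    return (slope <= max_slope_deg) & (step <= max_step)
```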
arXiv Detail & Related papers (2024-12-11T08:36:36Z) - Grammarization-Based Grasping with Deep Multi-Autoencoder Latent Space Exploration by Reinforcement Learning Agent [0.0]
We propose a novel framework for robotic grasping based on the idea of compressing high-dimensional target and gripper features in a common latent space.
Our approach simplifies grasping by using three autoencoders dedicated to the target, the gripper, and a third one that fuses their latent representations.
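A minimal PyTorch sketch of the three-autoencoder arrangement described above: two autoencoders compress target and gripper features, and a third fuses their concatenated latents. Dimensions, layer sizes, and variable names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Minimal autoencoder; dimensions are illustrative, not from the paper."""
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

target_ae  = AE(in_dim=256, latent_dim=32)   # encodes target-object features
gripper_ae = AE(in_dim=64,  latent_dim=32)   # encodes gripper features
fusion_ae  = AE(in_dim=64,  latent_dim=16)   # fuses concatenated target+gripper latents

target_feat  = torch.randn(8, 256)           # dummy batch of target descriptors
gripper_feat = torch.randn(8, 64)            # dummy batch of gripper descriptors

_, z_t = target_ae(target_feat)
_, z_g = gripper_ae(gripper_feat)
recon, z_fused = fusion_ae(torch.cat([z_t, z_g], dim=-1))
# An RL agent would then explore the fused latent space (16-D here) to select grasps.
```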
arXiv Detail & Related papers (2024-11-13T12:26:08Z) - Is Your LiDAR Placement Optimized for 3D Scene Understanding? [8.233185931617122]
Prevailing driving datasets predominantly utilize single-LiDAR systems and collect data devoid of adverse conditions.
We propose Place3D, a full-cycle pipeline that encompasses LiDAR placement optimization, data generation, and downstream evaluations.
We showcase exceptional results in both LiDAR semantic segmentation and 3D object detection tasks, under diverse weather and sensor failure conditions.
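Place3D optimizes placement against task-level metrics; purely as a toy illustration of the placement-search idea, the sketch below scores candidate two-sensor mounting configurations by a simple bird's-eye-view coverage fraction. All positions, ranges, and the objective are my assumptions.

```python
import numpy as np
from itertools import combinations

def coverage(sensors, cells, max_range=50.0):
    """Fraction of BEV cells within range of at least one sensor (a toy
    surrogate for placement quality; Place3D optimizes richer objectives)."""
    d = np.linalg.norm(cells[:, None, :] - np.asarray(sensors)[None, :, :], axis=-1)
    return float((d.min(axis=1) <= max_range).mean())

# Exhaustively score 2-sensor placements drawn from hypothetical mounting points.
candidates = [(0.0, 0.0), (1.5, 0.8), (1.5, -0.8), (-1.5, 0.0)]
xx, yy = np.meshgrid(np.linspace(-60, 60, 61), np.linspace(-60, 60, 61))
cells = np.stack([xx.ravel(), yy.ravel()], axis=1)
best_pair = max(combinations(candidates, 2), key=lambda pair: coverage(pair, cells))
```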
arXiv Detail & Related papers (2024-03-25T17:59:58Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
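"Spatial quantized historical features" suggests accumulating statistics from past traversals onto a coarse grid. The sketch below shows one plausible (assumed, not the paper's) way to do this with a 2-D point-count histogram over registered past scans; cell size and ranges are placeholders.

```python
import numpy as np

def quantize_history(past_scans, cell=0.5, x_range=(-50, 50), y_range=(-50, 50)):
    """Accumulate points from past traversals into a coarse 2-D histogram.

    `past_scans` is a list of (N_i, 3) arrays already registered to a common
    map frame. Resolution and ranges are illustrative, not from the paper.
    """
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.float32)
    for scan in past_scans:
        ix = ((scan[:, 0] - x_range[0]) / cell).astype(int)
        iy = ((scan[:, 1] - y_range[0]) / cell).astype(int)
        keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        np.add.at(grid, (ix[keep], iy[keep]), 1.0)
    return np.log1p(grid)   # compress dynamic range before feeding the detector
```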
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Benchmarking the Robustness of LiDAR Semantic Segmentation Models [78.6597530416523]
In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions.
We propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise and cross-device discrepancy.
We design a robust LiDAR segmentation model (RLSeg) which greatly boosts the robustness with simple but effective modifications.
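To make the corruption idea concrete, here is a minimal sketch of two simple synthetic LiDAR corruptions (range jitter and point dropout). The exact 16 corruption types and severity levels are defined in the SemanticKITTI-C benchmark itself; the parameters below are illustrative.

```python
import numpy as np

def corrupt_scan(points: np.ndarray, noise_std=0.03, drop_prob=0.1, rng=None):
    """Apply two simple LiDAR corruptions: Gaussian coordinate jitter and random
    point dropout. Parameters are illustrative, not the benchmark's settings."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = points + rng.normal(0.0, noise_std, size=points.shape)
    keep = rng.random(len(points)) > drop_prob
    return noisy[keep], keep   # `keep` lets labels be subsampled consistently
```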
arXiv Detail & Related papers (2023-01-03T06:47:31Z) - LiDAR-based 4D Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
Our proposed DS-Net achieves superior accuracies over current state-of-the-art methods in both tasks.
We extend DS-Net to 4D panoptic LiDAR segmentation by the temporally unified instance clustering on aligned LiDAR frames.
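DS-Net's dynamic shifting adapts the kernel bandwidth per point when clustering thing-class points into instances. The sketch below shows the simpler fixed-bandwidth mean-shift procedure it generalizes; bandwidth, iteration count, and merge radius are assumed values for illustration.

```python
import numpy as np

def mean_shift_step(points_xy, bandwidth=1.5):
    """One fixed-bandwidth mean-shift iteration over thing-class points
    (DS-Net instead learns to adapt the bandwidth per point)."""
    d2 = ((points_xy[:, None, :] - points_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))              # Gaussian kernel weights
    return (w @ points_xy) / w.sum(axis=1, keepdims=True)

def cluster_instances(points_xy, iters=5, merge_radius=0.3):
    """Shift points toward density peaks, then group converged points."""
    shifted = points_xy.copy()
    for _ in range(iters):
        shifted = mean_shift_step(shifted)
    labels, centers = -np.ones(len(shifted), dtype=int), []
    for i, p in enumerate(shifted):
        for c_id, c in enumerate(centers):
            if np.linalg.norm(p - c) < merge_radius:
                labels[i] = c_id
                break
        else:
            centers.append(p)
            labels[i] = len(centers) - 1
    return labels
```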
arXiv Detail & Related papers (2022-03-14T15:25:42Z) - PillarGrid: Deep Learning-based Cooperative Perception for 3D Object Detection from Onboard-Roadside LiDAR [15.195933965761645]
We propose PillarGrid, a novel cooperative perception method fusing information from multiple 3D LiDARs.
PillarGrid consists of four main phases: 1) cooperative preprocessing of point clouds, 2) pillar-wise voxelization and feature extraction, 3) grid-wise deep fusion of features from multiple sensors, and 4) convolutional neural network (CNN)-based augmented 3D object detection.
Extensive experimentation shows that PillarGrid outperforms the SOTA single-LiDAR-based 3D object detection methods with respect to both accuracy and range by a large margin.
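Phase 2 above (pillar-wise voxelization) groups points into vertical columns on a bird's-eye-view grid before feature extraction. The sketch below shows that grouping step in simplified form; cell size, ranges, and the per-pillar point cap follow common PointPillars-style defaults and are not necessarily PillarGrid's settings.

```python
import numpy as np

def pillarize(points, cell=0.16, x_range=(0, 70.4), y_range=(-40, 40), max_pts=32):
    """Group points into vertical pillars on a 2-D BEV grid (simplified
    pillar-wise voxelization; sizes are assumed defaults)."""
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    pillars = {}
    for p, i, j in zip(points[valid], ix[valid], iy[valid]):
        bucket = pillars.setdefault((i, j), [])
        if len(bucket) < max_pts:           # cap points per pillar
            bucket.append(p)
    return pillars                          # {(i, j): [points]} ready for feature extraction
```

In phase 3, per-pillar features computed from each sensor's grid would then be fused cell by cell (e.g., by an element-wise maximum) before the CNN detection head.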
arXiv Detail & Related papers (2022-03-12T02:28:41Z) - Transferable Deep Reinforcement Learning Framework for Autonomous Vehicles with Joint Radar-Data Communications [69.24726496448713]
We propose an intelligent optimization framework based on the Markov Decision Process (MDP) to help the autonomous vehicle (AV) make optimal decisions.
We then develop an effective learning algorithm leveraging recent advances of deep reinforcement learning techniques to find the optimal policy for the AV.
We show that the proposed transferable deep reinforcement learning framework reduces the obstacle miss detection probability by the AV up to 67% compared to other conventional deep reinforcement learning approaches.
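The paper uses deep reinforcement learning; as a minimal illustration of the underlying MDP value-update idea only, here is a toy tabular Q-learning loop in which an agent chooses between a "sense" (radar) and a "transmit" (data communication) action. States, rewards, and transitions are placeholders, not the paper's formulation.

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 2          # action 0: sense (radar), action 1: transmit
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(s, a):
    """Placeholder environment: reward favors sensing in 'obstacle-likely'
    states (s >= 7), a small reward otherwise for transmitting data."""
    r = 1.0 if (s >= 7 and a == 0) else 0.2 * a
    return rng.integers(N_STATES), r

s = 0
for _ in range(5000):
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Bellman update
    s = s2
```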
arXiv Detail & Related papers (2021-05-28T08:45:37Z) - LiDAR-based Panoptic Segmentation via Dynamic Shifting Network [56.71765153629892]
LiDAR-based panoptic segmentation aims to parse both objects and scenes in a unified manner.
We propose the Dynamic Shifting Network (DS-Net), which serves as an effective panoptic segmentation framework in the point cloud realm.
Our proposed DS-Net achieves superior accuracies over current state-of-the-art methods.
arXiv Detail & Related papers (2020-11-24T08:44:46Z) - LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE provides the research community with a means for fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.