Pandar128 dataset for lane line detection
- URL: http://arxiv.org/abs/2511.07084v1
- Date: Mon, 10 Nov 2025 13:18:36 GMT
- Title: Pandar128 dataset for lane line detection
- Authors: Filip Beránek, Václav Diviš, Ivan Gruber
- Abstract summary: Pandar128 is the largest public dataset for lane line detection using a 128-beam LiDAR. It contains over 52,000 camera frames and 34,000 LiDAR scans, captured in diverse real-world conditions in Germany. To complement the dataset, we also introduce SimpleLidarLane, a lightweight baseline method for lane line reconstruction.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Pandar128, the largest public dataset for lane line detection using a 128-beam LiDAR. It contains over 52,000 camera frames and 34,000 LiDAR scans, captured in diverse real-world conditions in Germany. The dataset includes full sensor calibration (intrinsics, extrinsics) and synchronized odometry, supporting tasks such as projection, fusion, and temporal modeling. To complement the dataset, we also introduce SimpleLidarLane, a lightweight baseline method for lane line reconstruction that combines BEV segmentation, clustering, and polyline fitting. Despite its simplicity, our method achieves strong performance under various challenging conditions (e.g., rain, sparse returns), showing that modular pipelines paired with high-quality data and principled evaluation can compete with more complex approaches. Furthermore, to address the lack of standardized evaluation, we propose a novel polyline-based metric - Interpolation-Aware Matching F1 (IAM-F1) - that employs interpolation-aware lateral matching in BEV space. All data and code are publicly released to support reproducibility in LiDAR-based lane detection.
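The abstract describes IAM-F1 as interpolation-aware lateral matching of polylines in BEV space. A minimal sketch of that idea, assuming a simple formulation (interpolate each polyline at the other's longitudinal positions, count a point as matched if the lateral offset is below a threshold, then compute F1); the function names and the threshold value are illustrative, not taken from the paper:

```python
import numpy as np

def lateral_error(pt, polyline):
    """Interpolate the polyline's lateral (y) position at the query
    point's longitudinal (x) coordinate and return the absolute
    lateral offset. Assumes BEV polylines with ascending x."""
    xs, ys = polyline[:, 0], polyline[:, 1]
    if not (xs[0] <= pt[0] <= xs[-1]):
        return np.inf  # point lies outside the polyline's extent
    return abs(pt[1] - np.interp(pt[0], xs, ys))

def iam_f1(pred_lines, gt_lines, tau=0.5):
    """Toy interpolation-aware F1: a predicted point is a true positive
    if some ground-truth polyline passes within `tau` metres laterally;
    recall is computed symmetrically on ground-truth points."""
    tp_pred = sum(any(lateral_error(p, g) < tau for g in gt_lines)
                  for line in pred_lines for p in line)
    n_pred = sum(len(line) for line in pred_lines)
    tp_gt = sum(any(lateral_error(p, g) < tau for g in pred_lines)
                for line in gt_lines for p in line)
    n_gt = sum(len(line) for line in gt_lines)
    precision = tp_pred / max(n_pred, 1)
    recall = tp_gt / max(n_gt, 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```

Interpolating rather than matching raw vertices means the metric does not penalize a prediction merely for sampling the lane at different longitudinal positions than the ground truth.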
Related papers
- Li-ViP3D++: Query-Gated Deformable Camera-LiDAR Fusion for End-to-End Perception and Trajectory Prediction [0.0]
Li-ViP3D++ is a query-based attention framework for end-to-end perception and trajectory prediction from raw sensor data.
arXiv Detail & Related papers (2026-01-28T15:53:32Z) - OpenDataArena: A Fair and Open Arena for Benchmarking Post-Training Dataset Value [74.80873109856563]
OpenDataArena (ODA) is a holistic and open platform designed to benchmark the intrinsic value of post-training data. ODA establishes a comprehensive ecosystem comprising four key pillars: (i) a unified training-evaluation pipeline that ensures fair, open comparisons across diverse models; (ii) a multi-dimensional scoring framework that profiles data quality along tens of distinct axes; and (iii) an interactive data lineage explorer to visualize dataset genealogy and dissect component sources.
arXiv Detail & Related papers (2025-12-16T03:33:24Z) - LAMP: Data-Efficient Linear Affine Weight-Space Models for Parameter-Controlled 3D Shape Generation and Extrapolation [4.182541493191528]
We introduce LAMP, a framework for controllable and interpretable 3D generation. We evaluate LAMP on two 3D parametric geometry benchmarks: DrivAerNet++ and BlendedNet. Our results demonstrate that LAMP advances controllable, data-efficient, and safe 3D generation.
arXiv Detail & Related papers (2025-10-26T02:12:20Z) - Multimodal HD Mapping for Intersections by Intelligent Roadside Units [21.3691460430126]
High-definition (HD) semantic mapping of complex intersections poses significant challenges for vehicle-based approaches. This paper introduces a novel camera-LiDAR fusion framework that leverages elevated intelligent roadside units (IRUs). We present RS-seq, a comprehensive dataset developed through the systematic enhancement and annotation of the V2X-Seq dataset.
arXiv Detail & Related papers (2025-07-11T08:45:56Z) - Mixed Signals: A Diverse Point Cloud Dataset for Heterogeneous LiDAR V2X Collaboration [57.3519952529079]
Vehicle-to-everything (V2X) collaborative perception has emerged as a promising solution to address the limitations of single-vehicle perception systems. To address these gaps, we present Mixed Signals, a comprehensive V2X dataset featuring 45.1k point clouds and 240.6k bounding boxes. Our dataset provides point clouds and bounding box annotations across 10 classes, ensuring reliable data for perception training.
arXiv Detail & Related papers (2025-02-19T23:53:00Z) - A Framework for Fine-Tuning LLMs using Heterogeneous Feedback [69.51729152929413]
We present a framework for fine-tuning large language models (LLMs) using heterogeneous feedback.
First, we combine the heterogeneous feedback data into a single supervision format, compatible with methods like SFT and RLHF.
Next, given this unified feedback dataset, we extract a high-quality and diverse subset to obtain performance increases.
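The two steps above (unifying heterogeneous feedback into a single supervision format, then subsetting it) can be sketched as follows. The field names ("prompt", "chosen", "rejected") and the conversion rules are illustrative assumptions, not the paper's actual schema:

```python
# Hypothetical sketch: convert two heterogeneous feedback sources
# into one preference-pair format usable by SFT/RLHF-style trainers.

def from_preference_pair(prompt, better, worse):
    """Pairwise human preference is already in the target format."""
    return {"prompt": prompt, "chosen": better, "rejected": worse}

def from_scalar_ratings(prompt, rated_responses):
    """Turn per-response scalar scores into a preference pair by
    pitting the best-rated response against the worst-rated one."""
    ranked = sorted(rated_responses, key=lambda r: r["score"], reverse=True)
    return {"prompt": prompt,
            "chosen": ranked[0]["text"],
            "rejected": ranked[-1]["text"]}
```

Once both sources emit the same record shape, a single filtering pass (e.g., keeping only high-margin or diverse pairs) can produce the high-quality subset the summary mentions.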
arXiv Detail & Related papers (2024-08-05T23:20:32Z) - XLD: A Cross-Lane Dataset for Benchmarking Novel Driving View Synthesis [84.23233209017192]
This paper presents a synthetic dataset for novel driving view synthesis evaluation. It includes testing images captured by deviating from the training trajectory by $1-4$ meters. We establish the first realistic benchmark for evaluating existing NVS approaches under front-only and multicamera settings.
arXiv Detail & Related papers (2024-06-26T14:00:21Z) - Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversities but also to present the impact of harsh conditions on captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z) - Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline [80.13652104204691]
In this paper, we construct a large-scale benchmark with high diversity for visible-thermal UAV tracking (VTUAV).
We provide a coarse-to-fine attribute annotation, where frame-level attributes are provided to exploit the potential of challenge-specific trackers.
In addition, we design a new RGB-T baseline, named Hierarchical Multi-modal Fusion Tracker (HMFT), which fuses RGB-T data in various levels.
arXiv Detail & Related papers (2022-04-08T15:22:33Z) - PSE-Match: A Viewpoint-free Place Recognition Method with Parallel Semantic Embedding [9.265785042748158]
PSE-Match is a viewpoint-free place recognition method based on parallel semantic analysis of isolated semantic attributes from 3D point-cloud models.
PSE-Match incorporates a divergence place learning network to capture different semantic attributes in parallel through the spherical harmonics domain.
arXiv Detail & Related papers (2021-08-01T22:16:40Z) - MULLS: Versatile LiDAR SLAM via Multi-metric Linear Least Square [4.449835214520727]
MULLS is an efficient, low-drift, and versatile 3D LiDAR SLAM system.
For the front-end, roughly classified feature points are extracted from each frame using dual-threshold ground filtering and principal component analysis.
For the back-end, hierarchical pose graph optimization is conducted among regularly stored history submaps to reduce the drift resulting from dead reckoning.
On the KITTI benchmark, MULLS ranks among the top LiDAR-only SLAM systems with real-time performance.
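The front-end step above classifies feature points via principal component analysis of local neighborhoods. A minimal sketch of that general technique, assuming the common eigenvalue-ratio formulation; the thresholds and category names are illustrative, not the MULLS values:

```python
import numpy as np

def classify_neighborhood(points):
    """Classify a local point neighborhood as 'linear', 'planar', or
    'scattered' from the eigenvalues of its covariance matrix, in the
    spirit of PCA-based feature extraction in LiDAR SLAM front-ends.

    points: (N, 3) array of a point's k-nearest neighbors.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3
    l1, l2, l3 = evals / max(evals.sum(), 1e-12)
    linearity = (l1 - l2) / max(l1, 1e-12)
    planarity = (l2 - l3) / max(l1, 1e-12)
    if linearity > 0.6:
        return "linear"     # e.g. poles, edges
    if planarity > 0.6:
        return "planar"     # e.g. ground, facades
    return "scattered"      # e.g. vegetation
```

Linear and planar points then feed point-to-line and point-to-plane residuals in the least-squares registration, which is where the "multi-metric" in the system's name comes from.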
arXiv Detail & Related papers (2021-02-07T10:42:42Z) - ePointDA: An End-to-End Simulation-to-Real Domain Adaptation Framework for LiDAR Point Cloud Segmentation [111.56730703473411]
Training deep neural networks (DNNs) on LiDAR data requires large-scale point-wise annotations.
Simulation-to-real domain adaptation (SRDA) trains a DNN using unlimited synthetic data with automatically generated labels.
ePointDA consists of three modules: self-supervised dropout noise rendering, statistics-invariant and spatially-adaptive feature alignment, and transferable segmentation learning.
arXiv Detail & Related papers (2020-09-07T23:46:08Z) - LiDAR guided Small obstacle Segmentation [14.880698940693609]
Small obstacles on the road are critical for autonomous driving.
We present a method to reliably detect such obstacles through a multi-modal framework of sparse LiDAR and monocular vision.
We show significant performance gains when the context is fed as an additional input to monocular semantic segmentation frameworks.
arXiv Detail & Related papers (2020-03-12T18:34:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.