3D Lane Detection from Front or Surround-View using Joint-Modeling & Matching
- URL: http://arxiv.org/abs/2401.08036v2
- Date: Tue, 28 May 2024 05:41:48 GMT
- Title: 3D Lane Detection from Front or Surround-View using Joint-Modeling & Matching
- Authors: Haibin Zhou, Huabing Zhou, Jun Chang, Tao Lu, Jiayi Ma
- Abstract summary: We propose a joint modeling approach that combines Bezier curves and interpolation methods.
We also introduce a novel 3D Spatial Encoder, representing an exploration of 3D surround-view lane detection research.
This innovative method establishes a new benchmark in front-view 3D lane detection on the OpenLane dataset.
- Score: 27.588395086563978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D lanes offer a more comprehensive understanding of the road surface geometry than 2D lanes, thereby providing crucial references for driving decisions and trajectory planning. While many efforts aim to improve prediction accuracy, we recognize that an efficient network can bring results closer to lane modeling. However, if the modeling data is imprecise, the results might not accurately capture the real-world scenario. Therefore, accurate lane modeling is essential to align prediction results closely with the environment. This study centers on efficient and accurate lane modeling, proposing a joint modeling approach that combines Bezier curves and interpolation methods. Furthermore, based on this lane modeling approach, we developed a Global2Local Lane Matching method with Bezier Control-Point and Key-Point, which serve as a comprehensive solution that leverages hierarchical features with two mathematical models to ensure a precise match. We also introduce a novel 3D Spatial Encoder, representing an exploration of 3D surround-view lane detection research. The framework is suitable for front-view or surround-view 3D lane detection. By directly outputting the key points of lanes in 3D space, it overcomes the limitations of anchor-based methods, enabling accurate prediction of closed-loop or U-shaped lanes and effective adaptation to complex road conditions. This innovative method establishes a new benchmark in front-view 3D lane detection on the Openlane dataset and achieves competitive performance in surround-view 2D lane detection on the Argoverse2 dataset.
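As a rough illustration of the Bezier half of this joint modeling (a minimal sketch, not the paper's implementation; the cubic degree, function names, and the sample lane are assumptions), a cubic Bezier curve can turn four 3D control points into sampled lane key points:

```python
import numpy as np

def cubic_bezier(control_points, t):
    """Evaluate a cubic Bezier curve at parameter values t.

    control_points: (4, 3) array of 3D control points.
    t: (n,) array of parameters in [0, 1].
    Returns an (n, 3) array of points on the curve.
    """
    p0, p1, p2, p3 = control_points
    t = np.asarray(t, dtype=float)[:, None]
    return ((1 - t) ** 3 * p0
            + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2
            + t ** 3 * p3)

def lane_key_points(control_points, num_points=10):
    """Sample evenly spaced (in parameter t) key points along the lane."""
    t = np.linspace(0.0, 1.0, num_points)
    return cubic_bezier(control_points, t)

# Hypothetical lane: 30 m ahead of the ego vehicle, rising 0.5 m.
# A Bezier curve always interpolates its first and last control points.
ctrl = np.array([[0, 0, 0], [0, 10, 0.1], [0, 20, 0.3], [0, 30, 0.5]], dtype=float)
pts = lane_key_points(ctrl, num_points=5)
```

Outputting such key points directly in 3D space is what lets the framework represent closed-loop or U-shaped lanes that anchor-based parameterizations struggle with.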
Related papers
- LaneCPP: Continuous 3D Lane Detection using Physical Priors [45.52331418900137]
LaneCPP uses a continuous 3D lane detection model leveraging physical prior knowledge about lane structure and road geometry.
We show the benefits of our contributions and prove the meaningfulness of using priors to make 3D lane detection more robust.
arXiv Detail & Related papers (2024-06-12T16:31:06Z)
- Decoupling the Curve Modeling and Pavement Regression for Lane Detection [67.22629246312283]
Curve-based lane representation is a popular approach in many lane detection methods.
We propose a new approach to the lane detection task by decomposing it into two parts: curve modeling and ground height regression.
arXiv Detail & Related papers (2023-09-19T11:24:14Z)
- AdaptiveShape: Solving Shape Variability for 3D Object Detection with Geometry Aware Anchor Distributions [1.3807918535446089]
3D object detection with point clouds and images plays an important role in perception tasks such as autonomous driving.
Current methods show great performance on detection and pose estimation of standard-shaped vehicles but lag behind on more complex shapes.
This work introduces several new methods to improve and measure the performance for such classes.
arXiv Detail & Related papers (2023-02-28T12:31:31Z)
- ONCE-3DLanes: Building Monocular 3D Lane Detection [41.46466150783367]
We present ONCE-3DLanes, a real-world autonomous driving dataset with lane layout annotation in 3D space.
By exploiting the explicit relationship between point clouds and image pixels, a dataset annotation pipeline is designed to automatically generate high-quality 3D lane locations.
arXiv Detail & Related papers (2022-04-30T16:35:25Z)
- PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark [109.03773439461615]
PersFormer is an end-to-end monocular 3D lane detector with a novel Transformer-based spatial feature transformation module.
We release one of the first large-scale real-world 3D lane datasets, called OpenLane, with high-quality annotation and scenario diversity.
arXiv Detail & Related papers (2022-03-21T16:12:53Z)
- Cylindrical and Asymmetrical 3D Convolution Networks for LiDAR-based Perception [122.53774221136193]
State-of-the-art methods for driving-scene LiDAR-based perception often project the point clouds to 2D space and then process them via 2D convolution.
A natural remedy is to utilize the 3D voxelization and 3D convolution network.
We propose a new framework for the outdoor LiDAR segmentation, where cylindrical partition and asymmetrical 3D convolution networks are designed to explore the 3D geometric pattern.
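The cylindrical partition idea can be sketched as a simple binning step that assigns each LiDAR point to a (radius, azimuth, height) voxel (a minimal illustration; the bin counts, ranges, and function name are assumptions, not the paper's configuration):

```python
import numpy as np

def cylindrical_voxel_indices(points, num_rho=480, num_phi=360, num_z=32,
                              rho_max=50.0, z_min=-4.0, z_max=2.0):
    """Assign each LiDAR point (x, y, z) to a cylindrical voxel index.

    Unlike a Cartesian grid, bins are laid out over (radius, azimuth, height),
    so the dense near field gets finer spatial resolution per angular bin
    than the sparse far field.
    Returns an (n, 3) integer array of (rho, phi, z) bin indices.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rho = np.sqrt(x ** 2 + y ** 2)          # radial distance from sensor
    phi = np.arctan2(y, x)                   # azimuth in [-pi, pi]
    i_rho = np.clip((rho / rho_max * num_rho).astype(int), 0, num_rho - 1)
    i_phi = np.clip(((phi + np.pi) / (2 * np.pi) * num_phi).astype(int), 0, num_phi - 1)
    i_z = np.clip(((z - z_min) / (z_max - z_min) * num_z).astype(int), 0, num_z - 1)
    return np.stack([i_rho, i_phi, i_z], axis=1)

# One point 1 m straight ahead of the sensor at ground height.
idx = cylindrical_voxel_indices(np.array([[1.0, 0.0, 0.0]]))
```

Features aggregated per cylindrical voxel would then feed the asymmetrical 3D convolution network described above.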
arXiv Detail & Related papers (2021-09-12T06:25:11Z)
- PerMO: Perceiving More at Once from a Single Image for Autonomous Driving [76.35684439949094]
We present a novel approach to detect, segment, and reconstruct complete textured 3D models of vehicles from a single image.
Our approach combines the strengths of deep learning and the elegance of traditional techniques.
We have integrated these algorithms with an autonomous driving system.
arXiv Detail & Related papers (2020-07-16T05:02:45Z)
- DOPS: Learning to Detect 3D Objects and Predict their 3D Shapes [54.239416488865565]
We propose a fast single-stage 3D object detection method for LIDAR data.
The core novelty of our method is a fast, single-pass architecture that both detects objects in 3D and estimates their shapes.
We find that our proposed method achieves state-of-the-art results on object detection in ScanNet scenes, improving by 5%, and leads by 3.4 points on the Open dataset.
arXiv Detail & Related papers (2020-04-02T17:48:50Z)
- Semi-Local 3D Lane Detection and Uncertainty Estimation [6.296104145657063]
Our method is based on a semi-local, BEV, tile representation that breaks down lanes into simple lane segments.
It combines learning a parametric model for the segments with a deep feature embedding that is then used to cluster segments together into full lanes.
Our method is the first to output a learning based uncertainty estimation for the lane detection task.
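The segment-to-lane clustering step can be illustrated with a greedy sketch over learned segment embeddings (the function name, threshold, and greedy strategy are assumptions for illustration; the paper's actual clustering may differ):

```python
import numpy as np

def cluster_segments(embeddings, threshold=0.5):
    """Greedily cluster lane-segment embeddings into lanes.

    A segment joins the first cluster whose representative embedding
    (its first member) lies within `threshold`; otherwise it starts a
    new cluster. Returns a cluster id per segment.
    """
    labels = np.full(len(embeddings), -1, dtype=int)
    centers = []  # representative embedding of each cluster
    for i, e in enumerate(embeddings):
        for c, center in enumerate(centers):
            if np.linalg.norm(e - center) < threshold:
                labels[i] = c
                break
        else:  # no existing cluster is close enough
            centers.append(e.copy())
            labels[i] = len(centers) - 1
    return labels

# Two hypothetical lanes: segments 0-1 embed near each other, as do 2-3.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
labels = cluster_segments(emb, threshold=0.5)
```

The idea is that the embedding network is trained so that segments of the same lane land close together, making even a simple distance-based grouping sufficient.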
arXiv Detail & Related papers (2020-03-11T12:35:24Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach estimates the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.