The 1st-place Solution for CVPR 2023 OpenLane Topology in Autonomous
Driving Challenge
- URL: http://arxiv.org/abs/2306.09590v1
- Date: Fri, 16 Jun 2023 02:33:12 GMT
- Authors: Dongming Wu, Fan Jia, Jiahao Chang, Zhuoling Li, Jianjian Sun, Chunrui
Han, Shuailin Li, Yingfei Liu, Zheng Ge, Tiancai Wang
- Abstract summary: We present the 1st-place solution of OpenLane Topology in Autonomous Driving Challenge.
Considering that topology reasoning is based on centerline detection and traffic element detection, we develop a multi-stage framework for high performance.
- Score: 9.684012701676327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the 1st-place solution of OpenLane Topology in Autonomous Driving
Challenge. Considering that topology reasoning is based on centerline detection
and traffic element detection, we develop a multi-stage framework for high
performance. Specifically, the centerline is detected by the powerful PETRv2
detector and the popular YOLOv8 is employed to detect the traffic elements.
Further, we design a simple yet effective MLP-based head for topology
prediction. Our method achieves 55% OLS on the OpenLaneV2 test set, surpassing
the 2nd-place solution by 8 points.
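The abstract describes an MLP-based head that predicts topology relations between detected centerlines and traffic elements. As a rough illustration only (the paper's actual architecture, feature dimensions, and parameterization are not given here), a pairwise MLP scorer of this kind can be sketched as follows; all names and dimensions below are hypothetical:

```python
import random

def mlp_score(pair_vec, w1, b1, w2, b2):
    # Single hidden layer with ReLU, producing one scalar relation logit.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, pair_vec)) + b)
              for row, b in zip(w1, b1)]
    return sum(w * h for w, h in zip(w2, hidden)) + b2

def topology_logits(lane_feats, te_feats, params):
    # Score every (lane, traffic element) pair by concatenating the two
    # feature vectors and passing the pair through the shared MLP.
    w1, b1, w2, b2 = params
    return [[mlp_score(lane + te, w1, b1, w2, b2) for te in te_feats]
            for lane in lane_feats]

# Toy usage: 3 lane queries and 2 traffic-element queries, 2-D features each.
random.seed(0)
feat_dim, hidden_dim = 2, 8
w1 = [[random.uniform(-1, 1) for _ in range(2 * feat_dim)]
      for _ in range(hidden_dim)]
b1 = [0.0] * hidden_dim
w2 = [random.uniform(-1, 1) for _ in range(hidden_dim)]
b2 = 0.0
lanes = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
tes = [[1.0, 0.0], [0.0, 1.0]]
logits = topology_logits(lanes, tes, (w1, b1, w2, b2))  # 3 x 2 logit matrix
```

In practice such logits would be passed through a sigmoid and supervised with a binary relation label per pair; this sketch only shows the pairwise-scoring structure implied by the abstract.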
Related papers
- Monocular Lane Detection Based on Deep Learning: A Survey [51.19079381823076]
Lane detection plays an important role in autonomous driving perception systems.
As deep learning algorithms gain popularity, monocular lane detection methods based on deep learning have demonstrated superior performance.
This paper presents a comprehensive overview of existing methods, encompassing both the increasingly mature 2D lane detection approaches and the developing 3D lane detection works.
arXiv Detail & Related papers (2024-11-25T12:09:43Z)
- First Place Solution to the ECCV 2024 ROAD++ Challenge @ ROAD++ Spatiotemporal Agent Detection 2024 [12.952512012601874]
The task of Track 1 is agent detection, which aims to construct an "agent tube" for agents in consecutive video frames.
Our solutions address the key challenges of this task, including extreme-size objects, low-light conditions, class imbalance, and fine-grained classification.
We rank first in the test set of Track 1 for the ROAD++ Challenge 2024, and achieve 30.82% average video-mAP.
arXiv Detail & Related papers (2024-10-30T14:52:43Z)
- ENet-21: An Optimized light CNN Structure for Lane Detection [1.4542411354617986]
This study develops an optimal structure for the lane detection problem.
It offers a promising solution for driver assistance features in modern vehicles.
Experiments on the TuSimple dataset support the effectiveness of the proposed method.
arXiv Detail & Related papers (2024-03-28T19:07:26Z)
- TopoMLP: A Simple yet Strong Pipeline for Driving Topology Reasoning [51.29906807247014]
Topology reasoning aims to understand road scenes and present drivable routes in autonomous driving.
It requires detecting road centerlines (lanes) and traffic elements, then reasoning about their topological relationships, i.e., lane-lane topology and lane-traffic topology.
We introduce a powerful 3D lane detector and an improved 2D traffic element detector to extend the upper limit of topology performance.
arXiv Detail & Related papers (2023-10-10T16:24:51Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Efficient Ground Vehicle Path Following in Game AI [77.34726150561087]
This paper presents an efficient path following solution for ground vehicles tailored to game AI.
The proposed path follower is evaluated through a variety of test scenarios in a first-person shooter game.
We achieved a 70% decrease in the total number of stuck events compared to an existing path following solution.
arXiv Detail & Related papers (2023-07-07T04:20:07Z)
- Oriented R-CNN for Object Detection [61.78746189807462]
This work proposes an effective and simple oriented object detection framework, termed Oriented R-CNN.
In the first stage, we propose an oriented Region Proposal Network (oriented RPN) that directly generates high-quality oriented proposals in a nearly cost-free manner.
The second stage is oriented R-CNN head for refining oriented Regions of Interest (oriented RoIs) and recognizing them.
arXiv Detail & Related papers (2021-08-12T12:47:43Z)
- Workshop on Autonomous Driving at CVPR 2021: Technical Report for Streaming Perception Challenge [57.647371468876116]
We introduce our real-time 2D object detection system for the realistic autonomous driving scenario.
Our detector is built on a newly designed YOLO model, called YOLOX.
On the Argoverse-HD dataset, our system achieves 41.0 streaming AP, surpassing second place by 7.8/6.1 points on the detection-only/full-stack tracks, respectively.
arXiv Detail & Related papers (2021-07-27T06:36:06Z)
- Heatmap-based Vanishing Point boosts Lane Detection [3.8170259685864165]
We propose a new multi-task fusion network architecture for high-precision lane detection.
The proposed fusion strategy was tested using the public CULane dataset.
The experimental results suggest that our method outperforms state-of-the-art (SOTA) methods in lane detection accuracy.
arXiv Detail & Related papers (2020-07-30T17:17:00Z)
- Multi-lane Detection Using Instance Segmentation and Attentive Voting [0.0]
We propose a novel solution to multi-lane detection that outperforms state-of-the-art methods in both accuracy and speed.
We obtain a lane segmentation accuracy of 99.87% while running at 54.53 fps on average.
arXiv Detail & Related papers (2020-01-01T16:48:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.