Ultra Fast Structure-aware Deep Lane Detection
- URL: http://arxiv.org/abs/2004.11757v4
- Date: Wed, 5 Aug 2020 02:59:50 GMT
- Title: Ultra Fast Structure-aware Deep Lane Detection
- Authors: Zequn Qin, Huanyu Wang, and Xi Li
- Abstract summary: We propose a novel, simple, yet effective formulation aiming at extremely fast speed and challenging scenarios.
We treat the process of lane detection as a row-based selecting problem using global features.
Our method achieves state-of-the-art performance in terms of both speed and accuracy.
- Score: 15.738757958826998
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern methods mainly regard lane detection as a pixel-wise segmentation problem, which struggles to handle challenging scenarios and to run at high speed. Inspired by human perception, the recognition of lanes under severe occlusion and extreme lighting conditions relies mainly on contextual and global information. Motivated by this observation, we propose a novel, simple, yet effective formulation aiming at extremely fast speed and challenging scenarios. Specifically, we treat the process of lane detection as a row-based selecting problem using global features. With the help of row-based selecting, our formulation significantly reduces the computational cost. Using a large receptive field on global features, we can also handle challenging scenarios. Moreover, based on this formulation, we propose a structural loss to explicitly model the structure of lanes. Extensive experiments on two lane detection benchmark datasets show that our method achieves state-of-the-art performance in terms of both speed and accuracy. A lightweight version can even reach 300+ frames per second at the same resolution, which is at least 4x faster than previous state-of-the-art methods. Our code will be made publicly available.
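Since the formulation above reduces lane detection to a per-row classification over global features, a minimal sketch may help make it concrete. The code below is an illustrative PyTorch-style sketch, not the authors' released implementation; the ResNet-18 backbone, 56 row anchors, 100 column cells, 4 lanes, and the extra "no-lane" class are placeholder choices.

```python
# Illustrative sketch of row-based lane selection (not the authors' released code).
# Assumptions: ResNet-18 backbone, 56 row anchors, 100 column cells per row,
# 4 lanes, plus one extra "no-lane" class per row -- all placeholder values.
import torch
import torch.nn as nn
import torchvision.models as models


class RowSelectLaneNet(nn.Module):
    def __init__(self, num_rows=56, num_cols=100, num_lanes=4):
        super().__init__()
        self.num_rows, self.num_cols, self.num_lanes = num_rows, num_cols, num_lanes
        backbone = models.resnet18()                      # randomly initialised backbone
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # global feature map
        self.squeeze = nn.Conv2d(512, 8, kernel_size=1)   # reduce channels before the FC head
        self.pool = nn.AdaptiveAvgPool2d((9, 25))         # fixed spatial size -> input-size agnostic head
        # One (num_cols + 1)-way classification per row anchor and per lane;
        # the extra class means "no lane crosses this row".
        self.head = nn.Sequential(
            nn.Linear(8 * 9 * 25, 2048), nn.ReLU(inplace=True),
            nn.Linear(2048, num_lanes * num_rows * (num_cols + 1)),
        )

    def forward(self, img):
        feat = self.pool(self.squeeze(self.features(img)))            # (B, 8, 9, 25)
        logits = self.head(feat.flatten(1))                           # (B, L*R*(C+1))
        return logits.view(-1, self.num_lanes, self.num_rows, self.num_cols + 1)


def decode_positions(logits):
    """Soft-argmax over column cells gives a sub-cell lane position per row anchor."""
    probs = logits[..., :-1].softmax(dim=-1)                          # drop the "no-lane" class
    cols = torch.arange(probs.shape[-1], dtype=probs.dtype, device=probs.device)
    return (probs * cols).sum(dim=-1)                                 # (B, L, R) expected column


if __name__ == "__main__":
    net = RowSelectLaneNet()
    out = net(torch.randn(1, 3, 288, 800))                            # CULane-like input size
    print(out.shape, decode_positions(out).shape)
```

The structural loss mentioned in the abstract can then be sketched, for example, as a penalty on the difference between classification outputs of adjacent row anchors, which encourages the predicted lane to vary smoothly down the image.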
Related papers
- Sketch and Refine: Towards Fast and Accurate Lane Detection [69.63287721343907]
Lane detection is a challenging task due to the complexity of real-world scenarios.
Existing approaches, whether proposal-based or keypoint-based, struggle to depict lanes both effectively and efficiently.
We present a "Sketch-and-Refine" paradigm that utilizes the merits of both keypoint-based and proposal-based methods.
Experiments show that our SRLane can run at a fast speed (i.e., 278 FPS) while yielding an F1 score of 78.9%.
arXiv Detail & Related papers (2024-01-26T09:28:14Z)
- Global Context Aggregation Network for Lightweight Saliency Detection of Surface Defects [70.48554424894728]
We develop a Global Context Aggregation Network (GCANet) for lightweight saliency detection of surface defects on the encoder-decoder structure.
First, we introduce a novel transformer encoder on the top layer of the lightweight backbone, which captures global context information through a novel Depth-wise Self-Attention (DSA) module (one plausible form is sketched below).
The experimental results on three public defect datasets demonstrate that the proposed network achieves a better trade-off between accuracy and running efficiency than 17 other state-of-the-art methods.
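It is not obvious from the summary alone how a depth-wise self-attention block would be built. The sketch below shows one plausible lightweight form, assuming that "depth-wise" refers to depth-wise convolutions for the query/key/value projections; the module name, channel count, and kernel size are illustrative assumptions, not GCANet's actual design.

```python
# A plausible lightweight "depth-wise self-attention" block (assumed design, not GCANet's code):
# queries/keys/values come from depth-wise 3x3 convolutions so the projection cost stays low.
import torch
import torch.nn as nn


class DepthwiseSelfAttention(nn.Module):
    def __init__(self, channels=96):
        super().__init__()
        dw = lambda: nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.to_q, self.to_k, self.to_v = dw(), dw(), dw()
        self.proj = nn.Conv2d(channels, channels, 1)
        self.scale = channels ** -0.5

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.to_q(x).flatten(2).transpose(1, 2)          # (B, HW, C)
        k = self.to_k(x).flatten(2)                          # (B, C, HW)
        v = self.to_v(x).flatten(2).transpose(1, 2)          # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)     # global context over all positions
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.proj(out)                            # residual connection


if __name__ == "__main__":
    block = DepthwiseSelfAttention()
    print(block(torch.randn(1, 96, 20, 20)).shape)           # torch.Size([1, 96, 20, 20])
```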
arXiv Detail & Related papers (2023-09-22T06:19:11Z)
- Factorized Inverse Path Tracing for Efficient and Accurate Material-Lighting Estimation [97.0195314255101]
Inverse path tracing is expensive to compute, and ambiguities exist between reflection and emission.
Our Factorized Inverse Path Tracing (FIPT) addresses these challenges by using a factored light transport formulation.
Our algorithm enables accurate material and lighting optimization faster than previous work, and is more effective at resolving ambiguities.
arXiv Detail & Related papers (2023-04-12T07:46:05Z)
- Ultra Fast Deep Lane Detection with Hybrid Anchor Driven Ordinal Classification [13.735482211178931]
We propose a novel, simple, yet effective formulation aiming at ultra-fast speed and challenging scenarios.
Specifically, we treat the process of lane detection as an anchor-driven ordinal classification problem using global features.
Our method achieves state-of-the-art performance in terms of both speed and accuracy.
arXiv Detail & Related papers (2022-06-15T08:53:02Z)
- SwiftLane: Towards Fast and Efficient Lane Detection [0.8972186395640678]
We propose SwiftLane: a light-weight, end-to-end deep learning based framework, coupled with the row-wise classification formulation for fast and efficient lane detection.
Our method achieves an inference speed of 411 frames per second, surpassing state-of-the-art in terms of speed while achieving comparable results in terms of accuracy on the popular CULane benchmark dataset.
arXiv Detail & Related papers (2021-10-22T13:35:05Z)
- Learning to Estimate Hidden Motions with Global Motion Aggregation [71.12650817490318]
Occlusions pose a significant challenge to optical flow algorithms that rely on local evidence.
We introduce a global motion aggregation module to find long-range dependencies between pixels in the first image.
We demonstrate that the optical flow estimates in the occluded regions can be significantly improved without damaging the performance in non-occluded regions; a hedged sketch of this kind of attention-based aggregation follows.
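The aggregation module described above is, in essence, attention computed over the whole image: context features of the first frame provide queries and keys, motion features are the values, so occluded pixels can borrow evidence from similar non-occluded ones. The sketch below illustrates that idea; the class name, feature dimensions, and residual connection are assumptions for illustration, not the paper's implementation.

```python
# Illustrative global aggregation via attention (assumed form, not the GMA paper's code).
import torch
import torch.nn as nn


class GlobalAggregation(nn.Module):
    """Aggregate per-pixel motion features using attention computed from context
    features of the first frame, so occluded pixels can borrow motion evidence
    from similar, non-occluded pixels anywhere in the image."""

    def __init__(self, ctx_dim=128, motion_dim=128, attn_dim=128):
        super().__init__()
        self.to_q = nn.Conv2d(ctx_dim, attn_dim, 1)
        self.to_k = nn.Conv2d(ctx_dim, attn_dim, 1)
        self.to_v = nn.Conv2d(motion_dim, motion_dim, 1)
        self.scale = attn_dim ** -0.5

    def forward(self, context, motion):
        b, _, h, w = context.shape
        q = self.to_q(context).flatten(2).transpose(1, 2)   # (B, HW, D)
        k = self.to_k(context).flatten(2)                   # (B, D, HW)
        v = self.to_v(motion).flatten(2).transpose(1, 2)    # (B, HW, Cm)
        attn = torch.softmax(q @ k * self.scale, dim=-1)    # (B, HW, HW) long-range weights
        out = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return motion + out                                  # residual: keep local evidence


if __name__ == "__main__":
    agg = GlobalAggregation()
    ctx, mot = torch.randn(1, 128, 46, 62), torch.randn(1, 128, 46, 62)
    print(agg(ctx, mot).shape)                               # torch.Size([1, 128, 46, 62])
```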
arXiv Detail & Related papers (2021-04-06T10:32:03Z)
- Robust Lane Detection via Expanded Self Attention [3.616997653625528]
We propose an Expanded Self Attention (ESA) module for lane detection.
The proposed method predicts the confidence of a lane along the vertical and horizontal directions in an image.
We achieve state-of-the-art performance on CULane and BDD100K and a distinct improvement on the TuSimple dataset.
arXiv Detail & Related papers (2021-02-14T00:29:55Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
- Map-Enhanced Ego-Lane Detection in the Missing Feature Scenarios [26.016292792373815]
This paper exploits prior knowledge contained in digital maps, which can strongly enhance the performance of detection algorithms.
In this way, only a few lane features are needed to eliminate the position error between the road shape and the real lane.
Experiments show that the proposed method can be applied to various scenarios and can run in real-time at a frequency of 20 Hz.
arXiv Detail & Related papers (2020-04-02T16:06:48Z)
- The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime [52.38455827779212]
We propose a novel technique for analyzing adaptive sampling called the Simulator.
We prove the first instance-based lower bounds for the top-k problem which incorporate the appropriate log-factors.
Our new analysis inspires a simple and near-optimal algorithm for best-arm and top-k identification, the first practical algorithm of its kind for the latter problem.
arXiv Detail & Related papers (2017-02-16T23:42:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.