LOID: Lane Occlusion Inpainting and Detection for Enhanced Autonomous Driving Systems
- URL: http://arxiv.org/abs/2408.09117v1
- Date: Sat, 17 Aug 2024 06:55:40 GMT
- Title: LOID: Lane Occlusion Inpainting and Detection for Enhanced Autonomous Driving Systems
- Authors: Aayush Agrawal, Ashmitha Jaysi Sivakumar, Ibrahim Kaif, Chayan Banerjee
- Abstract summary: We propose two innovative approaches to enhance lane detection in challenging environments.
The first approach, aug-Segment, improves conventional lane detection models by augmenting the CULane training dataset.
The second approach, LOID (Lane Occlusion Inpainting and Detection), uses inpainting models to reconstruct the road environment in occluded areas.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate lane detection is essential for effective path planning and lane following in autonomous driving, especially in scenarios with significant occlusion from vehicles and pedestrians. Existing models often struggle under such conditions, leading to unreliable navigation and safety risks. We propose two innovative approaches to enhance lane detection in these challenging environments, each showing notable improvements over current methods. The first approach, aug-Segment, improves conventional lane detection models by augmenting the CULane training dataset with simulated occlusions and training a segmentation model. This method achieves a 12% improvement over a number of SOTA models on the CULane dataset, demonstrating that enriched training data can better handle occlusions. However, since this model lacked robustness in certain settings, our main contribution is the second approach, LOID (Lane Occlusion Inpainting and Detection). LOID introduces an advanced lane detection network that uses an image processing pipeline to identify and mask occlusions. It then employs inpainting models to reconstruct the road environment in the occluded areas. The enhanced image is processed by a lane detection algorithm, resulting in 20% and 24% improvements over several SOTA models on the BDD100K and CULane datasets respectively, highlighting the effectiveness of this novel technique.
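The abstract describes a three-stage flow: detect and mask occluding objects, reconstruct the road surface under the mask with an inpainting model, and run a lane detector on the restored image. The sketch below illustrates that flow only under stated assumptions: the occluder boxes, the classical OpenCV inpainting call, and the generic lane_model callable are stand-ins, not the authors' released components.

```python
# Minimal sketch of an occlusion-masking -> inpainting -> lane-detection pipeline.
# The occluder boxes, the OpenCV inpainting call, and the generic lane_model
# callable are illustrative assumptions, not the paper's actual components.
import numpy as np
import cv2


def build_occlusion_mask(image, occluder_boxes):
    """Mask pixels covered by detected vehicles/pedestrians (boxes assumed given)."""
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    for x1, y1, x2, y2 in occluder_boxes:
        mask[y1:y2, x1:x2] = 255
    return mask


def restore_and_detect(image, occluder_boxes, lane_model):
    """Inpaint the occluded road area, then run any lane detector on the result."""
    mask = build_occlusion_mask(image, occluder_boxes)
    restored = cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
    return lane_model(restored), restored
```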
Related papers
- Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded in the presence of moving vehicles, shadow, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally-adjacent video frames for complementary cues.
We show that exploiting available camera intrinsic data and a ground-plane assumption for cross-frame correspondence can lead to a lightweight network with significantly improved speed and accuracy.
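For context, cross-frame correspondence on a planar road can be expressed as a plane-induced homography built from the intrinsics and relative pose. The snippet below is a generic sketch of that idea, not the HomoFusion module itself; K, R, t, n, and d are assumed known.

```python
# Generic ground-plane homography warp between temporally adjacent frames.
# K: 3x3 intrinsics, R/t: relative rotation/translation, n: road-plane normal,
# d: camera height above the plane -- all assumed known here for illustration.
import numpy as np
import cv2


def ground_plane_homography(K, R, t, n, d):
    """Plane-induced homography H = K (R - t n^T / d) K^{-1}, mapping road pixels
    from the previous frame into the current frame."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]


def warp_previous_frame(prev_frame, K, R, t, n, d):
    h, w = prev_frame.shape[:2]
    return cv2.warpPerspective(prev_frame, ground_plane_homography(K, R, t, n, d), (w, h))
```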
arXiv Detail & Related papers (2024-04-11T10:26:40Z)
- SGD: Street View Synthesis with Gaussian Splatting and Diffusion Prior [53.52396082006044]
Current methods struggle to maintain rendering quality at the viewpoint that deviates significantly from the training viewpoints.
This issue stems from the sparse training views captured by a fixed camera on a moving vehicle.
We propose a novel approach that enhances the capacity of 3DGS by leveraging a prior from a Diffusion Model.
arXiv Detail & Related papers (2024-03-29T09:20:29Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Blind-Spot Collision Detection System for Commercial Vehicles Using Multi Deep CNN Architecture [0.17499351967216337]
Two convolutional neural networks (CNNs) based on high-level feature descriptors are proposed to detect blind-spot collisions for heavy vehicles.
A fusion approach is proposed to integrate two pre-trained networks for extracting high level features for blind-spot vehicle detection.
The fusion of features significantly improves the performance of Faster R-CNN and outperforms existing state-of-the-art methods.
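As an illustration of this kind of two-backbone feature fusion, the sketch below concatenates pooled features from two ImageNet-pretrained networks ahead of a simple classification head; the specific backbones and the linear head are assumptions, not the paper's exact architecture.

```python
# Illustrative fusion of high-level features from two pre-trained backbones.
# The ResNet-50/VGG-16 pair and the simple linear head are assumptions made for
# this sketch; the paper's fusion design may differ.
import torch
import torch.nn as nn
from torchvision import models


class FusedBlindSpotClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        resnet = models.resnet50(weights="DEFAULT")
        self.backbone_a = nn.Sequential(*list(resnet.children())[:-1])          # -> (N, 2048, 1, 1)
        vgg = models.vgg16(weights="DEFAULT")
        self.backbone_b = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))  # -> (N, 512, 1, 1)
        self.head = nn.Linear(2048 + 512, num_classes)

    def forward(self, x):
        fa = torch.flatten(self.backbone_a(x), 1)
        fb = torch.flatten(self.backbone_b(x), 1)
        return self.head(torch.cat([fa, fb], dim=1))
```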
arXiv Detail & Related papers (2022-08-17T11:10:37Z)
- Monocular Vision-based Prediction of Cut-in Maneuvers with LSTM Networks [0.0]
This study proposes a method to predict potentially dangerous cut-in maneuvers happening in the ego lane.
We follow a computer vision-based approach that only employs a single in-vehicle RGB camera.
Our algorithm consists of a CNN-based vehicle detection and tracking step and an LSTM-based maneuver classification step.
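A minimal sketch of the second stage follows, assuming the detection/tracking step provides a per-frame bounding-box sequence for each vehicle; the input layout and layer sizes are illustrative, not the paper's.

```python
# Sketch of an LSTM maneuver classifier over a tracked vehicle's box sequence.
# Input layout (normalized x, y, w, h per frame) and layer sizes are assumptions.
import torch
import torch.nn as nn


class CutInLSTM(nn.Module):
    def __init__(self, feat_dim=4, hidden_dim=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, box_seq):              # box_seq: (N, T, feat_dim)
        _, (h_n, _) = self.lstm(box_seq)
        return self.head(h_n[-1])            # logits: cut-in vs. no cut-in


logits = CutInLSTM()(torch.randn(8, 30, 4))  # 8 tracks, 30 frames each
```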
arXiv Detail & Related papers (2022-03-21T02:30:36Z)
- RONELDv2: A faster, improved lane tracking method [1.3965477771846408]
Lane detection is an integral part of control systems in autonomous vehicles and lane departure warning systems.
This paper proposes an improved, lighter-weight lane detection method, RONELDv2.
Experiments using the proposed improvements show a consistent increase in lane detection accuracy results across different datasets and deep learning models.
arXiv Detail & Related papers (2022-02-26T13:12:09Z)
- Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study [38.65843674620544]
We introduce a novel vision-cloud data fusion methodology, integrating camera image and Digital Twin information from the cloud to help intelligent vehicles make better decisions.
A case study on lane change prediction is conducted to show the effectiveness of the proposed data fusion methodology.
arXiv Detail & Related papers (2021-12-07T23:42:21Z)
- Model Guided Road Intersection Classification [2.9248680865344348]
This work investigates intersection classification from RGB images using well-consolidated neural network approaches, along with a method to enhance the results based on the teacher/student training paradigm.
An extensive experimental activity, aimed at identifying the best input configuration and evaluating different network parameters on both the well-known KITTI dataset and the new KITTI-360 sequences, shows that our method outperforms current state-of-the-art approaches on a per-frame basis and proves the effectiveness of the proposed learning scheme.
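As a generic illustration of the teacher/student idea mentioned above (not this paper's specific scheme), a student classifier can be trained against the softened predictions of a stronger teacher; the temperature and mixing weight below are assumptions.

```python
# Generic knowledge-distillation loss: softened teacher targets plus hard labels.
# Temperature T and weight alpha are illustrative choices, not the paper's values.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```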
arXiv Detail & Related papers (2021-04-26T09:15:28Z)
- Divide-and-Conquer for Lane-Aware Diverse Trajectory Prediction [71.97877759413272]
Trajectory prediction is a safety-critical tool for autonomous vehicles to plan and execute actions.
Recent methods have achieved strong performance using Multi-Choice Learning objectives like winner-takes-all (WTA) or best-of-many.
Our work addresses two key challenges in trajectory prediction: learning diverse outputs, and improving predictions by imposing constraints derived from driving knowledge.
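For reference, a winner-takes-all objective over K trajectory hypotheses penalizes only the hypothesis closest to the ground truth; a minimal version, with tensor shapes assumed for the sketch, looks like this:

```python
# Winner-takes-all (WTA) loss over K trajectory hypotheses: only the best
# hypothesis per sample receives gradient. Shapes are assumptions for the sketch.
import torch


def wta_loss(pred_trajs, gt_traj):
    # pred_trajs: (N, K, T, 2) hypotheses; gt_traj: (N, T, 2) ground truth.
    errors = ((pred_trajs - gt_traj.unsqueeze(1)) ** 2).mean(dim=(2, 3))  # (N, K)
    return errors.min(dim=1).values.mean()
```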
arXiv Detail & Related papers (2021-04-16T17:58:56Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way.
We show competitive detection and attribute recognition results, as well as a more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
- Anchor-free Small-scale Multispectral Pedestrian Detection [88.7497134369344]
We propose a method for effective and efficient multispectral fusion of the two modalities in an adapted single-stage anchor-free base architecture.
We aim at learning pedestrian representations based on object center and scale rather than direct bounding box predictions.
Results show our method's effectiveness in detecting small-scale pedestrians.
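A minimal sketch of a center-and-scale detection head of this kind (CSP/CenterNet-style) is shown below; the channel widths and the single shared convolution are assumptions for illustration.

```python
# Anchor-free center/scale head: a pedestrian-center heatmap plus a log-height
# map per location. Channel widths and the shared stem are assumptions.
import torch
import torch.nn as nn


class CenterScaleHead(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(in_channels, 256, 3, padding=1), nn.ReLU(inplace=True))
        self.center = nn.Conv2d(256, 1, 1)   # sigmoid -> pedestrian-center heatmap
        self.scale = nn.Conv2d(256, 1, 1)    # log of pedestrian height per location

    def forward(self, feats):
        x = self.stem(feats)
        return torch.sigmoid(self.center(x)), self.scale(x)
```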
arXiv Detail & Related papers (2020-08-19T13:13:01Z)