Enhancing Roadway Safety: LiDAR-based Tree Clearance Analysis
- URL: http://arxiv.org/abs/2402.18309v1
- Date: Wed, 28 Feb 2024 13:08:46 GMT
- Title: Enhancing Roadway Safety: LiDAR-based Tree Clearance Analysis
- Authors: Miriam Louise Carnot, Eric Peukert, Bogdan Franczyk
- Abstract summary: Trees and other vegetation often grow above roadways, blocking the view of traffic signs and lights and posing a danger to road users.
This is where LiDAR technology comes into play, a laser scanning sensor that reveals a three-dimensional perspective.
We present a new point cloud algorithm that can automatically detect those parts of the trees that grow over the street and need to be trimmed.
- Score: 0.2877502288155167
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the efforts for safer roads, ensuring adequate vertical clearance above
roadways is of great importance. Frequently, trees or other vegetation grows
above the roads, blocking the view of traffic signs and lights and posing a
danger to road users. Accurately estimating this space from
simple images proves challenging due to a lack of depth information. This is
where LiDAR technology comes into play, a laser scanning sensor that reveals a
three-dimensional perspective. Thus far, LiDAR point clouds at the street level
have mainly been used for applications in the field of autonomous driving.
These scans, however, also open up possibilities in urban management. In this
paper, we present a new point cloud algorithm that can automatically detect
those parts of the trees that grow over the street and need to be trimmed. Our
system uses semantic segmentation to filter relevant points and downstream
processing steps to create the required volume to be kept clear above the road.
Challenges include obscured stretches of road, the noisy unstructured nature of
LiDAR point clouds, and the assessment of the road shape. The identified points
of non-compliant trees can be projected from the point cloud onto images,
providing municipalities with a visual aid for dealing with such occurrences.
By automating this process, municipalities can address potential road space
constraints, enhancing safety for all. They may also save valuable time by
carrying out the inspections more systematically. Our open-source code gives
communities inspiration on how to automate the process themselves.
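The clearance check described above can be sketched as a simple geometric filter. The following is a minimal illustration, not the paper's actual implementation: it assumes vegetation points have already been isolated by semantic segmentation, approximates the road as a point-in-region test in the x-y plane with a locally flat surface height, and uses a hypothetical 4.5 m clearance requirement:

```python
import numpy as np

def flag_overhanging_points(points, on_road_xy, road_z, clearance=4.5):
    """Flag LiDAR points intruding into the clearance volume above the road.

    points      -- (N, 3) array of x, y, z coordinates, assumed to be
                   vegetation points pre-filtered by semantic segmentation
    on_road_xy  -- callable (x, y) -> bool testing whether a point lies
                   over the road surface (hypothetical road-shape model)
    road_z      -- estimated road surface height for this locally flat stretch
    clearance   -- required vertical clearance in metres (assumed value)
    """
    # Which points sit over the road in the horizontal plane?
    above_road = np.array([on_road_xy(x, y) for x, y in points[:, :2]])
    # Height of each point above the road surface.
    height = points[:, 2] - road_z
    # A point violates clearance if it is over the road and inside the
    # volume between the surface and the required clearance height.
    in_volume = above_road & (height > 0) & (height < clearance)
    return points[in_volume]
```

In practice the road shape is noisy and partly obscured, so the region test and surface height would come from the road-estimation steps the paper describes rather than a fixed region; the flagged points could then be projected onto camera images as the visual aid mentioned above.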
Related papers
- LidaRF: Delving into Lidar for Neural Radiance Field on Street Scenes [73.65115834242866]
Photorealistic simulation plays a crucial role in applications such as autonomous driving.
However, reconstruction quality suffers on street scenes due to collinear camera motions and sparser samplings at higher speeds.
We propose several insights that allow a better utilization of Lidar data to improve NeRF quality on street scenes.
arXiv Detail & Related papers (2024-05-01T23:07:12Z)
- RoadRunner -- Learning Traversability Estimation for Autonomous Off-road Driving [13.101416329887755]
We present RoadRunner, a framework capable of predicting terrain traversability and an elevation map directly from camera and LiDAR sensor inputs.
RoadRunner enables reliable autonomous navigation, by fusing sensory information, handling of uncertainty, and generation of contextually informed predictions.
We demonstrate the effectiveness of RoadRunner in enabling safe and reliable off-road navigation at high speeds in multiple real-world driving scenarios through unstructured desert environments.
arXiv Detail & Related papers (2024-02-29T16:47:54Z)
- MSight: An Edge-Cloud Infrastructure-based Perception System for Connected Automated Vehicles [58.461077944514564]
This paper presents MSight, a cutting-edge roadside perception system specifically designed for automated vehicles.
MSight offers real-time vehicle detection, localization, tracking, and short-term trajectory prediction.
Evaluations underscore the system's capability to uphold lane-level accuracy with minimal latency.
arXiv Detail & Related papers (2023-10-08T21:32:30Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Explainable, automated urban interventions to improve pedestrian and vehicle safety [0.8620335948752805]
This paper combines public data sources, large-scale street imagery and computer vision techniques to approach pedestrian and vehicle safety.
The steps involved in this pipeline include the adaptation and training of a Residual Convolutional Neural Network to determine a hazard index for each given urban scene.
The outcome of this computational approach is a fine-grained map of hazard levels across a city, together with the identification of interventions that might simultaneously improve pedestrian and vehicle safety.
arXiv Detail & Related papers (2021-10-22T09:17:39Z)
- Road Network Guided Fine-Grained Urban Traffic Flow Inference [108.64631590347352]
Accurate inference of fine-grained traffic flow from coarse-grained one is an emerging yet crucial problem.
We propose a novel Road-Aware Traffic Flow Magnifier (RATFM) that exploits the prior knowledge of road networks.
Our method can generate high-quality fine-grained traffic flow maps.
arXiv Detail & Related papers (2021-09-29T07:51:49Z)
- Evaluating Computer Vision Techniques for Urban Mobility on Large-Scale, Unconstrained Roads [25.29906312974705]
This paper proposes a simple mobile imaging setup to address several common problems in road safety at scale.
We use recent computer vision techniques to identify possible irregularities on roads.
We also demonstrate the mobile imaging solution's applicability to spot traffic violations.
arXiv Detail & Related papers (2021-09-11T09:07:56Z)
- CP-loss: Connectivity-preserving Loss for Road Curb Detection in Autonomous Driving with Aerial Images [10.300623192980753]
Road curb detection is important for autonomous driving.
Most of the current methods detect road curbs online using vehicle-mounted sensors, such as cameras or 3-D Lidars.
In this paper, we detect road curbs offline using high-resolution aerial images.
arXiv Detail & Related papers (2021-07-26T01:36:58Z)
- Convolutional Recurrent Network for Road Boundary Extraction [99.55522995570063]
We tackle the problem of drivable road boundary extraction from LiDAR and camera imagery.
We design a structured model where a fully convolutional network obtains deep features encoding the location and direction of road boundaries.
We showcase the effectiveness of our method on a large North American city where we obtain perfect topology of road boundaries 99.3% of the time.
arXiv Detail & Related papers (2020-12-21T18:59:12Z)
- Where can I drive? A System Approach: Deep Ego Corridor Estimation for Robust Automated Driving [2.378161932344701]
We propose to classify specifically a drivable corridor of the ego lane on pixel level with a deep learning approach.
Our approach is kept computationally efficient with only 0.66 million parameters allowing its application in large scale products.
arXiv Detail & Related papers (2020-04-16T13:04:18Z)
- Depth Sensing Beyond LiDAR Range [84.19507822574568]
We propose a novel three-camera system that utilizes small field of view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.