E-Scooter Rider Detection and Classification in Dense Urban Environments
- URL: http://arxiv.org/abs/2205.10184v1
- Date: Fri, 20 May 2022 13:50:36 GMT
- Title: E-Scooter Rider Detection and Classification in Dense Urban Environments
- Authors: Shane Gilroy, Darragh Mullins, Edward Jones, Ashkan Parsi and Martin Glavin
- Abstract summary: This research introduces a novel benchmark for partially occluded e-scooter rider detection to facilitate the objective characterization of detection models.
A novel, occlusion-aware method of e-scooter rider detection is presented that achieves a 15.93% improvement in detection performance over the current state of the art.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate detection and classification of vulnerable road users is a safety
critical requirement for the deployment of autonomous vehicles in heterogeneous
traffic. Although similar in physical appearance to pedestrians, e-scooter
riders exhibit distinctly different movement characteristics and can reach
speeds of up to 45 km/h. The challenge of detecting e-scooter riders is
exacerbated in urban environments where the frequency of partial occlusion is
increased as riders navigate between vehicles, traffic infrastructure and other
road users. This can lead to the non-detection or misclassification of
e-scooter riders as pedestrians, providing inaccurate information for accident
mitigation and path planning in autonomous vehicle applications. This research
introduces a novel benchmark for partially occluded e-scooter rider detection
to facilitate the objective characterization of detection models. A novel,
occlusion-aware method of e-scooter rider detection is presented that achieves
a 15.93% improvement in detection performance over the current state of the
art.
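The benchmark characterizes detectors at different occlusion levels. A minimal sketch of such per-level binning is shown below, assuming axis-aligned boxes; the helper names, bin edges, and level labels are all hypothetical and not the paper's protocol:

```python
def occlusion_ratio(target, occluders):
    """Approximate fraction of the target box covered by occluder boxes.

    Boxes are (x1, y1, x2, y2) tuples. Coverage is approximated by summing
    pairwise intersections, which over-counts where occluders overlap
    each other inside the target.
    """
    x1, y1, x2, y2 = target
    area = max(0, x2 - x1) * max(0, y2 - y1)
    if area == 0:
        return 0.0
    covered = 0.0
    for ox1, oy1, ox2, oy2 in occluders:
        iw = max(0, min(x2, ox2) - max(x1, ox1))
        ih = max(0, min(y2, oy2) - max(y1, oy1))
        covered += iw * ih
    return min(covered / area, 1.0)


def occlusion_bin(ratio):
    """Assign an occlusion level so detection metrics can be reported per bin."""
    if ratio < 0.1:
        return "none"
    if ratio < 0.5:
        return "partial"
    return "heavy"
```

Reporting detection performance per bin, rather than as one aggregate number, is what makes the characterization "objective" with respect to occlusion.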
Related papers
- Evaluating Vision-Language Models for Zero-Shot Detection, Classification, and Association of Motorcycles, Passengers, and Helmets [0.0]
This study evaluates the efficacy of an advanced vision-language foundation model, OWLv2, in detecting and classifying various helmet-wearing statuses of motorcycle occupants using video data.
We employ a cascaded model approach for detection and classification tasks, integrating OWLv2 and CNN models.
The results highlight the potential of zero-shot learning to address challenges arising from incomplete and biased training datasets.
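A detect-then-classify cascade of the kind described can be sketched as follows; `detect` and `classify` are placeholder callables standing in for the OWLv2 and CNN stages, and the 0.3 threshold is an assumption, not the study's configuration:

```python
def cascade(detect, classify, image, det_threshold=0.3):
    """Run an open-vocabulary detector, then classify each confident
    detection (e.g. helmet vs. no-helmet) with a second-stage model.

    `detect(image)` yields (box, score) pairs; `classify(image, box)`
    returns a label for that region. Both stages are supplied by the caller.
    """
    results = []
    for box, score in detect(image):
        if score >= det_threshold:  # drop low-confidence proposals
            results.append((box, classify(image, box)))
    return results
```

Keeping the two stages behind plain callables is what lets a zero-shot detector be swapped in without retraining the classifier.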
arXiv Detail & Related papers (2024-08-05T05:30:36Z)
- On using Machine Learning Algorithms for Motorcycle Collision Detection [0.0]
Impact simulations show that the risk of severe injury or death in the event of a motorcycle-to-car impact can be greatly reduced if the motorcycle is equipped with passive safety measures such as airbags and seat belts.
For the challenge of reliably detecting impending collisions, this paper presents an investigation towards the applicability of machine learning algorithms.
arXiv Detail & Related papers (2024-03-14T15:32:25Z)
- DenseLight: Efficient Control for Large-scale Traffic Signals with Dense Feedback [109.84667902348498]
Traffic Signal Control (TSC) aims to reduce the average travel time of vehicles in a road network.
Most prior TSC methods leverage deep reinforcement learning to search for a control policy.
We propose DenseLight, a novel RL-based TSC method that employs an unbiased reward function to provide dense feedback on policy effectiveness.
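Dense per-step feedback contrasts with sparse end-of-episode travel-time rewards. A minimal illustration using queue length as the proxy signal is given below; this is a common TSC reward shape, not DenseLight's actual unbiased reward function:

```python
def dense_reward(prev_queues, curr_queues):
    """Per-timestep reward: negative growth of total queued vehicles.

    Positive when queues shrink, negative when they grow, so the control
    policy receives feedback at every step instead of once per episode.
    """
    return sum(prev_queues) - sum(curr_queues)
```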
arXiv Detail & Related papers (2023-06-13T05:58:57Z)
- Infrastructure-based End-to-End Learning and Prevention of Driver Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
arXiv Detail & Related papers (2023-03-21T22:55:51Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Risk assessment and mitigation of e-scooter crashes with naturalistic driving data [2.862606936691229]
This paper presents a naturalistic driving study with a focus on e-scooter and vehicle encounters.
The goal is to quantitatively measure the behaviors of e-scooter riders in different encounters to help facilitate crash scenario modeling.
arXiv Detail & Related papers (2022-12-24T05:28:31Z)
- A Wearable Data Collection System for Studying Micro-Level E-Scooter Behavior in Naturalistic Road Environment [3.5466525046297264]
This paper proposes a wearable data collection system for investigating micro-level e-Scooter motion behavior in a naturalistic road environment.
An e-Scooter-based data acquisition system has been developed by integrating LiDAR, cameras, and GPS using the Robot Operating System (ROS).
arXiv Detail & Related papers (2022-12-22T18:58:54Z)
- Detecting, Tracking and Counting Motorcycle Rider Traffic Violations on Unconstrained Roads [27.351236436457445]
In many Asian countries with unconstrained road traffic conditions, driving violations such as not wearing helmets and triple-riding are a significant source of fatalities involving motorcycles.
We propose an approach for detecting, tracking, and counting motorcycle riding violations in videos taken from a vehicle-mounted dashboard camera.
arXiv Detail & Related papers (2022-04-18T15:17:40Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim at predicting an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to overfitting to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
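One round of the pseudo-label self-training described above can be sketched as below; the function names and the 0.8 confidence threshold are illustrative, and the paper additionally refines labels by replaying ("playing back") the same driving sequences:

```python
def pseudo_label_round(predict, fine_tune, target_scenes, confidence=0.8):
    """One self-training round for unsupervised domain adaptation.

    `predict(scene)` returns detections from the source-trained detector
    as dicts with a "score" key; high-confidence detections become
    pseudo-labels, and `fine_tune` adapts the detector on them.
    """
    pseudo_labels = []
    for scene in target_scenes:
        pseudo_labels.append(
            [det for det in predict(scene) if det["score"] >= confidence]
        )
    fine_tune(target_scenes, pseudo_labels)  # adapt on target-domain labels
    return pseudo_labels
```

Filtering by confidence trades label coverage for label quality, which is why such loops are typically run for several rounds with a fixed or annealed threshold.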
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Driver Intention Anticipation Based on In-Cabin and Driving Scene Monitoring [52.557003792696484]
We present a framework for the detection of the drivers' intention based on both in-cabin and traffic scene videos.
Our framework achieves a prediction accuracy of 83.98% and an F1-score of 84.3%.
arXiv Detail & Related papers (2020-06-20T11:56:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.