Automated Object Behavioral Feature Extraction for Potential Risk
Analysis based on Video Sensor
- URL: http://arxiv.org/abs/2107.03554v1
- Date: Thu, 8 Jul 2021 01:11:31 GMT
- Title: Automated Object Behavioral Feature Extraction for Potential Risk
Analysis based on Video Sensor
- Authors: Byeongjoon Noh, Wonjun Noh, David Lee, Hwasoo Yeo
- Abstract summary: Pedestrians are exposed to risk of death or serious injuries on roads, especially unsignalized crosswalks.
We propose an automated and simpler system for effectively extracting object behavioral features from video sensors deployed on the road.
This study demonstrates the potential for a network of connected video sensors to provide actionable data for smart cities.
- Score: 6.291501119156943
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pedestrians are exposed to the risk of death or serious injury on
roads, especially at unsignalized crosswalks, for a variety of reasons. To
date, a wide variety of studies have reported on vision-based traffic safety
systems. However, many of these studies require manual inspection of large
volumes of traffic video to reliably obtain the behavioral factors of
traffic-related objects. In this paper, we propose an automated and simpler
system for effectively extracting object behavioral features from video
sensors deployed on the road. We conduct basic statistical analysis on these
features and show how they can be useful for monitoring traffic behavior on
the road. We confirm the feasibility of the proposed system by applying our
prototype to two unsignalized crosswalks in Osan city, South Korea. To
conclude, we compare the behaviors of vehicles and pedestrians in the two
areas by simple statistical analysis. This study demonstrates the potential
for a network of connected video sensors to provide actionable data for smart
cities to improve pedestrian safety in dangerous road environments.
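
As an illustration of what "object behavioral features" can mean in practice, here is a minimal Python sketch that turns a tracked object's per-frame centroids into speed and acceleration statistics. The frame rate, pixel-to-meter calibration, and function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def trajectory_features(centroids_px, fps=30.0, meters_per_pixel=0.05):
    """Compute simple behavioral features from one object's trajectory.

    centroids_px: array of shape (T, 2) with per-frame (x, y) image
    coordinates of a tracked vehicle or pedestrian. The fps and
    meters-per-pixel calibration values are placeholders.
    """
    pts = np.asarray(centroids_px, dtype=float) * meters_per_pixel  # to meters
    step = np.diff(pts, axis=0)                   # displacement per frame
    speed = np.linalg.norm(step, axis=1) * fps    # m/s per frame interval
    accel = np.diff(speed) * fps                  # m/s^2
    return {
        "mean_speed": float(speed.mean()),
        "max_speed": float(speed.max()),
        "mean_abs_accel": float(np.abs(accel).mean()) if accel.size else 0.0,
        "path_length": float(np.linalg.norm(step, axis=1).sum()),
    }

# Example: a vehicle decelerating as it approaches a crosswalk.
track = [(100 + 8 * t - 0.15 * t**2, 240) for t in range(30)]
print(trajectory_features(track))
```

In a deployed pipeline, the centroids would come from an object detector and tracker running on the video sensor feed, and the resulting features could then be aggregated per crosswalk for the kind of statistical comparison described in the abstract.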
Related papers
- Traffic and Safety Rule Compliance of Humans in Diverse Driving Situations [48.924085579865334]
Analyzing human data is crucial for developing autonomous systems that replicate safe driving practices.
This paper presents a comparative evaluation of human compliance with traffic and safety rules across multiple trajectory prediction datasets.
arXiv Detail & Related papers (2024-11-04T09:21:00Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of deep learning-based detection and segmentation tasks, and of the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- TAD: A Large-Scale Benchmark for Traffic Accidents Detection from Video Surveillance [2.1076255329439304]
Existing traffic accident datasets are either small-scale, not from surveillance cameras, not open-sourced, or not built for freeway scenes.
After integration and annotation along various dimensions, a large-scale traffic accident dataset named TAD is proposed in this work.
arXiv Detail & Related papers (2022-09-26T03:00:50Z)
- Review on Action Recognition for Accident Detection in Smart City Transportation Systems [0.0]
Monitoring traffic flows in a smart city using different surveillance cameras can play a significant role in recognizing accidents and alerting first responders.
The utilization of action recognition (AR) in computer vision tasks has contributed towards high-precision applications in video surveillance, medical imaging, and digital signal processing.
This paper provides potential research directions for developing and integrating accident detection systems into autonomous cars and public traffic safety systems.
arXiv Detail & Related papers (2022-08-20T03:21:44Z)
- Safety-aware Motion Prediction with Unseen Vehicles for Autonomous Driving [104.32241082170044]
We study a new task, safety-aware motion prediction with unseen vehicles for autonomous driving.
Unlike the existing trajectory prediction task for seen vehicles, we aim to predict an occupancy map.
Our approach is the first one that can predict the existence of unseen vehicles in most cases.
arXiv Detail & Related papers (2021-09-03T13:33:33Z)
- An Experimental Urban Case Study with Various Data Sources and a Model for Traffic Estimation [65.28133251370055]
We organize an experimental campaign with video measurements in an area within the urban network of Zurich, Switzerland.
We focus on capturing the traffic state in terms of traffic flow and travel times, using measurements from established thermal cameras.
We propose a simple yet efficient Multiple Linear Regression (MLR) model to estimate travel times by fusing various data sources (a minimal sketch of such a model appears after this list).
arXiv Detail & Related papers (2021-08-02T08:13:57Z)
- Analyzing vehicle pedestrian interactions combining data cube structure and predictive collision risk estimation model [5.73658856166614]
This study introduces a new concept of a pedestrian safety system that combines the field and the centralized processes.
The system can warn of upcoming risks immediately in the field and improve the safety of risk-prone areas by assessing the safety levels of roads without requiring actual collisions.
arXiv Detail & Related papers (2021-07-26T23:00:56Z)
- Interaction Detection Between Vehicles and Vulnerable Road Users: A Deep Generative Approach with Attention [9.442285577226606]
We propose a conditional generative model for interaction detection at intersections.
It aims to automatically analyze massive video data about the continuity of road users' behavior.
The model's efficacy was validated by testing on real-world datasets.
arXiv Detail & Related papers (2021-05-09T10:03:55Z)
- Vision based Pedestrian Potential Risk Analysis based on Automated Behavior Feature Extraction for Smart and Safe City [5.759189800028578]
We propose a comprehensive analytical model for pedestrian potential risk using video footage gathered by road security cameras deployed at such crossings.
The proposed system automatically detects vehicles and pedestrians, calculates frame-by-frame trajectories, and extracts behavioral features affecting the likelihood of potentially dangerous scenes between these objects (one such interaction feature is sketched after this list).
We validated its feasibility and applicability by applying it at multiple crosswalks in Osan city, Korea.
arXiv Detail & Related papers (2021-05-06T11:03:10Z)
- End-to-End Intersection Handling using Multi-Agent Deep Reinforcement Learning [63.56464608571663]
Navigating through intersections is one of the most challenging tasks for an autonomous vehicle.
In this work, we focus on the implementation of a system able to navigate through intersections where only traffic signs are provided.
We propose a multi-agent system using a continuous, model-free Deep Reinforcement Learning algorithm to train a neural network that predicts both the acceleration and the steering angle at each time step.
arXiv Detail & Related papers (2021-04-28T07:54:40Z)
- Detecting 32 Pedestrian Attributes for Autonomous Vehicles [103.87351701138554]
In this paper, we address the problem of jointly detecting pedestrians and recognizing 32 pedestrian attributes.
We introduce a Multi-Task Learning (MTL) model relying on a composite field framework, which achieves both goals in an efficient way (a reduced sketch of such a multi-task head appears after this list).
We show competitive detection and attribute recognition results, as well as more stable MTL training.
arXiv Detail & Related papers (2020-12-04T15:10:12Z)
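
Regarding the entry "An Experimental Urban Case Study with Various Data Sources and a Model for Traffic Estimation" above: a Multiple Linear Regression model fusing several measurement sources can be reduced to ordinary least squares, as in the hypothetical sketch below. The feature columns (loop-detector flow, thermal-camera count, peak-hour flag) and the numbers are illustrative assumptions, not the paper's actual data or variables.

```python
import numpy as np

# Hypothetical fused features per 5-minute interval:
# [loop_detector_flow (veh/h), thermal_camera_count (veh), is_peak_hour (0/1)]
X = np.array([
    [420.0, 35.0, 0.0],
    [610.0, 52.0, 1.0],
    [880.0, 73.0, 1.0],
    [300.0, 24.0, 0.0],
    [720.0, 60.0, 1.0],
])
y = np.array([62.0, 85.0, 118.0, 55.0, 97.0])  # observed travel time (s), illustrative

# Ordinary least squares with an intercept column.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_travel_time(features):
    """Predict travel time (s) for one interval of fused measurements."""
    return float(np.append(features, 1.0) @ coef)

print(predict_travel_time([650.0, 55.0, 1.0]))
```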
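Regarding the entry "Vision based Pedestrian Potential Risk Analysis based on Automated Behavior Feature Extraction for Smart and Safe City" above: one common way to quantify a vehicle-pedestrian interaction from two extracted trajectories is post-encroachment time (PET). The sketch below is a simplified, assumed formulation (the conflict-point radius, frame rate, and function name are placeholders), not the specific feature set used in that paper.

```python
import numpy as np

def post_encroachment_time(ped_track, veh_track, conflict_point, radius=1.0, fps=30.0):
    """Post-encroachment time (s): how long after the first road user leaves
    the conflict area the second one enters it. Smaller values suggest a
    riskier interaction. Tracks are (T, 2) arrays of positions in meters.
    """
    ped = np.asarray(ped_track, dtype=float)
    veh = np.asarray(veh_track, dtype=float)
    cp = np.asarray(conflict_point, dtype=float)

    def frames_in_conflict(track):
        dist = np.linalg.norm(track - cp, axis=1)
        return np.flatnonzero(dist <= radius)

    ped_frames = frames_in_conflict(ped)
    veh_frames = frames_in_conflict(veh)
    if ped_frames.size == 0 or veh_frames.size == 0:
        return None  # no encroachment observed

    if ped_frames[-1] < veh_frames[0]:        # pedestrian crosses first
        gap = veh_frames[0] - ped_frames[-1]
    elif veh_frames[-1] < ped_frames[0]:      # vehicle crosses first
        gap = ped_frames[0] - veh_frames[-1]
    else:
        gap = 0                               # simultaneous occupancy
    return gap / fps

# Illustrative tracks: pedestrian crossing, vehicle arriving shortly after.
ped = [(0.5 * t, 5.0) for t in range(60)]
veh = [(12.0, 40.0 - 0.6 * t) for t in range(60)]
print(post_encroachment_time(ped, veh, conflict_point=(12.0, 5.0)))
```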
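Regarding the entry "Detecting 32 Pedestrian Attributes for Autonomous Vehicles" above: a multi-task head that shares one feature extractor between a detection branch and an attribute branch might, in a much-reduced form, look like the PyTorch sketch below. The layer sizes and structure are placeholders and do not reproduce the paper's composite field architecture.

```python
import torch
import torch.nn as nn

class PedestrianMTLHead(nn.Module):
    """Toy multi-task head: shared features feed a detection-confidence
    branch and a 32-way attribute branch."""
    def __init__(self, in_channels=256, num_attributes=32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.detect = nn.Conv2d(128, 1, kernel_size=1)                   # presence confidence map
        self.attributes = nn.Conv2d(128, num_attributes, kernel_size=1)  # per-location attribute logits

    def forward(self, backbone_features):
        shared = self.shared(backbone_features)
        return torch.sigmoid(self.detect(shared)), self.attributes(shared)

head = PedestrianMTLHead()
conf, attr_logits = head(torch.randn(1, 256, 32, 32))
print(conf.shape, attr_logits.shape)  # (1, 1, 32, 32) and (1, 32, 32, 32)
```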
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences.