DASC: Towards A Road Damage-Aware Social-Media-Driven Car Sensing
Framework for Disaster Response Applications
- URL: http://arxiv.org/abs/2006.02681v1
- Date: Thu, 4 Jun 2020 07:55:18 GMT
- Title: DASC: Towards A Road Damage-Aware Social-Media-Driven Car Sensing
Framework for Disaster Response Applications
- Authors: Md Tahmid Rashid and Daniel (Yue) Zhang and Dong Wang
- Abstract summary: We present DASC, a road Damage-Aware Social-media-driven Car sensing framework that exploits the collective power of social sensing and VSNs for reliable disaster response applications.
DASC distills signals from social media and identifies road damage to effectively route cars to target areas for verifying emergency events.
The results of a real-world application demonstrate the superiority of DASC over current VSN-based solutions in detection accuracy and efficiency.
- Score: 15.248427579924094
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While vehicular sensor networks (VSNs) have earned the stature of a mobile
sensing paradigm that utilizes sensors built into cars, they have a limited sensing
scope since car drivers only discover new events opportunistically.
Conversely, social sensing is emerging as a new sensing paradigm where
measurements about the physical world are collected from humans. In contrast to
VSNs, social sensing is more pervasive, but one of its key limitations lies in
its inconsistent reliability stemming from the data contributed by unreliable
human sensors. In this paper, we present DASC, a road Damage-Aware
Social-media-driven Car sensing framework that exploits the collective power of
social sensing and VSNs for reliable disaster response applications. However,
integrating VSNs with social sensing introduces a new set of challenges: i) How
to leverage noisy and unreliable social signals to route vehicles to the
correct regions of interest? ii) How to tackle the inconsistent availability
(e.g., churn) caused by car drivers being rational actors? iii) How to
efficiently guide the cars to the event locations with little prior knowledge
of the road damage caused by the disaster, while also handling the dynamics of
the physical world and social media? The DASC framework addresses the above
challenges by establishing a novel hybrid social-car sensing system that
employs techniques from game theory, feedback control, and Markov Decision
Process (MDP). In particular, DASC distills signals from social media and
identifies road damage to effectively route cars to target areas for verifying
emergency events. We implement and evaluate DASC in a well-established vehicle
simulator that can emulate real-world disaster response scenarios. The results
of a real-world application demonstrate the superiority of DASC over current
VSN-based solutions in detection accuracy and efficiency.
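To make the damage-aware routing idea more concrete, the sketch below casts vehicle guidance as a Markov Decision Process over a road graph whose edge costs are inflated by damage estimates assumed to be distilled from social-media reports. This is an illustrative sketch only, not the authors' implementation: the road graph, damage probabilities, reward shaping, and helper functions (value_iteration, greedy_route) are all hypothetical.

```python
# Illustrative sketch: damage-aware vehicle routing as an MDP.
# States are intersections, actions are outgoing road segments, and each
# segment's cost is inflated by an estimated damage probability (assumed to be
# derived from social-media signals). Value iteration yields a policy that
# steers a vehicle toward the target region while avoiding likely-damaged roads.
# All nodes, costs, and damage estimates below are hypothetical.

ROADS = {  # node -> {neighbor: (travel_cost, damage_probability)}
    "A": {"B": (2.0, 0.1), "C": (4.0, 0.0)},
    "B": {"A": (2.0, 0.1), "D": (3.0, 0.8)},  # B-D segment is likely damaged
    "C": {"A": (4.0, 0.0), "D": (3.5, 0.05)},
    "D": {},  # target region flagged by social sensing (terminal state)
}
TARGET, GOAL_REWARD, DAMAGE_PENALTY, GAMMA = "D", 10.0, 20.0, 0.95


def value_iteration(tol: float = 1e-6) -> dict:
    """Compute state values; reward = -cost - penalty * damage_prob (+ goal bonus)."""
    values = {node: 0.0 for node in ROADS}
    while True:
        delta = 0.0
        for node, edges in ROADS.items():
            if node == TARGET or not edges:
                continue
            best = max(
                (GOAL_REWARD if nbr == TARGET else 0.0)
                - cost - DAMAGE_PENALTY * p_dmg + GAMMA * values[nbr]
                for nbr, (cost, p_dmg) in edges.items()
            )
            delta = max(delta, abs(best - values[node]))
            values[node] = best
        if delta < tol:
            return values


def greedy_route(start: str, values: dict) -> list:
    """Follow the value function greedily from start to the target region."""
    route, node = [start], start
    while node != TARGET:
        node = max(
            ROADS[node],
            key=lambda n: (GOAL_REWARD if n == TARGET else 0.0)
            - ROADS[node][n][0] - DAMAGE_PENALTY * ROADS[node][n][1]
            + GAMMA * values[n],
        )
        route.append(node)
    return route


if __name__ == "__main__":
    print(greedy_route("A", value_iteration()))  # expected: ['A', 'C', 'D']
```

In this toy instance the nominally shorter path A-B-D is avoided because segment B-D carries a high estimated damage probability, so the greedy policy detours through the undamaged but longer A-C-D route, which is the kind of trade-off the abstract attributes to DASC's MDP-based guidance.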
Related papers
- Cyber-Twin: Digital Twin-boosted Autonomous Attack Detection for Vehicular Ad-Hoc Networks [8.07947129445779]
The rapid evolution of Vehicular Ad-hoc NETworks (VANETs) has ushered in a transformative era for intelligent transportation systems (ITS).
VANETs are increasingly susceptible to cyberattacks, such as jamming and distributed denial of service (DDoS) attacks.
Existing methods face difficulties in detecting dynamic attacks and integrating digital twin technology and artificial intelligence (AI) models to enhance VANET cybersecurity.
This study proposes a novel framework that combines digital twin technology with AI to enhance the security of roadside units (RSUs) in VANETs.
arXiv Detail & Related papers (2024-01-25T08:05:41Z)
- Smart Infrastructure: A Research Junction [5.172393727004225]
We introduce an intelligent research infrastructure equipped with visual sensor technology, located at a public inner-city junction in Aschaffenburg, Germany.
A multiple-view camera system monitors the traffic situation to perceive road users' behavior.
The system is used for research in data generation and for evaluating new highly automated driving (HAD) sensor systems, algorithms, and Artificial Intelligence (AI) training strategies.
arXiv Detail & Related papers (2023-07-12T14:04:12Z)
- FRIGATE: Frugal Spatio-temporal Forecasting on Road Networks [6.9035500229531745]
Existing works are built upon three assumptions that are not practical on real-world road networks.
We develop FRIGATE to address these shortcomings.
FRIGATE is powered by a spatio-temporal GNN that integrates positional, topological, and temporal representations into rich inductive representations.
arXiv Detail & Related papers (2023-06-14T06:28:26Z)
- Enhancing Road Safety through Accurate Detection of Hazardous Driving Behaviors with Graph Convolutional Recurrent Networks [0.2578242050187029]
We present a reliable Driving Behavior Detection (DBD) system based on Graph Convolutional Long Short-Term Memory Networks (GConvLSTM).
Our proposed model achieved a high accuracy of 97.5% for public sensors and an average accuracy of 98.1% for non-public sensors, indicating its consistency and accuracy in both settings.
Our findings demonstrate that the proposed system can effectively detect hazardous and unsafe driving behavior, with potential applications in improving road safety and reducing the number of accidents caused by driver errors.
arXiv Detail & Related papers (2023-05-08T21:05:36Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current landscape of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Cognitive Accident Prediction in Driving Scenes: A Multimodality Benchmark [77.54411007883962]
We propose a Cognitive Accident Prediction (CAP) method that explicitly leverages human-inspired cognition of text description on the visual observation and the driver attention to facilitate model training.
CAP is formulated by an attentive text-to-vision shift fusion module, an attentive scene context transfer module, and the driver attention guided accident prediction module.
We construct a new large-scale benchmark consisting of 11,727 in-the-wild accident videos with over 2.19 million frames.
arXiv Detail & Related papers (2022-12-19T11:43:02Z)
- Exploring Contextual Representation and Multi-Modality for End-to-End Autonomous Driving [58.879758550901364]
Recent perception systems enhance spatial understanding with sensor fusion but often lack full environmental context.
We introduce a framework that integrates three cameras to emulate the human field of view, coupled with top-down bird-eye-view semantic data to enhance contextual representation.
Our method achieves a displacement error of 0.67 m in open-loop settings, surpassing current methods by 6.9% on the nuScenes dataset.
arXiv Detail & Related papers (2022-10-13T05:56:20Z)
- Review on Action Recognition for Accident Detection in Smart City Transportation Systems [0.0]
Monitoring traffic flows in a smart city using different surveillance cameras can play a significant role in recognizing accidents and alerting first responders.
The utilization of action recognition (AR) in computer vision tasks has contributed towards high-precision applications in video surveillance, medical imaging, and digital signal processing.
This paper provides potential research direction to develop and integrate accident detection systems for autonomous cars and public traffic safety systems.
arXiv Detail & Related papers (2022-08-20T03:21:44Z)
- COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles [54.61668577827041]
We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving.
Our experiments on AutoCastSim suggest that our cooperative perception driving models lead to a 40% improvement in average success rate.
arXiv Detail & Related papers (2022-05-04T17:55:12Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the decisions made by autonomous vehicles are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)