An embedded deep learning system for augmented reality in firefighting
applications
- URL: http://arxiv.org/abs/2009.10679v1
- Date: Tue, 22 Sep 2020 16:55:44 GMT
- Title: An embedded deep learning system for augmented reality in firefighting
applications
- Authors: Manish Bhattarai, Aura Rose Jensen-Curtis, Manel Martínez-Ramón
- Abstract summary: This research implements recent advancements in technology such as deep learning, point cloud and thermal imaging, and augmented reality platforms.
We have designed and built a prototype embedded system that can leverage data streamed from cameras built into a firefighter's personal protective equipment (PPE) to capture thermal, RGB color, and depth imagery.
The embedded system analyzes and returns the processed images via wireless streaming, where they can be viewed remotely and relayed back to the firefighter using an augmented reality platform.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Firefighting is a dynamic activity, in which numerous operations occur
simultaneously. Maintaining situational awareness (i.e., knowledge of current
conditions and activities at the scene) is critical to the accurate
decision-making necessary for the safe and successful navigation of a fire
environment by firefighters. Conversely, the disorientation caused by hazards
such as smoke and extreme heat can lead to injury or even fatality. This
research implements recent advancements in technology such as deep learning,
point cloud and thermal imaging, and augmented reality platforms to improve a
firefighter's situational awareness and scene navigation through improved
interpretation of that scene. We have designed and built a prototype embedded
system that can leverage data streamed from cameras built into a firefighter's
personal protective equipment (PPE) to capture thermal, RGB color, and depth
imagery and then deploy already developed deep learning models to analyze the
input data in real time. The embedded system analyzes and returns the processed
images via wireless streaming, where they can be viewed remotely and relayed
back to the firefighter using an augmented reality platform that visualizes the
results of the analyzed inputs and draws the firefighter's attention to objects
of interest, such as doors and windows otherwise invisible through smoke and
flames.
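The pipeline described in the abstract (capture thermal/RGB/depth imagery from PPE-mounted cameras, run deployed deep learning models on the embedded system, and stream annotated results back to an AR display) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the `Frame` structure, the threshold-based `detect_objects_of_interest` stand-in for the deployed model, and all names are assumptions.

```python
# Hypothetical sketch of the capture -> inference -> stream loop from the
# abstract. The camera frame layout and the "model" below are placeholders,
# not the authors' actual system.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Frame:
    thermal: List[List[float]]              # thermal intensity grid
    rgb: List[List[Tuple[int, int, int]]]   # color image pixels
    depth: List[List[float]]                # per-pixel depth, in metres


def detect_objects_of_interest(frame: Frame) -> List[str]:
    """Stand-in for the deployed deep learning model: flag frames whose
    thermal signature suggests an object of interest (e.g. a door or
    window edge otherwise hidden by smoke)."""
    hot = any(v > 150.0 for row in frame.thermal for v in row)
    return ["door_candidate"] if hot else []


def process_stream(frames: List[Frame]) -> List[List[str]]:
    """Analyze each incoming frame; the returned annotations would be
    wirelessly streamed back for rendering on the AR platform."""
    return [detect_objects_of_interest(f) for f in frames]
```

In the real system the per-frame model would be a trained CNN running on the embedded hardware, and the annotations would be overlaid on the firefighter's view rather than returned as strings.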
Related papers
- Floor extraction and door detection for visually impaired guidance [78.94595951597344]
Finding obstacle-free paths in unknown environments is a major navigation challenge for visually impaired people and autonomous robots.
New devices based on computer vision systems can help visually impaired people navigate unknown environments safely.
This work proposes a combination of sensors and algorithms for building a navigation system for visually impaired people.
arXiv Detail & Related papers (2024-01-30T14:38:43Z) - Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts,
Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z) - Image-Based Fire Detection in Industrial Environments with YOLOv4 [53.180678723280145]
This work looks into the potential of AI to detect and recognize fires and reduce detection time using object detection on an image stream.
To this end, we collected and labeled appropriate data from several public sources, which have been used to train and evaluate several models based on the popular YOLOv4 object detector.
arXiv Detail & Related papers (2022-12-09T11:32:36Z) - Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone
Racing [52.50284630866713]
Existing systems often require hand-engineered components for state estimation, planning, and control.
This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies.
arXiv Detail & Related papers (2022-10-26T19:03:17Z) - An Empirical Study of Remote Sensing Pretraining [117.90699699469639]
We conduct an empirical study of remote sensing pretraining (RSP) on aerial images.
RSP can help deliver distinctive performances in scene recognition tasks.
RSP mitigates the data discrepancies of traditional ImageNet pretraining on RS images, but it may still suffer from task discrepancies.
arXiv Detail & Related papers (2022-04-06T13:38:11Z) - Spatio-Temporal Split Learning for Autonomous Aerial Surveillance using
Urban Air Mobility (UAM) Networks [16.782309873372057]
This paper utilizes surveillance UAVs for the purpose of detecting the presence of a fire in the streets.
Spatio-temporal split learning is applied to this scenario to preserve privacy and globally train a fire classification model.
This paper explores the adequate number of clients and data ratios for split learning in this UAV setting, as well as the required network infrastructure.
arXiv Detail & Related papers (2021-11-15T01:39:31Z) - Meta-UDA: Unsupervised Domain Adaptive Thermal Object Detection using
Meta-Learning [64.92447072894055]
Infrared (IR) cameras are robust under adverse illumination and lighting conditions.
We propose an algorithm-agnostic meta-learning framework to improve existing UDA methods.
We produce a state-of-the-art thermal detector for the KAIST and DSIAC datasets.
arXiv Detail & Related papers (2021-10-07T02:28:18Z) - Integrating Deep Learning and Augmented Reality to Enhance Situational
Awareness in Firefighting Environments [4.061135251278187]
We present a new four-pronged approach to build firefighters' situational awareness, for the first time in the literature.
First, we used a deep Convolutional Neural Network (CNN) system to classify and identify objects of interest from thermal imagery in real-time.
Next, we extended this CNN framework to object detection, tracking, and segmentation with a Mask R-CNN framework, and to scene description with a multimodal natural language processing (NLP) framework.
Third, we built a deep Q-learning-based agent, immune to stress-induced disorientation and anxiety, capable of making clear navigation decisions based on the observed
arXiv Detail & Related papers (2021-07-23T06:35:13Z) - Aerial Imagery Pile burn detection using Deep Learning: the FLAME
dataset [9.619617596045911]
FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) offers a dataset of aerial images of fires.
This paper provides a fire image dataset collected by drones during a prescribed burn of piled detritus in an Arizona pine forest.
The paper also highlights solutions to two machine learning problems, the first being binary classification of video frames based on the presence [and absence] of fire flames.
arXiv Detail & Related papers (2020-12-28T00:00:41Z) - A deep Q-Learning based Path Planning and Navigation System for
Firefighting Environments [3.24890820102255]
We propose a deep Q-learning based agent that is immune to stress-induced disorientation and anxiety.
As a proof of concept, we simulate a structural fire in the Unreal Engine game engine.
We exploit experience replay to accelerate the learning process and augment the learning of the agent with human-derived experiences.
arXiv Detail & Related papers (2020-11-12T15:43:17Z) - A Novel Indoor Positioning System for unprepared firefighting scenarios [2.446948464551684]
This research implements novel optical-flow-based video compass orientation estimation and fused-IMU-data-based activity recognition for Indoor Positioning Systems (IPS).
This technique helps first responders enter unprepared, unknown environments while maintaining situational awareness, such as the orientation and position of victim firefighters.
arXiv Detail & Related papers (2020-08-04T05:46:03Z)
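The deep Q-learning path-planning paper above accelerates training with experience replay. A minimal sketch of such a replay buffer follows; the class and method names are illustrative, not taken from that paper's code.

```python
# Minimal experience-replay buffer of the kind used to accelerate deep
# Q-learning. Names and the fixed-seed sampling are illustrative choices.
import random
from collections import deque


class ReplayBuffer:
    def __init__(self, capacity: int, seed: int = 0):
        # deque with maxlen evicts the oldest transition once full
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        """Store one transition observed while interacting with the environment."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        """Uniformly sample past transitions; this breaks the temporal
        correlation between consecutive experiences, which stabilizes
        training of the Q-network."""
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

Human-derived experiences, as mentioned in that abstract, could be supported by pre-filling such a buffer with transitions recorded from human demonstrations before the agent's own interactions begin.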
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.