A Novel Indoor Positioning System for unprepared firefighting scenarios
- URL: http://arxiv.org/abs/2008.01344v1
- Date: Tue, 4 Aug 2020 05:46:03 GMT
- Title: A Novel Indoor Positioning System for unprepared firefighting scenarios
- Authors: Vamsi Karthik Vadlamani, Manish Bhattarai, Meenu Ajith, Manel Martínez-Ramón
- Abstract summary: This research implements a novel optical flow based video compass for orientation estimation and fused IMU data based activity recognition for Indoor Positioning Systems (IPS).
This technique helps first responders enter unprepared, unknown environments while maintaining situational awareness, such as the orientation and position of victim firefighters.
- Score: 2.446948464551684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Situational awareness and indoor location tracking for firefighters are tasks of paramount importance in search and rescue operations. GPS is not a viable solution for Indoor Positioning Systems (IPS). Other techniques exist, such as dead reckoning, WiFi- and Bluetooth-based triangulation, and Structure from Motion (SfM) based scene reconstruction. However, due to the high temperatures and rapidly changing environment of fires, and the low parallax in thermal images, these techniques cannot relay, in real time, the information needed to increase situational awareness in a firefighting environment. Thermal imaging cameras are used because of smoke and low visibility, which makes obtaining relative orientation from vanishing point estimation very difficult. The technique presented in this research implements a novel optical flow based video compass for orientation estimation and fused IMU data based activity recognition for IPS. This technique helps first responders enter unprepared, unknown environments while maintaining situational awareness, such as the orientation and position of victim firefighters.
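The core idea of the optical flow video compass can be illustrated with a minimal sketch: the dominant horizontal image motion between consecutive thermal frames is treated as a proxy for camera yaw and integrated into a heading estimate. The field-of-view value, Farneback parameters, and input file below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an optical-flow video compass (not the authors' code).
# Assumes: a grayscale/thermal video stream, an approximate horizontal
# field of view (FOV_DEG), and that camera rotation dominates translation.
import cv2
import numpy as np

FOV_DEG = 57.0                                      # assumed horizontal FOV
cap = cv2.VideoCapture("thermal_walkthrough.mp4")   # hypothetical input file

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
deg_per_px = FOV_DEG / prev_gray.shape[1]
heading_deg = 0.0        # integrated yaw, relative to the starting frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Dense Farneback optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)

    # Median horizontal flow approximates the global image shift caused by yaw.
    dx = float(np.median(flow[..., 0]))
    heading_deg = (heading_deg - dx * deg_per_px) % 360.0

    prev_gray = gray
    # heading_deg would be combined with IMU-based activity recognition
    # to update the firefighter's position estimate (not shown here).
```

In a full system this heading would be fused with IMU-based activity recognition (for example step detection) to propagate the firefighter's position; the paper's actual fusion scheme is not reproduced here.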
Related papers
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Fusion of Radio and Camera Sensor Data for Accurate Indoor Positioning [45.926983284834954]
We propose a novel positioning system, RAVEL, which fuses anonymous visual detections captured by widely available camera infrastructure, with radio readings.
Our experiments show that although the WiFi measurements are not by themselves sufficiently accurate, when they are fused with camera data, they become a catalyst for pulling together ambiguous, fragmented, and anonymous visual tracklets.
arXiv Detail & Related papers (2023-02-01T11:37:41Z)
- Spatio-Temporal Split Learning for Autonomous Aerial Surveillance using Urban Air Mobility (UAM) Networks [16.782309873372057]
This paper utilizes surveillance UAVs for the purpose of detecting the presence of a fire in the streets.
Spatio-temporal split learning is applied to this scenario to preserve privacy and globally train a fire classification model.
This paper explores the adequate number of clients and data ratios for split learning in this UAV setting, as well as the required network infrastructure.
arXiv Detail & Related papers (2021-11-15T01:39:31Z)
- Meta-UDA: Unsupervised Domain Adaptive Thermal Object Detection using Meta-Learning [64.92447072894055]
Infrared (IR) cameras are robust under adverse illumination and lighting conditions.
We propose an algorithm-agnostic meta-learning framework to improve existing UDA methods.
We produce a state-of-the-art thermal detector for the KAIST and DSIAC datasets.
arXiv Detail & Related papers (2021-10-07T02:28:18Z)
- Integrating Deep Learning and Augmented Reality to Enhance Situational Awareness in Firefighting Environments [4.061135251278187]
We present a new four-pronged approach to build firefighters' situational awareness for the first time in the literature.
First, we used a deep Convolutional Neural Network (CNN) system to classify and identify objects of interest from thermal imagery in real-time.
Next, we extended this CNN framework for object detection, tracking, and segmentation with a Mask RCNN framework, and scene description with a multimodal natural language processing (NLP) framework.
Third, we built a deep Q-learning-based agent, immune to stress-induced disorientation and anxiety, capable of making clear navigation decisions based on the observed
arXiv Detail & Related papers (2021-07-23T06:35:13Z)
- Infrared Beacons for Robust Localization [58.720142291102135]
This paper presents a localization system that uses infrared beacons and a camera equipped with an optical band-pass filter.
Our system can reliably detect and identify individual beacons at 100m distance regardless of lighting conditions.
arXiv Detail & Related papers (2021-04-19T14:23:20Z)
- Unsupervised Depth and Ego-motion Estimation for Monocular Thermal Video using Multi-spectral Consistency Loss [76.77673212431152]
We propose an unsupervised learning method for all-day depth and ego-motion estimation.
The proposed method exploits a multi-spectral consistency loss to give complementary supervision to the networks.
Networks trained with the proposed method robustly estimate the depth and pose from monocular thermal video under low-light and even zero-light conditions.
arXiv Detail & Related papers (2021-03-01T05:29:04Z)
- Aerial Imagery Pile burn detection using Deep Learning: the FLAME dataset [9.619617596045911]
FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) offers a dataset of aerial images of fires.
This paper provides a fire image dataset collected by drones during a prescribed burn of piled detritus in an Arizona pine forest.
The paper also highlights solutions to two machine learning problems, including binary classification of video frames based on the presence [and absence] of fire flames.
arXiv Detail & Related papers (2020-12-28T00:00:41Z)
- Demo Abstract: Indoor Positioning System in Visually-Degraded Environments with Millimetre-Wave Radar and Inertial Sensors [44.58134907168034]
We present a real-time indoor positioning system which fuses millimetre-wave (mmWave) radar and Inertial Measurement Units (IMU) data via deep sensor fusion.
Good accuracy and resilience were exhibited even in poorly illuminated scenes.
arXiv Detail & Related papers (2020-10-26T17:41:25Z)
- An embedded deep learning system for augmented reality in firefighting applications [2.750124853532832]
This research implements recent advancements in technology such as deep learning, point cloud and thermal imaging, and augmented reality platforms.
We have designed and built a prototype embedded system that can leverage data streamed from cameras built into a firefighter's personal protective equipment (PPE) to capture thermal, RGB color, and depth imagery.
The embedded system analyzes and returns the processed images via wireless streaming, where they can be viewed remotely and relayed back to the firefighter using an augmented reality platform.
arXiv Detail & Related papers (2020-09-22T16:55:44Z)
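As a rough, assumption-laden sketch of the capture-process-stream loop described in the last entry above (on-device capture from PPE-mounted cameras, local processing, and wireless relay to a remote viewer), the snippet below sends length-prefixed JPEG frames over TCP. The host, port, camera index, and placeholder processing step are hypothetical and are not taken from the paper.

```python
# Rough sketch of an embedded capture-process-stream loop (illustrative only;
# device index, host, and port are assumptions, not from the paper).
import socket
import struct

import cv2

HOST, PORT = "192.168.4.1", 5000   # hypothetical command-post receiver

cam = cv2.VideoCapture(0)          # e.g. a PPE-mounted thermal/RGB camera
sock = socket.create_connection((HOST, PORT))

try:
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        # Placeholder for on-device analysis (detection, segmentation, ...).
        processed = cv2.resize(frame, (640, 480))

        # JPEG-encode and send length-prefixed frames over TCP.
        ok, buf = cv2.imencode(".jpg", processed)
        if not ok:
            continue
        data = buf.tobytes()
        sock.sendall(struct.pack(">I", len(data)) + data)
finally:
    cam.release()
    sock.close()
```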