Estimation of Human Condition at Disaster Site Using Aerial Drone Images
- URL: http://arxiv.org/abs/2308.04535v1
- Date: Tue, 8 Aug 2023 18:57:01 GMT
- Title: Estimation of Human Condition at Disaster Site Using Aerial Drone Images
- Authors: Tomoki Arai, Kenji Iwata, Kensho Hara, Yutaka Satoh
- Abstract summary: We investigate a method to automatically estimate the damage status of people based on their actions in aerial drone images.
We constructed a new dataset of aerial images of human actions in a hypothetical disaster that occurred in an urban area.
The results showed that the status with characteristic human actions could be classified with a recall rate of more than 80%.
- Score: 10.271448515653276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Drones are being used to assess the situation in various disasters. In this
study, we investigate a method to automatically estimate the damage status of
people based on their actions in aerial drone images in order to understand
disaster sites faster and save labor. We constructed a new dataset of aerial
images of human actions in a hypothetical disaster that occurred in an urban
area, and classified the human damage status using 3D ResNet. The results
showed that the status with characteristic human actions could be classified
with a recall rate of more than 80%, while other statuses with similar human
actions could only be classified with a recall rate of about 50%. In addition,
a cloud-based VR presentation application suggested that drones are effective
for understanding the disaster site and estimating the human condition.
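The per-class recall rates quoted above can be computed from a confusion matrix of true versus predicted action classes. A minimal sketch follows; the class names and counts are hypothetical illustrations, not the paper's actual data:

```python
# Per-class recall: recall_c = TP_c / (TP_c + FN_c), i.e. the fraction of
# samples of a true class that the classifier labels correctly.
# Class labels and counts below are hypothetical, for illustration only.

def per_class_recall(confusion: dict) -> dict:
    """confusion[true_label][predicted_label] -> count."""
    recalls = {}
    for true_label, row in confusion.items():
        total = sum(row.values())          # all samples of this true class
        correct = row.get(true_label, 0)   # samples predicted correctly
        recalls[true_label] = correct / total if total else 0.0
    return recalls

# Hypothetical pattern mirroring the reported gap: a distinctive action
# ("waving") is recognized well, while visually similar statuses
# ("standing" vs "walking") are confused with each other.
confusion = {
    "waving":   {"waving": 85, "standing": 10, "walking": 5},
    "standing": {"waving": 20, "standing": 50, "walking": 30},
}
print(per_class_recall(confusion))
# {'waving': 0.85, 'standing': 0.5}
```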
Related papers
- Deploying Rapid Damage Assessments from sUAS Imagery for Disaster Response [3.441021278275805]
This paper presents the first AI/ML system for automating building damage assessment in uncrewed aerial systems (sUAS) imagery to be deployed operationally during federally declared disasters (Hurricanes Debby and Helene). In response to major disasters, sUAS teams are dispatched to collect imagery of the affected areas to assess damage. At recent disasters, teams collectively delivered between 47GB and 369GB of imagery per day, more imagery than can reasonably be transmitted or interpreted by subject matter experts in the disaster scene. To alleviate this data avalanche encountered in practice, computer vision and machine learning techniques are applied.
arXiv Detail & Related papers (2025-11-05T02:49:15Z)
- BRIGHT: A globally distributed multimodal building damage assessment dataset with very-high-resolution for all-weather disaster response [50.76124284445902]
Building damage assessment (BDA) is an essential capability in the aftermath of a disaster to reduce human casualties. Recent research focuses on the development of AI models to achieve accurate mapping of unseen disaster events. We present a BDA dataset using veRy-hIGH-resoluTion optical and SAR imagery (BRIGHT) to support AI-based all-weather disaster response.
arXiv Detail & Related papers (2025-01-10T14:57:18Z)
- Psych-Occlusion: Using Visual Psychophysics for Aerial Detection of Occluded Persons during Search and Rescue [41.03292974500013]
Small Unmanned Aerial Systems (sUAS) serve as "eyes in the sky" during Emergency Response (ER) scenarios.
Efficient detection of persons from aerial views plays a crucial role in achieving a successful mission outcome.
However, the performance of Computer Vision (CV) models onboard sUAS degrades substantially under rigorous real-life conditions.
We exemplify the use of our behavioral dataset, Psych-ER, by using its human accuracy data to adapt the loss function of a detection model.
arXiv Detail & Related papers (2024-12-07T06:22:42Z)
- Towards Efficient Disaster Response via Cost-effective Unbiased Class Rate Estimation through Neyman Allocation Stratified Sampling Active Learning [11.697034536189094]
We present an innovative algorithm that constructs Neyman stratified random sampling trees for binary classification.
Our findings demonstrate that our method surpasses both passive and conventional active learning techniques.
It effectively addresses the 'sampling bias' challenge in traditional active learning strategies.
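Neyman allocation assigns a sampling budget across strata in proportion to each stratum's size times its standard deviation, so more variable strata get more samples. A minimal sketch of that allocation rule (the stratum sizes and standard deviations below are hypothetical, not taken from the paper):

```python
# Neyman allocation: given a total budget n, stratum sizes N_h, and
# stratum standard deviations s_h, allocate n_h proportional to N_h * s_h.
# All numbers below are hypothetical, for illustration only.

def neyman_allocation(n: int, sizes, stds):
    """Return per-stratum sample counts under Neyman allocation."""
    weights = [N * s for N, s in zip(sizes, stds)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Three strata, e.g. image groups with different label variability:
# the middle stratum is most variable, so it receives the most samples.
print(neyman_allocation(100, sizes=[500, 300, 200], stds=[0.2, 0.5, 0.1]))
# [37, 56, 7]
```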
arXiv Detail & Related papers (2024-05-28T01:34:35Z)
- Incidents1M: a large-scale dataset of images with natural disasters, damage, and incidents [28.16346818821349]
Natural disasters, such as floods, tornadoes, or wildfires, are increasingly pervasive as the Earth undergoes global warming.
It is difficult to predict when and where an incident will occur, so timely emergency response is critical to saving the lives of those endangered by destructive events.
Social media posts can be used as a low-latency data source to understand the progression and aftermath of a disaster, yet parsing this data is tedious without automated methods.
In this work, we present the Incidents1M dataset, a large-scale multi-label dataset which contains 977,088 images, with 43 incident and 49 place categories.
arXiv Detail & Related papers (2022-01-11T23:03:57Z)
- Damage Estimation and Localization from Sparse Aerial Imagery [0.0]
Much of post-disaster aerial imagery is still taken by handheld DSLR cameras from small, manned, fixed-wing aircraft.
We propose an approach to both detect damage in aerial images and localize it in world coordinates.
We evaluate the performance of our approach on post-event data from the 2016 Louisiana floods, and find that our approach achieves a precision of 88%.
arXiv Detail & Related papers (2021-11-05T19:12:15Z)
- Attention Based Semantic Segmentation on UAV Dataset for Natural Disaster Damage Assessment [0.7614628596146599]
We implement a novel self-attention based semantic segmentation model on a high resolution UAV dataset.
The result inspires to use self-attention schemes in natural disaster damage assessment which will save human lives and reduce economic losses.
arXiv Detail & Related papers (2021-05-30T13:39:03Z)
- Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z)
- Automatic Social Distance Estimation From Images: Performance Evaluation, Test Benchmark, and Algorithm [78.88882860340797]
The COVID-19 virus has caused a global pandemic since March 2020.
Maintaining a minimum of one meter distance from other people is strongly suggested to reduce the risk of infection.
However, there is no suitable test benchmark for such algorithms.
arXiv Detail & Related papers (2021-03-11T16:15:20Z)
- Post-Hurricane Damage Assessment Using Satellite Imagery and Geolocation Features [0.2538209532048866]
We propose a mixed data approach, which leverages publicly available satellite imagery and geolocation features of the affected area to identify damaged buildings after a hurricane.
The method demonstrated significant improvement over performing a similar task using only imagery features, based on a case study of Hurricane Harvey, which affected the Greater Houston area in 2017.
In this work, the geolocation features were chosen to provide information complementary to the imagery features; users may include other features to model the physical behavior of the events, depending on their domain knowledge and the type of disaster.
arXiv Detail & Related papers (2020-12-15T21:30:19Z)
- Assessing out-of-domain generalization for robust building damage detection [78.6363825307044]
Building damage detection can be automated by applying computer vision techniques to satellite imagery.
Models must be robust to a shift in distribution between disaster imagery available for training and the images of the new event.
We argue that future work should focus on the OOD regime instead.
arXiv Detail & Related papers (2020-11-20T10:30:43Z)
- Physics-informed GANs for Coastal Flood Visualization [65.54626149826066]
We create a deep learning pipeline that generates visual satellite images of current and future coastal flooding.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism.
While this work focused on the visualization of coastal floods, we envision the creation of a global visualization of how climate change will shape our earth.
arXiv Detail & Related papers (2020-10-16T02:15:34Z)
- Perceiving Humans: from Monocular 3D Localization to Social Distancing [93.03056743850141]
We present a new cost-effective vision-based method that perceives humans' locations in 3D and their body orientation from a single image.
We show that it is possible to rethink the concept of "social distancing" as a form of social interaction in contrast to a simple location-based rule.
arXiv Detail & Related papers (2020-09-01T10:12:30Z)
- RescueNet: Joint Building Segmentation and Damage Assessment from Satellite Imagery [83.49145695899388]
RescueNet is a unified model that can simultaneously segment buildings and assess the damage levels to individual buildings and can be trained end-to-end.
RescueNet is tested on the large scale and diverse xBD dataset and achieves significantly better building segmentation and damage classification performance than previous methods.
arXiv Detail & Related papers (2020-04-15T19:52:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.