GaLeNet: Multimodal Learning for Disaster Prediction, Management and
Relief
- URL: http://arxiv.org/abs/2206.09242v1
- Date: Sat, 18 Jun 2022 16:45:57 GMT
- Title: GaLeNet: Multimodal Learning for Disaster Prediction, Management and
Relief
- Authors: Rohit Saha, Mengyi Fang, Angeline Yasodhara, Kyryl Truskovskyi, Azin
Asgarian, Daniel Homola, Raahil Shah, Frederik Dieleman, Jack Weatheritt,
Thomas Rogers
- Abstract summary: We propose a multimodal framework (GaLeNet) for assessing the severity of damage by complementing pre-disaster images with weather data and the trajectory of the hurricane.
We show that GaLeNet can leverage pre-disaster images in the absence of post-disaster images, preventing substantial delays in decision making.
- Score: 2.007385102792989
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: After a natural disaster, such as a hurricane, millions are left in need of
emergency assistance. To allocate resources optimally, human planners need to
accurately analyze data that can flow in large volumes from several sources.
This motivates the development of multimodal machine learning frameworks that
can integrate multiple data sources and leverage them efficiently. To date, the
research community has mainly focused on unimodal reasoning to provide granular
assessments of the damage. Moreover, previous studies mostly rely on
post-disaster images, which may take several days to become available. In this
work, we propose a multimodal framework (GaLeNet) for assessing the severity of
damage by complementing pre-disaster images with weather data and the
trajectory of the hurricane. Through extensive experiments on data from two
hurricanes, we demonstrate (i) the merits of multimodal approaches compared to
unimodal methods, and (ii) the effectiveness of GaLeNet at fusing various
modalities. Furthermore, we show that GaLeNet can leverage pre-disaster images
in the absence of post-disaster images, preventing substantial delays in
decision making.
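The abstract describes fusing pre-disaster imagery with weather data and the hurricane trajectory, but does not spell out the architecture. As an illustration only, here is a minimal late-fusion sketch in NumPy: each modality is projected into a shared space, the projections are concatenated, and a classifier head predicts a damage-severity distribution. All names, dimensions, and the number of severity classes are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Randomly initialised linear layer (weights, bias) -- a stand-in for a trained one."""
    return rng.normal(scale=0.1, size=(in_dim, out_dim)), np.zeros(out_dim)

def forward(x, layer):
    w, b = layer
    return np.maximum(x @ w + b, 0.0)  # linear projection + ReLU

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical per-modality encoders projecting into a shared 32-d space.
img_proj = linear(512, 32)    # pre-disaster image embedding (e.g. from a CNN backbone)
weather_proj = linear(8, 32)  # weather features (wind speed, pressure, ...)
track_proj = linear(6, 32)    # hurricane-trajectory features

head = linear(96, 4)          # classifier over 4 assumed severity classes

def predict_severity(img_emb, weather, track):
    # Late fusion: encode each modality separately, then concatenate.
    fused = np.concatenate([
        forward(img_emb, img_proj),
        forward(weather, weather_proj),
        forward(track, track_proj),
    ], axis=-1)
    w, b = head
    return softmax(fused @ w + b)

probs = predict_severity(rng.normal(size=512), rng.normal(size=8), rng.normal(size=6))
```

With trained weights, `probs` would be a per-building (or per-tile) severity distribution; here it only demonstrates the data flow of concatenation-based fusion.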
Related papers
- Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector [97.92369017531038]
We build a new laRge-scale Adversarial images dataset with Diverse hArmful Responses (RADAR).
We then develop a novel iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of Vision-Language Models (VLMs) to detect adversarial images among benign ones in the input.
arXiv Detail & Related papers (2024-10-30T10:33:10Z)
- Inter-slice Super-resolution of Magnetic Resonance Images by Pre-training and Self-supervised Fine-tuning [49.197385954021456]
In clinical practice, 2D magnetic resonance (MR) sequences are widely adopted. While individual 2D slices can be stacked to form a 3D volume, the relatively large slice spacing can pose challenges for visualization and subsequent analysis tasks.
To reduce slice spacing, deep-learning-based super-resolution techniques are widely investigated.
Most current solutions require a substantial number of paired high-resolution and low-resolution images for supervised training, which are typically unavailable in real-world scenarios.
arXiv Detail & Related papers (2024-06-10T02:20:26Z)
- Robust Disaster Assessment from Aerial Imagery Using Text-to-Image Synthetic Data [66.49494950674402]
We leverage emerging text-to-image generative models in creating large-scale synthetic supervision for the task of damage assessment from aerial images.
We build an efficient and easily scalable pipeline to generate thousands of post-disaster images from low-resource domains.
We validate the strength of our proposed framework under cross-geography domain transfer setting from xBD and SKAI images in both single-source and multi-source settings.
arXiv Detail & Related papers (2024-05-22T16:07:05Z)
- A Multi-constraint and Multi-objective Allocation Model for Emergency Rescue in IoT Environment [3.8572535126902676]
We have developed the Multi-Objective Shuffled Grey Wolf Frog Leaping Model (MSGWFLM).
This multi-objective resource allocation model has been rigorously tested against 28 diverse challenges.
Its effectiveness is particularly notable in complex, multi-cycle emergency rescue scenarios.
arXiv Detail & Related papers (2024-03-15T13:42:00Z)
- CrisisMatch: Semi-Supervised Few-Shot Learning for Fine-Grained Disaster Tweet Classification [51.58605842457186]
We present a fine-grained disaster tweet classification model under the semi-supervised, few-shot learning setting.
Our model, CrisisMatch, effectively classifies tweets into fine-grained classes of interest using few labeled data and large amounts of unlabeled data.
arXiv Detail & Related papers (2023-10-23T07:01:09Z)
- Deep Learning and Image Super-Resolution-Guided Beam and Power Allocation for mmWave Networks [80.37827344656048]
We develop a deep learning (DL)-guided hybrid beam and power allocation approach for millimeter-wave (mmWave) networks.
We exploit the synergy of supervised learning and super-resolution technology to enable low-overhead beam and power allocation.
arXiv Detail & Related papers (2023-05-08T05:40:54Z)
- A Machine learning approach for rapid disaster response based on multi-modal data. The case of housing & shelter needs [0.0]
One of the most immediate needs of people affected by a disaster is finding shelter.
This paper proposes a machine learning workflow that aims to fuse and rapidly analyse multimodal data.
Based on a database of 19 characteristics for more than 200 disasters worldwide, a fusion approach at the decision level was used.
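Decision-level fusion, as mentioned above, combines the outputs of independently trained models rather than their input features. A generic majority-vote sketch (the voters and class labels here are hypothetical, not the paper's actual classifiers):

```python
from collections import Counter

def decision_level_fusion(predictions):
    """Majority vote over per-source class predictions; ties broken by first occurrence."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical unimodal classifiers voting on a shelter-need level.
votes = ["high", "high", "medium"]
fused = decision_level_fusion(votes)  # -> "high"
```

The appeal of decision-level fusion is that each source can use its own model and feature space; only the final labels need a common vocabulary.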
arXiv Detail & Related papers (2021-07-29T18:22:34Z)
- Learning from Multimodal and Multitemporal Earth Observation Data for Building Damage Mapping [17.324397643429638]
We have developed a global multisensor and multitemporal dataset for building damage mapping.
The global dataset contains high-resolution optical imagery and high-to-moderate-resolution multiband SAR data.
We defined a damage mapping framework for the semantic segmentation of damaged buildings based on a deep convolutional neural network algorithm.
arXiv Detail & Related papers (2020-09-14T05:04:19Z)
- Improving Emergency Response during Hurricane Season using Computer Vision [0.06882042556551608]
We have developed a framework for crisis response and management that incorporates the latest technologies in computer vision (CV), inland flood prediction, damage assessment and data visualization.
Our computer-vision model analyzes spaceborne and airborne imagery to detect relevant features during and after a natural disaster.
We have designed an ensemble of models to identify features including water, roads, buildings, and vegetation from the imagery.
arXiv Detail & Related papers (2020-08-17T15:42:02Z)
- From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z)
- Deep Learning-based Aerial Image Segmentation with Open Data for Disaster Impact Assessment [11.355723874379317]
A framework utilising segmentation neural networks is proposed to identify impacted areas and accessible roads in post-disaster scenarios.
The effectiveness of pretraining with ImageNet on the task of aerial image segmentation has been analysed.
Experiments on data from the 2018 tsunami that struck Palu, Indonesia show the effectiveness of the proposed framework.
arXiv Detail & Related papers (2020-06-10T00:19:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.