CReMa: Crisis Response through Computational Identification and Matching of Cross-Lingual Requests and Offers Shared on Social Media
- URL: http://arxiv.org/abs/2405.11897v2
- Date: Thu, 29 Aug 2024 23:45:48 GMT
- Title: CReMa: Crisis Response through Computational Identification and Matching of Cross-Lingual Requests and Offers Shared on Social Media
- Authors: Rabindra Lamsal, Maria Rodriguez Read, Shanika Karunasekera, Muhammad Imran
- Abstract summary: In times of crisis, social media platforms play a crucial role in facilitating communication and coordinating resources.
We propose CReMa (Crisis Response Matcher), a systematic approach that integrates textual, temporal, and spatial features.
We introduce a novel multi-lingual dataset simulating help-seeking and offering assistance on social media in 16 languages.
- Score: 5.384787836425144
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: During times of crisis, social media platforms play a crucial role in facilitating communication and coordinating resources. In the midst of chaos and uncertainty, communities often rely on these platforms to share urgent pleas for help, extend support, and organize relief efforts. However, the overwhelming volume of conversations during such periods can escalate to unprecedented levels, necessitating the automated identification and matching of requests and offers to streamline relief operations. Additionally, there is a notable absence of studies conducted in multi-lingual settings, despite the fact that any geographical area can have a diverse linguistic population. Therefore, we propose CReMa (Crisis Response Matcher), a systematic approach that integrates textual, temporal, and spatial features to address the challenges of effectively identifying and matching requests and offers on social media platforms during emergencies. Our approach utilizes a crisis-specific pre-trained model and a multi-lingual embedding space. We emulate human decision-making to compute temporal and spatial features and non-linearly weigh the textual features. The results from our experiments are promising, outperforming strong baselines. Additionally, we introduce a novel multi-lingual dataset simulating help-seeking and offering assistance on social media in 16 languages and conduct comprehensive cross-lingual experiments. Furthermore, we analyze a million-scale geotagged global dataset to understand patterns in seeking help and offering assistance on social media. Overall, these contributions advance the field of crisis informatics and provide benchmarks for future research in the area.
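The abstract describes the matching idea only at a high level. As a rough illustration, the sketch below combines cross-lingual textual similarity with exponential temporal and spatial discounting and a non-linear weighting of the text score. The function names, decay parameters (tau_hours, sigma_km), and the exponent gamma are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))

def match_score(req_emb, off_emb, dt_hours, dist_km,
                tau_hours=48.0, sigma_km=50.0, gamma=2.0):
    """Combine textual, temporal, and spatial signals into one matching score.

    req_emb, off_emb : multilingual sentence embeddings (1-D numpy arrays)
    dt_hours         : absolute time gap between request and offer
    dist_km          : geodesic distance between the two posts
    tau_hours, sigma_km, gamma : illustrative hyper-parameters, not from the paper
    """
    # Textual similarity in a shared cross-lingual embedding space.
    cos = float(np.dot(req_emb, off_emb) /
                (np.linalg.norm(req_emb) * np.linalg.norm(off_emb) + 1e-9))
    text_score = max(cos, 0.0) ** gamma          # non-linear emphasis on high similarity
    temporal = np.exp(-dt_hours / tau_hours)     # discount offers far apart in time
    spatial = np.exp(-dist_km / sigma_km)        # discount offers far apart in space
    return text_score * temporal * spatial

# Example: two random embeddings, 6 hours apart, roughly 11 km apart.
rng = np.random.default_rng(0)
req, off = rng.normal(size=384), rng.normal(size=384)
print(match_score(req, off, dt_hours=6.0,
                  dist_km=haversine_km(40.71, -74.00, 40.80, -73.95)))
```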
Related papers
- Hypergame Theory for Decentralized Resource Allocation in Multi-user Semantic Communications [60.63472821600567]
A novel framework for decentralized computing and communication resource allocation in multi-user semantic communication (SC) systems is proposed.
The challenge of efficiently allocating communication and computing resources is addressed through Stackelberg hypergame theory.
Simulation results show that the proposed Stackelberg hypergame leads to efficient usage of communication and computing resources.
arXiv Detail & Related papers (2024-09-26T15:55:59Z) - CrisisSense-LLM: Instruction Fine-Tuned Large Language Model for Multi-label Social Media Text Classification in Disaster Informatics [49.2719253711215]
This study introduces a novel approach to disaster text classification by enhancing a pre-trained Large Language Model (LLM).
Our methodology involves creating a comprehensive instruction dataset from disaster-related tweets, which is then used to fine-tune an open-source LLM.
This fine-tuned model can classify multiple aspects of disaster-related information simultaneously, such as the type of event, informativeness, and involvement of human aid.
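The summary above does not give the paper's actual prompt template; the snippet below is a hypothetical example of what a single instruction-tuning record for multi-label disaster tweet classification could look like, with illustrative field names and label values.

```python
import json

# Hypothetical instruction-tuning record; the schema and label sets are
# illustrative assumptions, not the paper's actual format.
record = {
    "instruction": ("Classify the tweet along three aspects: event type, "
                    "informativeness, and whether human aid is involved."),
    "input": "Flood waters rising near the river bridge, families need boats to evacuate.",
    "output": json.dumps({
        "event_type": "flood",
        "informativeness": "informative",
        "human_aid": "rescue_needed",
    }),
}
print(json.dumps(record, indent=2))
```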
arXiv Detail & Related papers (2024-06-16T23:01:10Z) - Against The Achilles' Heel: A Survey on Red Teaming for Generative Models [60.21722603260243]
The field of red teaming is experiencing fast-paced growth, which highlights the need for a comprehensive organization covering the entire pipeline.
Our extensive survey, which examines over 120 papers, introduces a taxonomy of fine-grained attack strategies grounded in the inherent capabilities of language models.
We have also developed a "searcher" framework that unifies various automatic red-teaming approaches.
arXiv Detail & Related papers (2024-03-31T09:50:39Z) - Coping with low data availability for social media crisis message categorisation [3.0255457622022495]
This thesis focuses on addressing the challenge of low data availability when categorising crisis messages for emergency response.
It first presents domain adaptation as a solution for this problem, which involves learning a categorisation model from annotated data from past crisis events.
In many-to-many adaptation, where the model is trained on multiple past events and adapted to multiple ongoing events, a multi-task learning approach is proposed.
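As a rough sketch of what such a many-to-many multi-task setup can look like (a shared encoder with one classification head per source event), the snippet below uses illustrative dimensions and event names; it is not the thesis's actual architecture.

```python
import torch
import torch.nn as nn

class MultiEventClassifier(nn.Module):
    """Shared text encoder with one classification head per past crisis event.
    A minimal multi-task sketch with invented sizes and event names."""
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=256,
                 events=("quake_2019", "flood_2021"), n_classes=5):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, emb_dim)            # shared
        self.encoder = nn.Sequential(nn.Linear(emb_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleDict(                                  # task-specific
            {e: nn.Linear(hidden, n_classes) for e in events})

    def forward(self, token_ids, event):
        h = self.encoder(self.embed(token_ids))
        return self.heads[event](h)

model = MultiEventClassifier()
tokens = torch.randint(0, 30000, (4, 20))    # batch of 4 toy messages
logits = model(tokens, event="flood_2021")   # predictions for one source event
print(logits.shape)                          # torch.Size([4, 5])
```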
arXiv Detail & Related papers (2023-05-26T19:08:24Z) - CrisisLTLSum: A Benchmark for Local Crisis Event Timeline Extraction and Summarization [62.77066949111921]
This paper presents CrisisLTLSum, the largest dataset of local crisis event timelines available to date.
CrisisLTLSum contains 1,000 crisis event timelines across four domains: wildfires, local fires, traffic, and storms.
Our initial experiments indicate a significant gap between the performance of strong baselines and human performance on both tasks.
arXiv Detail & Related papers (2022-10-25T17:32:40Z) - Cross-Lingual and Cross-Domain Crisis Classification for Low-Resource Scenarios [4.147346416230273]
We study the task of automatically classifying messages related to crisis events by leveraging cross-language and cross-domain labeled data.
Our goal is to make use of labeled data from high-resource languages to classify messages from other (low-resource) languages and/or of new (previously unseen) types of crisis situations.
Our empirical findings show that it is indeed possible to leverage data from crisis events in English to classify the same type of event in other languages, such as Spanish and Italian.
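A minimal sketch of this transfer setting, assuming a multilingual sentence encoder (here the publicly available paraphrase-multilingual-MiniLM-L12-v2 model) and a simple linear classifier; the example texts and labels are invented for illustration only.

```python
from sentence_transformers import SentenceTransformer   # multilingual encoder
from sklearn.linear_model import LogisticRegression

# Toy English training data and Spanish test data (illustrative, not the paper's corpora).
en_texts = ["Road blocked by flood water", "Concert tickets on sale tonight"]
en_labels = ["crisis", "not_crisis"]
es_texts = ["La carretera está bloqueada por la inundación"]

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
clf = LogisticRegression(max_iter=1000).fit(encoder.encode(en_texts), en_labels)
print(clf.predict(encoder.encode(es_texts)))  # a crisis-related prediction is expected
```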
arXiv Detail & Related papers (2022-09-05T20:57:23Z) - Analyzing social media with crowdsourcing in Crowd4SDG [1.1403672224109254]
This study presents an approach that provides flexible support for analyzing social media, particularly during emergencies.
The focus is on analyzing the images and text contained in social media posts, using a set of automatic data processing tools for filtering, classifying, and geolocating content.
Such support includes feedback and suggestions for configuring the automated tools, as well as crowdsourcing to gather input from citizens.
arXiv Detail & Related papers (2022-08-04T14:42:20Z) - Cross-Lingual Query-Based Summarization of Crisis-Related Social Media: An Abstractive Approach Using Transformers [3.042890194004583]
This work proposes a cross-lingual method for retrieving and summarizing crisis-relevant information from social media postings.
We describe a uniform way of expressing various information needs through structured queries and a way of creating summaries.
arXiv Detail & Related papers (2022-04-21T16:07:52Z) - Clustering of Social Media Messages for Humanitarian Aid Response during Crisis [47.187609203210705]
We show that recent advances in Deep Learning and Natural Language Processing outperform prior approaches for the task of classifying informativeness.
We extend these methods to two sub-tasks of informativeness and find that the Deep Learning methods are effective here as well.
arXiv Detail & Related papers (2020-07-23T02:18:05Z) - Multimodal Categorization of Crisis Events in Social Media [81.07061295887172]
We present a new multimodal fusion method that leverages both images and texts as input.
In particular, we introduce a cross-attention module that can filter uninformative and misleading components from weak modalities.
We show that our method outperforms the unimodal approaches and strong multimodal baselines by a large margin on three crisis-related tasks.
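The summary does not detail the module itself; the sketch below shows one generic way to let text tokens attend over image-region features and gate the attended signal, which illustrates the general idea of cross-attention fusion rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Text tokens attend over image region features; a learned gate can then
    down-weight an uninformative visual modality. A rough sketch only."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Sequential(nn.Linear(2 * dim, 1), nn.Sigmoid())

    def forward(self, text, image):
        # text: (B, T, dim) token features; image: (B, R, dim) region features
        attended, _ = self.attn(query=text, key=image, value=image)
        g = self.gate(torch.cat([text, attended], dim=-1))  # per-token gate
        return text + g * attended                          # fused representation

fusion = CrossAttentionFusion()
out = fusion(torch.randn(2, 12, 256), torch.randn(2, 36, 256))
print(out.shape)  # torch.Size([2, 12, 256])
```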
arXiv Detail & Related papers (2020-04-10T06:31:30Z)