Detecting Actionable Requests and Offers on Social Media During Crises Using LLMs
- URL: http://arxiv.org/abs/2504.16144v1
- Date: Tue, 22 Apr 2025 08:34:58 GMT
- Title: Detecting Actionable Requests and Offers on Social Media During Crises Using LLMs
- Authors: Ahmed El Fekih Zguir, Ferda Ofli, Muhammad Imran
- Abstract summary: We propose a fine-grained hierarchical taxonomy to organize crisis-related information about requests and offers into three critical dimensions: supplies, emergency personnel, and actions. We introduce Query-Specific Few-shot Learning (QSF Learning) that retrieves class-specific labeled examples from an embedding database to enhance the model's performance in detecting and classifying posts.
- Score: 8.17728833322492
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural disasters often result in a surge of social media activity, including requests for assistance, offers of help, sentiments, and general updates. To enable humanitarian organizations to respond more efficiently, we propose a fine-grained hierarchical taxonomy to systematically organize crisis-related information about requests and offers into three critical dimensions: supplies, emergency personnel, and actions. Leveraging the capabilities of Large Language Models (LLMs), we introduce Query-Specific Few-shot Learning (QSF Learning) that retrieves class-specific labeled examples from an embedding database to enhance the model's performance in detecting and classifying posts. Beyond classification, we assess the actionability of messages to prioritize posts requiring immediate attention. Extensive experiments demonstrate that our approach outperforms baseline prompting strategies, effectively identifying and prioritizing actionable requests and offers.
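Below is a minimal sketch of the retrieval step that QSF Learning describes: class-specific labeled examples are fetched from an embedding database by similarity to the incoming post and assembled into a few-shot prompt. The encoder choice, label names, and prompt wording are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of query-specific few-shot retrieval: for each candidate class, pull the
# labeled posts most similar to the query and place them in the prompt.
# Model, labels, and prompt format are illustrative, not the authors' exact setup.
import numpy as np
from sentence_transformers import SentenceTransformer  # any sentence encoder works

encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Toy "embedding database": labeled posts grouped by class.
labeled_posts = [
    {"text": "We urgently need bottled water and blankets in the shelter.", "label": "request_supplies"},
    {"text": "Our clinic can send two nurses to the affected area tomorrow.", "label": "offer_personnel"},
    {"text": "Volunteers available to clear debris this weekend.", "label": "offer_actions"},
]
db_vectors = encoder.encode([p["text"] for p in labeled_posts], normalize_embeddings=True)

def retrieve_examples(query: str, label: str, k: int = 2):
    """Return the k labeled posts of the given class most similar to the query."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    idx = [i for i, p in enumerate(labeled_posts) if p["label"] == label]
    sims = db_vectors[idx] @ q  # cosine similarity (vectors are normalized)
    top = np.argsort(-sims)[:k]
    return [labeled_posts[idx[i]] for i in top]

def build_prompt(query: str, labels: list[str]) -> str:
    """Assemble a few-shot prompt with class-specific examples for each candidate label."""
    parts = ["Classify the crisis-related post into one of: " + ", ".join(labels) + "."]
    for label in labels:
        for ex in retrieve_examples(query, label):
            parts.append(f'Post: "{ex["text"]}"\nLabel: {ex["label"]}')
    parts.append(f'Post: "{query}"\nLabel:')
    return "\n\n".join(parts)

print(build_prompt("Does anyone have spare tents for families near the river?",
                   ["request_supplies", "offer_personnel", "offer_actions"]))
```

In practice the examples would be retrieved per taxonomy class (supplies, emergency personnel, actions; requests vs. offers) so that each candidate label is represented in the prompt by its nearest annotated posts.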
Related papers
- Multi-Stakeholder Disaster Insights from Social Media Using Large Language Models [1.6777183511743472]
Social media has emerged as a primary channel for users to promptly share feedback and issues during disasters and emergencies. This paper presents a methodology that leverages the capabilities of LLMs to enhance disaster response and management. Our approach combines classification techniques with generative AI to bridge the gap between raw user feedback and stakeholder-specific reports.
arXiv Detail & Related papers (2025-03-30T22:53:52Z) - Unsupervised Query Routing for Retrieval Augmented Generation [64.47987041500966]
We introduce a novel unsupervised method that constructs the "upper-bound" response to evaluate the quality of retrieval-augmented responses.
This evaluation enables the decision of the most suitable search engine for a given query.
By eliminating manual annotations, our approach can automatically process large-scale real user queries and create training data.
arXiv Detail & Related papers (2025-01-14T02:27:06Z) - CrisisSense-LLM: Instruction Fine-Tuned Large Language Model for Multi-label Social Media Text Classification in Disaster Informatics [49.2719253711215]
This study introduces a novel approach to disaster text classification by enhancing a pre-trained Large Language Model (LLM). Our methodology involves creating a comprehensive instruction dataset from disaster-related tweets, which is then used to fine-tune an open-source LLM. This fine-tuned model can classify multiple aspects of disaster-related information simultaneously, such as the type of event, informativeness, and involvement of human aid.
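As a rough illustration of what such instruction data could look like (the template, field names, and labels below are assumptions for illustration, not taken from the paper), each annotated tweet can be turned into an instruction/response pair covering several aspects at once:

```python
# Hypothetical construction of instruction-tuning records from annotated disaster
# tweets; the aspects (event type, informativeness, human aid) follow the summary
# above, but the exact labels and template are illustrative assumptions.
import json

INSTRUCTION = (
    "Classify the tweet along three aspects: event type, informativeness, "
    "and whether human aid is involved. Answer as JSON."
)

def to_instruction_record(tweet: str, event_type: str, informative: bool, human_aid: bool) -> dict:
    """Build one supervised fine-tuning example in a generic instruction format."""
    target = {"event_type": event_type, "informative": informative, "human_aid": human_aid}
    return {
        "instruction": INSTRUCTION,
        "input": tweet,
        "output": json.dumps(target),
    }

record = to_instruction_record(
    "Bridge on Route 9 collapsed, several families stranded, rescue teams requested.",
    event_type="flood", informative=True, human_aid=True,
)
print(json.dumps(record, indent=2))
```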
arXiv Detail & Related papers (2024-06-16T23:01:10Z) - Monitoring Critical Infrastructure Facilities During Disasters Using Large Language Models [8.17728833322492]
Critical Infrastructure Facilities (CIFs) are vital for the functioning of a community, especially during large-scale emergencies.
In this paper, we explore a potential application of Large Language Models (LLMs) to monitor the status of CIFs affected by natural disasters through information disseminated in social media networks.
We analyze social media data from two disaster events in two different countries to identify reported impacts to CIFs as well as their impact severity and operational status.
arXiv Detail & Related papers (2024-04-18T19:41:05Z) - Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL [62.824464372594576]
We aim to enhance arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization.
We identify a previously overlooked objective of query dependency in such optimization.
We introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data.
arXiv Detail & Related papers (2023-09-13T01:12:52Z) - On the Role of Attention in Prompt-tuning [90.97555030446563]
We study prompt-tuning for one-layer attention architectures in a contextual mixture-model setting.
We show that softmax-prompt-attention is provably more expressive than softmax-self-attention and linear-prompt-attention.
We also provide experiments that verify our theoretical insights on real datasets and demonstrate how prompt-tuning enables the model to attend to context-relevant information.
arXiv Detail & Related papers (2023-06-06T06:23:38Z) - Coping with low data availability for social media crisis message categorisation [3.0255457622022495]
This thesis focuses on addressing the challenge of low data availability when categorising crisis messages for emergency response.
It first presents domain adaptation as a solution to this problem, in which a categorisation model is learned from annotated data of past crisis events.
In many-to-many adaptation, where the model is trained on multiple past events and adapted to multiple ongoing events, a multi-task learning approach is proposed.
arXiv Detail & Related papers (2023-05-26T19:08:24Z) - Socratic Pretraining: Question-Driven Pretraining for Controllable Summarization [89.04537372465612]
Socratic pretraining is a question-driven, unsupervised pretraining objective designed to improve controllability in summarization tasks.
Our results show that Socratic pretraining cuts task-specific labeled data requirements in half.
arXiv Detail & Related papers (2022-12-20T17:27:10Z) - Transformer-based Multi-task Learning for Disaster Tweet Categorisation [2.9112649816695204]
Social media has enabled people to circulate information in a timely fashion, motivating those affected to post messages seeking help during crisis situations.
These messages can contribute to the situational awareness of emergency responders, who need them to be categorised by information type.
We introduce a transformer-based multi-task learning (MTL) technique for classifying information types and estimating the priority of these messages.
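A minimal sketch of one way such a multi-task model could be assembled: a shared transformer encoder feeding one head for information-type classification and one for priority estimation. The backbone, head sizes, and loss choices below are assumptions rather than the paper's exact configuration.

```python
# Illustrative multi-task head on a shared transformer encoder; the backbone,
# number of information types, and loss choices are assumptions for this sketch.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskTweetModel(nn.Module):
    def __init__(self, backbone: str = "bert-base-uncased", num_info_types: int = 25):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.info_type_head = nn.Linear(hidden, num_info_types)  # multi-label logits
        self.priority_head = nn.Linear(hidden, 1)                # scalar priority score

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # [CLS] token representation
        return self.info_type_head(pooled), self.priority_head(pooled).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskTweetModel()
batch = tokenizer(["Need medical supplies at the evacuation centre"],
                  return_tensors="pt", padding=True, truncation=True)
info_logits, priority = model(batch["input_ids"], batch["attention_mask"])
# Training would combine a multi-label loss on info_logits (e.g. BCEWithLogitsLoss)
# with a regression or ranking loss on the priority score.
```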
arXiv Detail & Related papers (2021-10-15T11:13:46Z) - Predicting Themes within Complex Unstructured Texts: A Case Study on Safeguarding Reports [66.39150945184683]
We focus on the problem of automatically identifying the main themes in a safeguarding report using supervised classification approaches.
Our results show the potential of deep learning models to simulate subject-expert behaviour even for complex tasks with limited labelled data.
arXiv Detail & Related papers (2020-10-27T19:48:23Z) - Unsupervised and Interpretable Domain Adaptation to Rapidly Filter Tweets for Emergency Services [18.57009530004948]
We present a novel method to classify relevant tweets during an ongoing crisis using the publicly available dataset of TREC incident streams.
We use dedicated attention layers for each task to provide model interpretability, which is critical for real-world applications.
We show a practical implication of our work by providing a use-case for the COVID-19 pandemic.
arXiv Detail & Related papers (2020-03-04T06:40:14Z)