Multi-Stakeholder Disaster Insights from Social Media Using Large Language Models
- URL: http://arxiv.org/abs/2504.00046v2
- Date: Thu, 17 Apr 2025 11:29:06 GMT
- Title: Multi-Stakeholder Disaster Insights from Social Media Using Large Language Models
- Authors: Loris Belcastro, Cristian Cosentino, Fabrizio Marozzo, Merve Gündüz-Cüre, Sule Öztürk-Birim,
- Abstract summary: Social media has emerged as a primary channel for users to promptly share feedback and issues during disasters and emergencies. This paper presents a methodology that leverages the capabilities of LLMs to enhance disaster response and management. Our approach combines classification techniques with generative AI to bridge the gap between raw user feedback and stakeholder-specific reports.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, social media has emerged as a primary channel for users to promptly share feedback and issues during disasters and emergencies, playing a key role in crisis management. While significant progress has been made in collecting and analyzing social media content, there remains a pressing need to enhance the automation, aggregation, and customization of this data to deliver actionable insights tailored to diverse stakeholders, including the press, police, EMS, and firefighters. This effort is essential for improving the coordination of activities such as relief efforts, resource distribution, and media communication. This paper presents a methodology that leverages the capabilities of LLMs to enhance disaster response and management. Our approach combines classification techniques with generative AI to bridge the gap between raw user feedback and stakeholder-specific reports. Social media posts shared during catastrophic events are analyzed with a focus on user-reported issues, service interruptions, and encountered challenges. We employ full-spectrum LLMs, using analytical models like BERT for precise, multi-dimensional classification of content type, sentiment, emotion, geolocation, and topic. Generative models such as ChatGPT are then used to produce human-readable, informative reports tailored to distinct audiences, synthesizing insights derived from detailed classifications. We compare standard approaches, which analyze posts directly using prompts in ChatGPT, to our advanced method, which incorporates multi-dimensional classification, sub-event selection, and tailored report generation. Our methodology demonstrates superior performance in both quantitative metrics, such as text coherence scores and latent representations, and qualitative assessments by automated tools and field experts, delivering precise insights for diverse disaster response stakeholders.
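The abstract describes a two-stage pipeline: multi-dimensional classification of posts, followed by generative synthesis of stakeholder-specific reports. The sketch below illustrates that structure only; the keyword classifier and template writer are stand-ins for the BERT classifiers and ChatGPT generation used in the paper, and all category and stakeholder names are assumptions for demonstration.

```python
def classify_post(post: str) -> dict:
    """Stand-in for multi-dimensional BERT classification
    (topic and sentiment shown; the paper also covers content type,
    emotion, and geolocation)."""
    text = post.lower()
    topic = "power_outage" if "power" in text or "electricity" in text else "other"
    sentiment = "negative" if any(w in text for w in ("no", "lost", "down")) else "neutral"
    return {"text": post, "topic": topic, "sentiment": sentiment}

def generate_report(classified: list, stakeholder: str) -> str:
    """Stand-in for the generative step: aggregate classified posts
    into a stakeholder-specific summary."""
    relevant = [c for c in classified if c["topic"] == "power_outage"]
    return (f"Report for {stakeholder}: {len(relevant)} of {len(classified)} "
            f"analyzed posts mention power outages.")

posts = ["Power is down across the east side", "Traffic is slow downtown"]
classified = [classify_post(p) for p in posts]
report = generate_report(classified, "EMS")
print(report)
```

The design point the paper argues is that running generation over structured classifications, rather than over raw posts in a single prompt, yields more coherent and targeted reports.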
Related papers
- Transit Pulse: Utilizing Social Media as a Source for Customer Feedback and Information Extraction with Large Language Model [12.6020349733674]
We propose a novel approach to extracting and analyzing transit-related information.
Our method employs Large Language Models (LLM), specifically Llama 3, for a streamlined analysis.
Our results demonstrate the potential of LLMs to transform social media data analysis in the public transit domain.
arXiv Detail & Related papers (2024-10-19T07:08:40Z) - A Social Context-aware Graph-based Multimodal Attentive Learning Framework for Disaster Content Classification during Emergencies [0.0]
CrisisSpot is a method that captures complex relationships between textual and visual modalities.
IDEA captures both harmonious and contrasting patterns within the data to enhance multimodal interactions.
CrisisSpot achieved an average F1-score gain of 9.45% and 5.01% compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-10-11T13:51:46Z) - CrisisSense-LLM: Instruction Fine-Tuned Large Language Model for Multi-label Social Media Text Classification in Disaster Informatics [49.2719253711215]
This study introduces a novel approach to disaster text classification by enhancing a pre-trained Large Language Model (LLM). Our methodology involves creating a comprehensive instruction dataset from disaster-related tweets, which is then used to fine-tune an open-source LLM. This fine-tuned model can classify multiple aspects of disaster-related information simultaneously, such as the type of event, informativeness, and involvement of human aid.
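The summary above describes turning labeled tweets into instruction-tuning records. A minimal sketch of that conversion follows; the field names, label set, and record schema are assumptions for illustration, not the actual CrisisSense-LLM dataset format.

```python
def to_instruction_record(tweet: str, labels: dict) -> dict:
    """Pair a tweet with a multi-label instruction/response for fine-tuning."""
    instruction = ("Classify the tweet's event type, informativeness, "
                   "and human-aid involvement.")
    # Serialize labels in a stable order so fine-tuning targets are consistent.
    response = ", ".join(f"{k}: {v}" for k, v in sorted(labels.items()))
    return {"instruction": instruction, "input": tweet, "output": response}

record = to_instruction_record(
    "Bridge collapsed on Route 9, rescue teams needed",
    {"event_type": "infrastructure_damage",
     "informative": "yes",
     "human_aid": "requested"},
)
print(record["output"])
```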
arXiv Detail & Related papers (2024-06-16T23:01:10Z) - Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z) - ManiTweet: A New Benchmark for Identifying Manipulation of News on Social Media [74.93847489218008]
We present a novel task, identifying manipulation of news on social media, which aims to detect manipulation in social media posts and identify manipulated or inserted information.
To study this task, we have proposed a data collection schema and curated a dataset called ManiTweet, consisting of 3.6K pairs of tweets and corresponding articles.
Our analysis demonstrates that this task is highly challenging, with large language models (LLMs) yielding unsatisfactory performance.
arXiv Detail & Related papers (2023-05-23T16:40:07Z) - Transformer-based Multi-task Learning for Disaster Tweet Categorisation [2.9112649816695204]
Social media has enabled people to circulate information in a timely fashion, thus motivating people to post messages seeking help during crisis situations.
These messages can contribute to the situational awareness of emergency responders, who need them to be categorised according to information type.
We introduce a transformer-based multi-task learning (MTL) technique for classifying information types and estimating the priority of these messages.
arXiv Detail & Related papers (2021-10-15T11:13:46Z) - Author Clustering and Topic Estimation for Short Texts [69.54017251622211]
We propose a novel model that expands on the Latent Dirichlet Allocation by modeling strong dependence among the words in the same document.
We also simultaneously cluster users, removing the need for post-hoc cluster estimation.
Our method performs as well as or better than traditional approaches to problems arising in short text.
arXiv Detail & Related papers (2021-06-15T20:55:55Z) - Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and Context-Aware Auto-Encoders [59.038157066874255]
We propose a novel framework called RankAE to perform chat summarization without employing manually labeled data.
RankAE consists of a topic-oriented ranking strategy that selects topic utterances according to centrality and diversity simultaneously.
A denoising auto-encoder is designed to generate succinct but context-informative summaries based on the selected utterances.
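The ranking strategy above selects utterances that are central to the chat yet mutually diverse. A greedy MMR-style sketch of that idea follows; word-overlap similarity stands in for the learned representations RankAE actually uses, and the trade-off weight is an assumed value.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard word overlap as a simple similarity stand-in."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def select_utterances(utterances, k=2, lam=0.7):
    """Greedily pick k utterances balancing centrality (similarity to the
    whole chat) against redundancy with already-selected utterances."""
    chat = " ".join(utterances)
    selected, candidates = [], list(utterances)
    while candidates and len(selected) < k:
        def score(u):
            central = similarity(u, chat)
            redundant = max((similarity(u, s) for s in selected), default=0.0)
            return lam * central - (1 - lam) * redundant
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

chat = ["server is down again", "the server is down", "lunch was great today"]
picked = select_utterances(chat, k=2)
```

With these inputs the diversity penalty causes the second pick to skip the near-duplicate "the server is down" in favour of the unrelated utterance, which is the behaviour the topic-oriented ranking is after.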
arXiv Detail & Related papers (2020-12-14T07:31:17Z) - Multimodal Categorization of Crisis Events in Social Media [81.07061295887172]
We present a new multimodal fusion method that leverages both images and texts as input.
In particular, we introduce a cross-attention module that can filter uninformative and misleading components from weak modalities.
We show that our method outperforms the unimodal approaches and strong multimodal baselines by a large margin on three crisis-related tasks.
arXiv Detail & Related papers (2020-04-10T06:31:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.