Harnessing Large Language Models for Disaster Management: A Survey
- URL: http://arxiv.org/abs/2501.06932v1
- Date: Sun, 12 Jan 2025 21:00:50 GMT
- Title: Harnessing Large Language Models for Disaster Management: A Survey
- Authors: Zhenyu Lei, Yushun Dong, Weiyu Li, Rong Ding, Qi Wang, Jundong Li
- Abstract summary: Large language models (LLMs) have revolutionized scientific research with their exceptional capabilities and transformed various fields.
This study aims to guide the professional community in developing advanced LLMs for disaster management and to enhance resilience against natural disasters.
- Score: 57.00123968209682
- Abstract: Large language models (LLMs) have revolutionized scientific research with their exceptional capabilities and transformed various fields. Among their practical applications, LLMs have played a crucial role in mitigating threats to human life, infrastructure, and the environment. Despite growing research on LLMs for disasters, there remains a lack of systematic review and in-depth analysis of LLMs for natural disaster management. To address this gap, this paper presents a comprehensive survey of existing LLMs in natural disaster management, along with a taxonomy that categorizes existing works by disaster phase and application scenario. By collecting public datasets and identifying key challenges and opportunities, this study aims to guide the professional community in developing advanced LLMs for disaster management and to enhance resilience against natural disasters.
Related papers
- Practical Considerations for Agentic LLM Systems [5.455744338342196]
This paper frames actionable insights and considerations from the research community in the context of established application paradigms.
Namely, we position relevant research findings into four broad categories (Planning, Memory, Tools, and Control Flow) based on common practices in application-focused literature.
arXiv Detail & Related papers (2024-12-05T11:57:49Z)
- Navigating the Risks: A Survey of Security, Privacy, and Ethics Threats in LLM-Based Agents [67.07177243654485]
This survey collects and analyzes the different threats faced by large language models-based agents.
We identify six key features of LLM-based agents, based on which we summarize the current research progress.
We select four representative agents as case studies to analyze the risks they may face in practical use.
arXiv Detail & Related papers (2024-11-14T15:40:04Z)
- Language Agents Meet Causality: Bridging LLMs and Causal World Models [50.79984529172807]
We propose a framework that integrates causal representation learning with large language models.
This framework learns a causal world model, with causal variables linked to natural language expressions.
We evaluate the framework on causal inference and planning tasks across temporal scales and environmental complexities.
arXiv Detail & Related papers (2024-10-25T18:36:37Z)
- CrisisSense-LLM: Instruction Fine-Tuned Large Language Model for Multi-label Social Media Text Classification in Disaster Informatics [49.2719253711215]
This study introduces a novel approach to disaster text classification by enhancing a pre-trained Large Language Model (LLM).
Our methodology involves creating a comprehensive instruction dataset from disaster-related tweets, which is then used to fine-tune an open-source LLM.
This fine-tuned model can classify multiple aspects of disaster-related information simultaneously, such as the type of event, informativeness, and involvement of human aid.
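The pipeline this abstract describes (annotated tweets turned into an instruction dataset, then used to fine-tune an open-source LLM for multi-aspect classification) can be sketched as below. The label fields, instruction wording, and JSON response format are illustrative assumptions, not the actual schema used by CrisisSense-LLM:

```python
import json

# Hypothetical label dimensions, loosely following the aspects named in the
# abstract (event type, informativeness, human-aid involvement).
LABEL_FIELDS = ["event_type", "informative", "human_aid"]

INSTRUCTION = (
    "Classify the disaster-related tweet along three aspects: "
    "event_type, informative (yes/no), and human_aid (yes/no). "
    "Answer as JSON."
)

def to_instruction_example(tweet: str, labels: dict) -> dict:
    """Convert one annotated tweet into an instruction-tuning record."""
    missing = [f for f in LABEL_FIELDS if f not in labels]
    if missing:
        raise ValueError(f"missing label fields: {missing}")
    return {
        "instruction": INSTRUCTION,
        "input": tweet,
        # The response is the serialized multi-label annotation, so a single
        # fine-tuned model can emit all aspects in one generation.
        "output": json.dumps({f: labels[f] for f in LABEL_FIELDS}),
    }

example = to_instruction_example(
    "Bridge on Route 9 collapsed, rescue teams on site.",
    {"event_type": "infrastructure", "informative": "yes", "human_aid": "yes"},
)
print(example["output"])
```

Records in this `instruction`/`input`/`output` shape can then be fed to any standard supervised fine-tuning recipe for an open-source LLM.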
arXiv Detail & Related papers (2024-06-16T23:01:10Z)
- Monitoring Critical Infrastructure Facilities During Disasters Using Large Language Models [8.17728833322492]
Critical Infrastructure Facilities (CIFs) are vital for the functioning of a community, especially during large-scale emergencies.
In this paper, we explore a potential application of Large Language Models (LLMs) to monitor the status of CIFs affected by natural disasters through information disseminated in social media networks.
We analyze social media data from two disaster events in two different countries to identify reported impacts to CIFs as well as their impact severity and operational status.
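As a minimal illustration of this kind of monitoring, the sketch below builds a prompt for a single social-media post and validates the model's structured reply. The field names, severity scale, and status vocabulary are assumptions made for illustration, not those used in the paper:

```python
# Controlled vocabularies for the extracted fields (illustrative assumptions).
SEVERITIES = {"none", "mild", "moderate", "severe", "unknown"}
STATUSES = {"operational", "partially_operational", "non_operational", "unknown"}

def build_prompt(post: str) -> str:
    """Compose an extraction prompt for one social-media post."""
    return (
        "From the post below, report the affected facility, the impact "
        "severity (none/mild/moderate/severe), and the operational status "
        "(operational/partially_operational/non_operational/unknown), "
        "one field per line as 'key: value'.\n\nPost: " + post
    )

def parse_reply(reply: str) -> dict:
    """Parse the model's 'key: value' lines into a dict."""
    fields = {}
    for line in reply.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key.strip().lower()] = value.strip().lower()
    # Fall back to 'unknown' when the model strays from the vocabulary.
    if fields.get("severity") not in SEVERITIES:
        fields["severity"] = "unknown"
    if fields.get("status") not in STATUSES:
        fields["status"] = "unknown"
    return fields

reply = "facility: water treatment plant\nseverity: severe\nstatus: non_operational"
print(parse_reply(reply))
```

In practice `build_prompt` would be sent to an LLM and `parse_reply` applied to its response; validating against a closed vocabulary keeps free-form model output usable for downstream aggregation across posts.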
arXiv Detail & Related papers (2024-04-18T19:41:05Z)
- From Text to Transformation: A Comprehensive Review of Large Language Models' Versatility [4.17610395079782]
This study explores the expanse of Large Language Models (LLMs), such as Generative Pre-Trained Transformer (GPT) and Bidirectional Representations from Transformers (BERT) across varied domains.
Despite their established prowess in Natural Language Processing (NLP), these LLMs have not been systematically examined for their impact on domains such as fitness, and holistic well-being, urban planning, climate modelling and disaster management.
arXiv Detail & Related papers (2024-02-25T16:47:59Z)
- On Catastrophic Inheritance of Large Foundation Models [51.41727422011327]
Large foundation models (LFMs) have achieved impressive performance, yet serious concerns have been raised about their poorly understood, largely uninterpreted capabilities.
We propose to identify a neglected issue deeply rooted in LFMs: Catastrophic Inheritance.
We discuss the challenges behind this issue and propose UIM, a framework to understand the catastrophic inheritance of LFMs from both pre-training and downstream adaptation.
arXiv Detail & Related papers (2024-02-02T21:21:55Z)
- Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems [29.828997665535336]
Large language models (LLMs) have strong capabilities in solving diverse natural language processing tasks.
However, the safety and security issues of LLM systems have become the major obstacle to their widespread application.
This paper proposes a comprehensive taxonomy, which systematically analyzes potential risks associated with each module of an LLM system.
arXiv Detail & Related papers (2024-01-11T09:29:56Z)
- On the Risk of Misinformation Pollution with Large Language Models [127.1107824751703]
We investigate the potential misuse of modern Large Language Models (LLMs) for generating credible-sounding misinformation.
Our study reveals that LLMs can act as effective misinformation generators, leading to a significant degradation in the performance of Open-Domain Question Answering (ODQA) systems.
arXiv Detail & Related papers (2023-05-23T04:10:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.