Large language models in climate and sustainability policy: limits and opportunities
- URL: http://arxiv.org/abs/2502.02191v1
- Date: Tue, 04 Feb 2025 10:13:14 GMT
- Title: Large language models in climate and sustainability policy: limits and opportunities
- Authors: Francesca Larosa, Sergio Hoyas, H. Alberto Conejero, Javier Garcia-Martinez, Francesco Fuso Nerini, Ricardo Vinuesa
- Abstract summary: We apply different NLP techniques, tools and approaches to climate and sustainability documents to derive policy-relevant and actionable measures. We find that the use of LLMs is successful at processing, classifying and summarizing heterogeneous text-based data. Our work presents a critical but empirically grounded application of LLMs to complex policy problems and suggests avenues to further expand Artificial Intelligence-powered computational social sciences.
- Score: 1.4843690728082002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As multiple crises threaten the sustainability of our societies and put planetary boundaries at risk, complex challenges require timely, up-to-date, and usable information. Natural-language processing (NLP) tools expand data collection, processing, and knowledge-utilization capabilities to support the definition of an inclusive, sustainable future. In this work, we apply different NLP techniques, tools, and approaches to climate and sustainability documents to derive policy-relevant and actionable measures. We focus on general and domain-specific large language models (LLMs), using a combination of static and prompt-based methods. We find that LLMs are successful at processing, classifying, and summarizing heterogeneous text-based data. However, we also encounter challenges related to human intervention across workflow stages and to knowledge utilization in policy processes. Our work presents a critical but empirically grounded application of LLMs to complex policy problems and suggests avenues to further expand Artificial Intelligence-powered computational social sciences.
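To make the "static and prompt-based methods" concrete, the minimal sketch below labels and summarizes a single policy paragraph with off-the-shelf open models. It is an illustration only, not the authors' pipeline: the model names, sector labels, and example text are assumptions chosen for the sketch.

```python
# Minimal sketch (not the paper's pipeline): one "static" and one prompt-based
# route for labelling a climate-policy paragraph, plus a short summary.
# Model choices and the example paragraph are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

paragraph = (
    "The municipality will retrofit public buildings to cut energy use by 30% "
    "by 2030 and expand district heating fed by industrial waste heat."
)
sectors = ["energy efficiency", "transport", "agriculture", "water management"]

# Static route: sentence embeddings + cosine similarity to sector labels.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
para_vec = embedder.encode(paragraph, convert_to_tensor=True)
label_vecs = embedder.encode(sectors, convert_to_tensor=True)
scores = util.cos_sim(para_vec, label_vecs)[0]
print("static label:", sectors[int(scores.argmax())])

# Prompt-based route: zero-shot classification with an NLI model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(paragraph, candidate_labels=sectors)
print("prompt-based label:", result["labels"][0], round(result["scores"][0], 3))

# Summarization of heterogeneous policy text into a short digest.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
print(summarizer(paragraph, max_length=40, min_length=10)[0]["summary_text"])
```

In the paper's framing, the embedding route stands in for "static" methods and the zero-shot and summarization calls for "prompt-based" ones; either route still needs human review before its outputs feed into policy processes.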
Related papers
- Multi-Agent Reinforcement Learning Simulation for Environmental Policy Synthesis [5.738989367102034]
Climate policy development faces significant challenges due to deep uncertainty, complex system dynamics, and competing stakeholder interests.
We propose a framework for augmenting climate simulations with Multi-Agent Reinforcement Learning (MARL) to address these limitations.
arXiv Detail & Related papers (2025-04-17T09:18:04Z) - Urban Computing in the Era of Large Language Models [41.50492781046065]
This survey explores the intersection of Large Language Models (LLMs) and urban computing.
We provide a concise overview of the evolution and core technologies of LLMs.
We survey their applications across key urban domains, such as transportation, public safety, and environmental monitoring.
arXiv Detail & Related papers (2025-04-02T05:12:13Z) - An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI).
This paper explores potential areas where statisticians can make important contributions to the development of LLMs.
We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z) - Political Events using RAG with LLMs [1.6385815610837167]
Large Language Models (LLMs) driven by Generative Artificial Intelligence (GenAI) are combined with a Retrieval-Augmented Generation (RAG) framework.
The resulting Political EE system is specifically tailored to extract political event information from news articles.
arXiv Detail & Related papers (2025-01-06T08:16:24Z) - Modular Conversational Agents for Surveys and Interviews [6.019313905775819]
This paper introduces a modular approach and its resulting parameterized process for designing AI agents.
We demonstrate the adaptability, generalizability, and efficacy of our modular approach through three empirical studies.
The results suggest that the AI agent increases completion rates and response quality.
arXiv Detail & Related papers (2024-12-22T15:00:16Z) - LLMs for Generalizable Language-Conditioned Policy Learning under Minimal Data Requirements [50.544186914115045]
This paper presents TEDUO, a novel training pipeline for offline language-conditioned policy learning. TEDUO operates on easy-to-obtain, unlabeled datasets and is suited for the so-called in-the-wild evaluation, wherein the agent encounters previously unseen goals and states.
arXiv Detail & Related papers (2024-12-09T18:43:56Z) - Political-LLM: Large Language Models in Political Science [159.95299889946637]
Large language models (LLMs) have been widely adopted in political science tasks. Political-LLM aims to advance the comprehensive understanding of integrating LLMs into computational political science.
arXiv Detail & Related papers (2024-12-09T08:47:50Z) - BloomWise: Enhancing Problem-Solving capabilities of Large Language Models using Bloom's-Taxonomy-Inspired Prompts [59.83547898874152]
We introduce BloomWise, a new prompting technique inspired by Bloom's taxonomy, to improve the performance of Large Language Models (LLMs).
The decision regarding the need to employ more sophisticated cognitive skills is based on self-evaluation performed by the LLM.
In extensive experiments across 4 popular math reasoning datasets, we have demonstrated the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-10-05T09:27:52Z) - Knowledge Tagging with Large Language Model based Multi-Agent System [17.53518487546791]
This paper investigates the use of a multi-agent system to address the limitations of previous algorithms. We highlight the significant potential of an LLM-based multi-agent system in overcoming the challenges that previous methods have encountered.
arXiv Detail & Related papers (2024-09-12T21:39:01Z) - A Comprehensive Review of Multimodal Large Language Models: Performance and Challenges Across Different Tasks [74.52259252807191]
Multimodal Large Language Models (MLLMs) address the complexities of real-world applications far beyond the capabilities of single-modality systems.
This paper systematically reviews the applications of MLLMs in multimodal tasks such as natural language, vision, and audio.
arXiv Detail & Related papers (2024-08-02T15:14:53Z) - A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges [60.546677053091685]
Large language models (LLMs) have unlocked novel opportunities for machine learning applications in the financial domain.
We explore the application of LLMs on various financial tasks, focusing on their potential to transform traditional practices and drive innovation.
This survey categorizes the existing literature into key application areas, including linguistic tasks, sentiment analysis, financial time series, financial reasoning, agent-based modeling, and other applications.
arXiv Detail & Related papers (2024-06-15T16:11:35Z) - Scalable Language Model with Generalized Continual Learning [58.700439919096155]
Joint Adaptive Re-Parameterization (JARe) is integrated with Dynamic Task-related Knowledge Retrieval (DTKR) to enable adaptive adjustment of language models based on specific downstream tasks.
Our method demonstrates state-of-the-art performance on diverse backbones and benchmarks, achieving effective continual learning in both full-set and few-shot scenarios with minimal forgetting.
arXiv Detail & Related papers (2024-04-11T04:22:15Z) - Assessing Large Language Models on Climate Information [5.034118180129635]
We present a comprehensive evaluation framework, grounded in science communication research, to assess Large Language Models (LLMs) on climate information.
Our framework emphasizes both presentational and epistemological adequacy, offering a fine-grained analysis of LLM generations spanning 8 dimensions and 30 issues.
We introduce a novel protocol for scalable oversight that relies on AI Assistance and raters with relevant education.
arXiv Detail & Related papers (2023-10-04T16:09:48Z) - Context-Aware Composition of Agent Policies by Markov Decision Process Entity Embeddings and Agent Ensembles [1.124711723767572]
Computational agents support humans in many areas of life and are therefore found in heterogeneous contexts.
In order to perform services and carry out activities in a goal-oriented manner, agents require prior knowledge.
We propose a novel simulation-based approach that enables the representation of heterogeneous contexts.
arXiv Detail & Related papers (2023-08-28T12:13:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.