Climate-Eval: A Comprehensive Benchmark for NLP Tasks Related to Climate Change
- URL: http://arxiv.org/abs/2505.18653v1
- Date: Sat, 24 May 2025 11:45:46 GMT
- Title: Climate-Eval: A Comprehensive Benchmark for NLP Tasks Related to Climate Change
- Authors: Murathan Kurfalı, Shorouq Zahra, Joakim Nivre, Gabriele Messori
- Abstract summary: Climate-Eval aggregates existing datasets along with a newly developed news classification dataset. This results in a benchmark of 25 tasks based on 13 datasets, covering key aspects of climate discourse.
- Score: 2.8680187920555635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Climate-Eval is a comprehensive benchmark designed to evaluate natural language processing models across a broad range of tasks related to climate change. Climate-Eval aggregates existing datasets along with a newly developed news classification dataset, created specifically for this release. This results in a benchmark of 25 tasks based on 13 datasets, covering key aspects of climate discourse, including text classification, question answering, and information extraction. Our benchmark provides a standardized evaluation suite for systematically assessing the performance of large language models (LLMs) on these tasks. Additionally, we conduct an extensive evaluation of open-source LLMs (ranging from 2B to 70B parameters) in both zero-shot and few-shot settings, analyzing their strengths and limitations in the domain of climate change.
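To make the evaluation protocol concrete, below is a minimal sketch of the kind of zero-shot and few-shot classification loop such a suite standardizes. The model name, label set, prompts, and examples are illustrative assumptions, not Climate-Eval's actual configuration:

```python
# Minimal sketch of zero-shot vs. few-shot LLM classification.
# NOTE: the model, labels, prompts, and examples below are assumptions
# for illustration; they are not Climate-Eval's actual setup.
from transformers import pipeline

# Any instruction-tuned model in the 2B-70B range could stand in here.
generator = pipeline("text-generation", model="google/gemma-2-2b-it")

LABELS = ["not climate-related", "climate-related"]  # hypothetical label set

def classify(headline, demonstrations=()):
    """Build a prompt (with optional few-shot demos) and map the
    model's completion back onto the label set."""
    prompt = ("Classify the news headline as 'climate-related' "
              "or 'not climate-related'.\n")
    for demo_text, demo_label in demonstrations:  # empty tuple = zero-shot
        prompt += f"Headline: {demo_text}\nLabel: {demo_label}\n"
    prompt += f"Headline: {headline}\nLabel:"
    output = generator(prompt, max_new_tokens=8, do_sample=False)
    completion = output[0]["generated_text"][len(prompt):].strip().lower()
    # Check longer labels first so "climate-related" cannot shadow
    # "not climate-related"; default to the first label on no match.
    for label in sorted(LABELS, key=len, reverse=True):
        if label in completion:
            return label
    return LABELS[0]

# Zero-shot:
print(classify("Arctic sea ice extent hits a record summer low"))
# Few-shot, with two made-up demonstrations:
demos = [("Heatwave shatters temperature records across Europe", "climate-related"),
         ("Local team clinches the championship title", "not climate-related")]
print(classify("Parliament debates a new carbon tax", demonstrations=demos))
```

In a real benchmark run, a loop of this shape would be repeated for each task and scored against gold labels to produce aggregate metrics.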
Related papers
- WXImpactBench: A Disruptive Weather Impact Understanding Benchmark for Evaluating Large Language Models [3.9711303420034443]
WXImpactBench is the first benchmark for evaluating the capacity of large language models (LLMs) on disruptive weather impacts. The constructed dataset and the code for the evaluation framework are available to help society protect against vulnerabilities from disasters.
arXiv Detail & Related papers (2025-05-26T17:23:29Z)
- ClimateBench-M: A Multi-Modal Climate Data Benchmark with a Simple Generative Method [61.76389719956301]
We contribute a multi-modal climate benchmark, i.e., ClimateBench-M, which aligns time series climate data from ERA5, extreme weather events data from NOAA, and satellite image data from NASA. Under each data modality, we also propose a simple but strong generative method that could produce competitive performance in weather forecasting, thunderstorm alerts, and crop segmentation tasks.
arXiv Detail & Related papers (2025-04-10T02:22:23Z)
- ClimaQA: An Automated Evaluation Framework for Climate Question Answering Models [38.05357439484919]
We develop ClimaGen, an adaptive learning framework that generates question-answer pairs from graduate textbooks with climate scientists in the loop. We present ClimaQA-Gold, an expert-annotated benchmark dataset, alongside ClimaQA-Silver, a large-scale, comprehensive synthetic QA dataset for climate science.
arXiv Detail & Related papers (2024-10-22T05:12:19Z)
- Combining Observational Data and Language for Species Range Estimation [63.65684199946094]
We propose a novel approach combining millions of citizen science species observations with textual descriptions from Wikipedia. Our framework maps locations, species, and text descriptions into a common space, enabling zero-shot range estimation from textual descriptions. Our approach also acts as a strong prior when combined with observational data, resulting in more accurate range estimation with less data.
arXiv Detail & Related papers (2024-10-14T17:22:55Z)
- Are Large Language Models Good Classifiers? A Study on Edit Intent Classification in Scientific Document Revisions [62.12545440385489]
Large language models (LLMs) have brought substantial advancements in text generation, but their potential for enhancing classification tasks remains underexplored.
We propose a framework for thoroughly investigating fine-tuning LLMs for classification, including both generation- and encoding-based approaches.
We instantiate this framework in edit intent classification (EIC), a challenging and underexplored classification task.
arXiv Detail & Related papers (2024-10-02T20:48:28Z)
- ClimateGPT: Towards AI Synthesizing Interdisciplinary Research on Climate Change [21.827936253363603]
This paper introduces ClimateGPT, a model family of domain-specific large language models that synthesize interdisciplinary research on climate change.
We trained two 7B models from scratch on a science-oriented dataset of 300B tokens.
ClimateGPT-7B, 13B and 70B are continuously pre-trained from Llama2 on a domain-specific dataset of 4.2B tokens.
arXiv Detail & Related papers (2024-01-17T23:29:46Z)
- Arabic Mini-ClimateGPT: A Climate Change and Sustainability Tailored Arabic LLM [77.17254959695218]
Large Language Models (LLMs) like ChatGPT and Bard have shown impressive conversational abilities and excel in a wide variety of NLP tasks.
We propose a lightweight Arabic Mini-ClimateGPT that is built on an open-source LLM and fine-tuned specifically on Clima500-Instruct, a conversational-style Arabic instruction-tuning dataset.
Our model surpasses the baseline LLM in 88.3% of cases during ChatGPT-based evaluation.
arXiv Detail & Related papers (2023-12-14T22:04:07Z)
- ClimaX: A foundation model for weather and climate [51.208269971019504]
ClimaX is a deep learning model for weather and climate science.
It can be pre-trained with a self-supervised learning objective on climate datasets.
It can be fine-tuned to address a breadth of climate and weather tasks.
arXiv Detail & Related papers (2023-01-24T23:19:01Z)
- Towards Answering Climate Questionnaires from Unstructured Climate Reports [26.036105166376284]
Activists and policymakers need NLP tools to process the vast and rapidly growing body of unstructured textual climate reports into structured form.
We introduce two new large-scale climate questionnaire datasets and use their existing structure to train self-supervised models.
We then use these models to help align texts from unstructured climate documents to the semi-structured questionnaires in a human pilot study.
arXiv Detail & Related papers (2023-01-11T00:22:56Z)
- GENEVA: Benchmarking Generalizability for Event Argument Extraction with Hundreds of Event Types and Argument Roles [77.05288144035056]
Work on Event Argument Extraction (EAE) has focused on improving model generalizability to cater to new events and domains.
Standard benchmarking datasets like ACE and ERE cover less than 40 event types and 25 entity-centric argument roles.
arXiv Detail & Related papers (2022-05-25T05:46:28Z)
- Analyzing Sustainability Reports Using Natural Language Processing [68.8204255655161]
In recent years, companies have increasingly been aiming to both mitigate their environmental impact and adapt to the changing climate context.
This is reported via increasingly exhaustive reports, which cover many types of climate risks and exposures under the umbrella of Environmental, Social, and Governance (ESG).
In this article, we present a tool for analyzing such reports using NLP, together with the methodology used to develop it.
arXiv Detail & Related papers (2020-11-03T21:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.