Chitchat with AI: Understand the supply chain carbon disclosure of companies worldwide through Large Language Model
- URL: http://arxiv.org/abs/2511.00024v1
- Date: Sun, 26 Oct 2025 01:06:18 GMT
- Title: Chitchat with AI: Understand the supply chain carbon disclosure of companies worldwide through Large Language Model
- Authors: Haotian Hang, Yueyang Shen, Vicky Zhu, Jose Cruz, Michelle Li
- Abstract summary: The Carbon Disclosure Project (CDP) hosts the world's largest longitudinal dataset of climate-related survey responses. This paper proposes a novel decision-support framework that leverages large language models (LLMs) to assess corporate climate disclosure quality at scale.
- Score: 0.998186654331176
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of global sustainability mandates, corporate carbon disclosure has emerged as a critical mechanism for aligning business strategy with environmental responsibility. The Carbon Disclosure Project (CDP) hosts the world's largest longitudinal dataset of climate-related survey responses, combining structured indicators with open-ended narratives, but the heterogeneity and free-form nature of these disclosures present significant analytical challenges for benchmarking, compliance monitoring, and investment screening. This paper proposes a novel decision-support framework that leverages large language models (LLMs) to assess corporate climate disclosure quality at scale. It develops a master rubric that harmonizes narrative scoring across 11 years of CDP data (2010-2020), enabling cross-sector and cross-country benchmarking. By integrating rubric-guided scoring with percentile-based normalization, our method identifies temporal trends, strategic alignment patterns, and inconsistencies in disclosure across industries and regions. Results reveal that sectors such as technology and countries like Germany consistently demonstrate higher rubric alignment, while others exhibit volatility or superficial engagement, offering insights that inform key decision-making processes for investors, regulators, and corporate environmental, social, and governance (ESG) strategists. The proposed LLM-based approach transforms unstructured disclosures into quantifiable, interpretable, comparable, and actionable intelligence, advancing the capabilities of AI-enabled decision support systems (DSSs) in the domain of climate governance.
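The abstract describes combining rubric-guided scoring with percentile-based normalization to benchmark disclosures across sectors and regions. A minimal sketch of that normalization step is shown below; the column names (`sector`, `year`, `rubric_score`) and the grouping choice are illustrative assumptions, not details taken from the paper.

```python
# Sketch: convert raw rubric scores into within-(sector, year) percentiles,
# so each disclosure is benchmarked against its peer group rather than on
# an absolute scale. Column names are hypothetical.
import pandas as pd

def add_percentile_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Add a 'percentile' column ranking each rubric score among its
    (sector, year) peers; ties receive the average rank."""
    out = df.copy()
    out["percentile"] = (
        out.groupby(["sector", "year"])["rubric_score"]
        .rank(pct=True)  # fraction of peers at or below this score
    )
    return out

scores = pd.DataFrame({
    "company": ["A", "B", "C", "D"],
    "sector": ["tech", "tech", "energy", "energy"],
    "year": [2020, 2020, 2020, 2020],
    "rubric_score": [4.0, 2.0, 3.5, 3.5],
})
ranked = add_percentile_scores(scores)
```

Normalizing within peer groups like this is what makes cross-sector and cross-country comparisons meaningful, since sectors differ in baseline disclosure quality.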
Related papers
- Quantifying Climate Policy Action and Its Links to Development Outcomes: A Cross-National Data-Driven Analysis [0.0]
We develop a quantitative indicator of climate policy orientation by applying a multilingual transformer-based language model to official national policy documents. Linking these indicators with World Bank development data in panel regressions reveals that mitigation policies are associated with higher GDP and GNI. Disaster risk management correlates with greater GNI but reduced foreign direct investment; adaptation and loss and damage show limited measurable effects.
arXiv Detail & Related papers (2025-10-20T11:12:30Z)
- From Vision to Validation: A Theory- and Data-Driven Construction of a GCC-Specific AI Adoption Index [0.21485350418225244]
This study employs a theory-driven foundation derived from an in-depth analysis of the literature and six National AI Strategies (NASs). The research develops and validates a novel AI Adoption Index specifically tailored to the Gulf Cooperation Council (GCC) public sector. Findings indicate that robust technical infrastructure and clear policy mandates exert the strongest influence on successful AI implementations.
arXiv Detail & Related papers (2025-09-05T20:06:57Z)
- Policy-Driven AI in Dataspaces: Taxonomy, Explainability, and Pathways for Compliant Innovation [1.6766200616088744]
This paper provides a comprehensive review of privacy-preserving and policy-aware AI techniques. We propose a novel taxonomy to classify these techniques based on privacy levels, impacts, and compliance complexity. By integrating technical, ethical, and regulatory perspectives, this work lays the groundwork for developing trustworthy, efficient, and compliant AI systems in dataspaces.
arXiv Detail & Related papers (2025-07-26T17:07:01Z)
- Anomaly Detection and Generation with Diffusion Models: A Survey [51.61574868316922]
Anomaly detection (AD) plays a pivotal role across diverse domains, including cybersecurity, finance, healthcare, and industrial manufacturing. Recent advancements in deep learning, specifically diffusion models (DMs), have sparked significant interest. This survey aims to guide researchers and practitioners in leveraging DMs for innovative AD solutions across diverse applications.
arXiv Detail & Related papers (2025-06-11T03:29:18Z)
- A Survey on Post-training of Large Language Models [185.51013463503946]
Large Language Models (LLMs) have fundamentally transformed natural language processing, making them indispensable across domains ranging from conversational systems to scientific exploration. These challenges necessitate advanced post-training language models (PoLMs) to address shortcomings, such as restricted reasoning capacities, ethical uncertainties, and suboptimal domain-specific performance. This paper presents the first comprehensive survey of PoLMs, systematically tracing their evolution across five core paradigms: Fine-tuning, which enhances task-specific accuracy; Alignment, which ensures ethical coherence and alignment with human preferences; Reasoning, which advances multi-step inference despite challenges in reward design; and Integration and Adaptation.
arXiv Detail & Related papers (2025-03-08T05:41:42Z)
- Optimizing Large Language Models for ESG Activity Detection in Financial Texts [0.7373617024876725]
This paper investigates the ability of current-generation Large Language Models to identify text related to environmental activities. We introduce ESG-Activities, a benchmark dataset containing 1,325 labelled text segments classified according to the EU ESG taxonomy. Our experimental results show that fine-tuning on ESG-Activities significantly enhances classification accuracy.
arXiv Detail & Related papers (2025-02-28T14:52:25Z)
- AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons [62.374792825813394]
This paper introduces AILuminate v1.0, the first comprehensive industry-standard benchmark for assessing AI-product risk and reliability. The benchmark evaluates an AI system's resistance to prompts designed to elicit dangerous, illegal, or undesirable behavior in 12 hazard categories.
arXiv Detail & Related papers (2025-02-19T05:58:52Z)
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
The survey underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- CoUDA: Coherence Evaluation via Unified Data Augmentation [49.37157483044349]
Coherence evaluation aims to assess the organization and structure of a discourse.
We take inspiration from linguistic theory of discourse structure, and propose a data augmentation framework named CoUDA.
With only 233M parameters, CoUDA achieves state-of-the-art performance in both pointwise scoring and pairwise ranking tasks.
arXiv Detail & Related papers (2024-03-31T13:19:36Z)
- Glitter or Gold? Deriving Structured Insights from Sustainability Reports via Large Language Models [16.231171704561714]
This study uses Information Extraction (IE) methods to extract structured insights related to ESG aspects from companies' sustainability reports.
We then leverage graph-based representations to conduct statistical analyses concerning the extracted insights.
arXiv Detail & Related papers (2023-10-09T11:34:41Z)
- Reinforcement Learning with Heterogeneous Data: Estimation and Inference [84.72174994749305]
We introduce the K-Heterogeneous Markov Decision Process (K-Hetero MDP) to address sequential decision problems with population heterogeneity.
We propose the Auto-Clustered Policy Evaluation (ACPE) for estimating the value of a given policy, and the Auto-Clustered Policy Iteration (ACPI) for estimating the optimal policy in a given policy class.
We present simulations to support our theoretical findings, and we conduct an empirical study on the standard MIMIC-III dataset.
arXiv Detail & Related papers (2022-01-31T20:58:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.