Paradigm Shift in Sustainability Disclosure Analysis: Empowering
Stakeholders with CHATREPORT, a Language Model-Based Tool
- URL: http://arxiv.org/abs/2306.15518v2
- Date: Thu, 16 Nov 2023 07:59:08 GMT
- Authors: Jingwei Ni, Julia Bingler, Chiara Colesanti-Senni, Mathias Kraus, Glen
Gostlow, Tobias Schimanski, Dominik Stammbach, Saeid Ashraf Vaghefi, Qian
Wang, Nicolas Webersinke, Tobias Wekhof, Tingyu Yu, Markus Leippold
- Abstract summary: This paper introduces a novel approach to enhance Large Language Models (LLMs) with expert knowledge to automate the analysis of corporate sustainability reports.
We christen our tool CHATREPORT, and apply it in a first use case to assess corporate climate risk disclosures.
- Score: 10.653984116770234
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This paper introduces a novel approach to enhance Large Language Models
(LLMs) with expert knowledge to automate the analysis of corporate
sustainability reports by benchmarking them against the recommendations of the
Task Force on Climate-related Financial Disclosures (TCFD). Corporate
sustainability reports are crucial in assessing organizations' environmental
and social risks and impacts. However, the sheer volume of information in these
reports often makes human analysis prohibitively costly. As a result, only a few
entities worldwide have the resources to analyze these reports, which could
lead to a lack of transparency. While AI-powered tools can automatically
analyze the data, they are prone to inaccuracies as they lack domain-specific
expertise. This paper introduces a novel approach to enhance LLMs with expert
knowledge to automate the analysis of corporate sustainability reports. We
christen our tool CHATREPORT, and apply it in a first use case to assess
corporate climate risk disclosures following the TCFD recommendations.
CHATREPORT results from collaborating with experts in climate science, finance,
economic policy, and computer science, demonstrating how domain experts can be
involved in developing AI tools. We make our prompt templates, generated data,
and scores available to the public to encourage transparency.
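The abstract describes the core mechanism: expert-written questions, organized around the TCFD recommendations, are combined with report text via prompt templates before being passed to an LLM. A minimal sketch of that pattern is shown below; the template wording, question list, and function names are illustrative assumptions, not the authors' published prompts.

```python
# Hypothetical sketch of a prompt-template pipeline in the spirit of
# CHATREPORT: expert-curated TCFD questions are injected into a fixed
# template together with retrieved report passages, and the filled prompt
# would then be sent to an LLM. All names and wording here are assumptions.

# Illustrative questions, one per TCFD pillar (not the authors' exact set).
TCFD_QUESTIONS = {
    "governance": "How does the board oversee climate-related risks and opportunities?",
    "strategy": "What climate-related risks and opportunities has the organization identified?",
    "risk_management": "How does the organization identify, assess, and manage climate-related risks?",
    "metrics_targets": "What metrics and targets are used to assess climate-related risks?",
}

PROMPT_TEMPLATE = (
    "You are an expert analyst assessing a corporate sustainability report "
    "against the TCFD recommendations.\n\n"
    "Relevant report excerpts:\n{excerpts}\n\n"
    "Question ({category}): {question}\n"
    "Answer concisely and cite the excerpts you rely on. If the report does "
    "not disclose this information, say so explicitly."
)

def build_tcfd_prompt(category: str, excerpts: list[str]) -> str:
    """Fill the template with one TCFD question and numbered excerpts."""
    joined = "\n".join(f"[{i}] {e}" for i, e in enumerate(excerpts, 1))
    return PROMPT_TEMPLATE.format(
        excerpts=joined,
        category=category,
        question=TCFD_QUESTIONS[category],
    )

# Usage: build a governance prompt from a single retrieved passage.
prompt = build_tcfd_prompt(
    "governance",
    ["The board's risk committee reviews climate risk twice a year."],
)
```

Keeping the expert knowledge in templates rather than in model weights is what lets the prompts, generated data, and scores be published for scrutiny, as the authors do.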
Related papers
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources [100.23208165760114]
  Foundation model development attracts a rapidly expanding body of contributors, scientists, and applications. To help shape responsible development practices, we introduce the Foundation Model Development Cheatsheet.
  arXiv (2024-06-24T15:55:49Z)
- Assessing the Potential of AI for Spatially Sensitive Nature-Related Financial Risks [0.0]
  This report presents potential AI solutions for models of two distinct use cases: the Brazil Beef Supply Use Case and the Water Utility Use Case. The Brazilian cattle farming use case is an example of greening finance, integrating nature-related considerations into mainstream financial decision-making. The deployment of nature-based solutions in the UK water utility use case is an example of financing green, driving investment to nature-positive outcomes.
  arXiv (2024-04-26T12:42:39Z)
- A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods [0.0]
  Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks. These models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance. This paper explores good practices for deploying explainability in AI-based systems for finance.
  arXiv (2023-11-13T17:56:45Z)
- Glitter or Gold? Deriving Structured Insights from Sustainability Reports via Large Language Models [16.231171704561714]
  This study uses Information Extraction (IE) methods to extract structured insights related to ESG aspects from companies' sustainability reports. We then leverage graph-based representations to conduct statistical analyses of the extracted insights.
  arXiv (2023-10-09T11:34:41Z)
- CHATREPORT: Democratizing Sustainability Disclosure Analysis through LLM-based Tools [10.653984116770234]
  ChatReport is a novel LLM-based system to automate the analysis of corporate sustainability reports. We make our methodology, annotated datasets, and generated analyses of 1015 reports publicly available.
  arXiv (2023-07-28T18:58:16Z)
- Bring Your Own Data! Self-Supervised Evaluation for Large Language Models [52.15056231665816]
  We propose a framework for self-supervised evaluation of Large Language Models (LLMs). We demonstrate self-supervised evaluation strategies for measuring closed-book knowledge, toxicity, and long-range context dependence, and find strong correlations between self-supervised and human-supervised evaluations.
  arXiv (2023-06-23T17:59:09Z)
- Enabling and Analyzing How to Efficiently Extract Information from Hybrid Long Documents with LLMs [48.87627426640621]
  This research focuses on harnessing the potential of Large Language Models to comprehend critical information in financial reports. We propose an Automated Financial Information Extraction framework that enhances LLMs' ability to comprehend and extract information from financial reports. Our framework is validated on GPT-3.5 and GPT-4, yielding average accuracy increases of 53.94% and 33.77%, respectively.
  arXiv (2023-05-24T10:35:58Z)
- SmartBook: AI-Assisted Situation Report Generation for Intelligence Analysts [55.73424958012229]
  This work identifies intelligence analysts' practices and preferences for AI assistance in situation report generation. We introduce SmartBook, an automated framework designed to generate situation reports from large volumes of news data. Our comprehensive evaluation of SmartBook, encompassing a user study alongside a content review with an editing study, reveals SmartBook's effectiveness in generating accurate and relevant situation reports.
  arXiv (2023-03-25T03:03:00Z)
- Explainable Patterns: Going from Findings to Insights to Support Data Analytics Democratization [60.18814584837969]
  We present Explainable Patterns (ExPatt), a new framework to support lay users in exploring and creating data storytellings. ExPatt automatically generates plausible explanations for observed or selected findings using an external (textual) source of information.
  arXiv (2021-01-19T16:13:44Z)
- Analyzing Sustainability Reports Using Natural Language Processing [68.8204255655161]
  In recent years, companies have increasingly aimed both to mitigate their environmental impact and to adapt to the changing climate context. This is reported via increasingly exhaustive reports, which cover many types of climate risks and exposures under the umbrella of Environmental, Social, and Governance (ESG). We present this tool, and the methodology used to develop it, in this article.
  arXiv (2020-11-03T21:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.