Explainable Natural Language Processing for Corporate Sustainability Analysis
- URL: http://arxiv.org/abs/2407.17487v3
- Date: Wed, 16 Oct 2024 04:24:59 GMT
- Title: Explainable Natural Language Processing for Corporate Sustainability Analysis
- Authors: Keane Ong, Rui Mao, Ranjan Satapathy, Ricardo Shirota Filho, Erik Cambria, Johan Sulaeman, Gianmarco Mengaldo
- Abstract summary: The concept of corporate sustainability is complex due to the diverse and intricate nature of firm operations.
Corporate sustainability assessments are plagued by subjectivity both within data that reflect corporate sustainability efforts and the analysts evaluating them.
We argue that Explainable Natural Language Processing (XNLP) can significantly enhance corporate sustainability analysis.
- Score: 26.267508407180465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sustainability commonly refers to entities, such as individuals, companies, and institutions, having a non-detrimental (or even positive) impact on the environment, society, and the economy. With sustainability becoming a synonym for acceptable and legitimate behaviour, it is being increasingly demanded and regulated. Several frameworks and standards have been proposed to measure the sustainability impact of corporations, including the United Nations' sustainable development goals and the recently introduced global sustainability reporting framework, amongst others. However, the concept of corporate sustainability is complex due to the diverse and intricate nature of firm operations (i.e. geography, size, business activities, interlinks with other stakeholders). As a result, corporate sustainability assessments are plagued by subjectivity, both within the data that reflect corporate sustainability efforts (i.e. corporate sustainability disclosures) and among the analysts evaluating them. This subjectivity can be distilled into distinct challenges, such as incompleteness, ambiguity, unreliability and sophistication on the data dimension, as well as limited resources and potential bias on the analyst dimension. Put together, subjectivity hinders effective cost attribution to entities non-compliant with prevailing sustainability expectations, potentially rendering sustainability efforts and their associated regulations futile. To this end, we argue that Explainable Natural Language Processing (XNLP) can significantly enhance corporate sustainability analysis. Specifically, linguistic understanding algorithms (lexical, semantic, syntactic), integrated with XAI capabilities (interpretability, explainability, faithfulness), can bridge gaps in analyst resources and mitigate subjectivity problems within data.
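To make the abstract's claim concrete, the following is a minimal sketch of a lexical-level analysis whose output is explainable by construction: every token's contribution to the final score is reported, so the explanation is faithful to the model. The lexicon, weights, and function name are illustrative assumptions, not taken from the paper.

```python
# Toy sketch of an explainable lexical scorer for sustainability disclosures.
# The lexicon and weights below are hypothetical, chosen for illustration only.
SUSTAINABILITY_LEXICON = {
    "renewable": 1.0,
    "emissions": -0.5,
    "reduced": 0.5,
    "pollution": -1.0,
    "offset": 0.5,
}

def explain_score(disclosure: str):
    """Score a disclosure sentence and return (score, per-token contributions)."""
    tokens = disclosure.lower().replace(".", "").split()
    contributions = [(t, SUSTAINABILITY_LEXICON.get(t, 0.0)) for t in tokens]
    # Faithfulness by construction: the score is exactly the sum of the
    # reported per-token contributions -- nothing is hidden from the analyst.
    score = sum(weight for _, weight in contributions)
    explanation = [(t, w) for t, w in contributions if w != 0.0]
    return score, explanation

score, explanation = explain_score("We reduced emissions with renewable energy.")
print(score)        # 1.0
print(explanation)  # [('reduced', 0.5), ('emissions', -0.5), ('renewable', 1.0)]
```

Real XNLP systems would replace the hand-built lexicon with learned semantic and syntactic models plus post-hoc explainers, but the property illustrated here, that an analyst can trace a score back to the text that produced it, is the one the abstract argues for.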
Related papers
- Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents [101.17919953243107]
GovSim is a generative simulation platform designed to study strategic interactions and cooperative decision-making in large language models (LLMs).
We find that all but the most powerful LLM agents fail to achieve a sustainable equilibrium in GovSim, with the highest survival rate below 54%.
We show that agents that leverage "Universalization"-based reasoning, a theory of moral thinking, are able to achieve significantly better sustainability.
arXiv Detail & Related papers (2024-04-25T15:59:16Z) - Balancing Progress and Responsibility: A Synthesis of Sustainability Trade-Offs of AI-Based Systems [8.807173854357597]
We aim to synthesize trade-offs related to sustainability in the context of integrating AI into software systems.
The study was conducted in collaboration with a Dutch financial organization.
arXiv Detail & Related papers (2024-04-05T10:11:08Z) - Literature Review of Current Sustainability Assessment Frameworks and Approaches for Organizations [10.045497511868172]
This systematic literature review explores sustainability assessment frameworks (SAFs) across diverse industries.
The review focuses on SAF design approaches including the methods used for Sustainability Indicator (SI) selection, relative importance assessment, and interdependency analysis.
arXiv Detail & Related papers (2024-03-07T18:14:52Z) - Evaluating and Improving Continual Learning in Spoken Language Understanding [58.723320551761525]
We propose an evaluation methodology that provides a unified evaluation on stability, plasticity, and generalizability in continual learning.
By employing the proposed metric, we demonstrate how introducing various knowledge distillations can improve different aspects of these three properties of the SLU model.
arXiv Detail & Related papers (2024-02-16T03:30:27Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - CHATREPORT: Democratizing Sustainability Disclosure Analysis through LLM-based Tools [10.653984116770234]
ChatReport is a novel LLM-based system to automate the analysis of corporate sustainability reports.
We make our methodology, annotated datasets, and generated analyses of 1015 reports publicly available.
arXiv Detail & Related papers (2023-07-28T18:58:16Z) - Broadening the perspective for sustainable AI: Comprehensive sustainability criteria and indicators for AI systems [0.0]
This paper takes steps towards substantiating the call for an overarching perspective on "sustainable AI"
It presents the SCAIS Framework, which contains a set of 19 sustainability criteria for sustainable AI and 67 indicators.
arXiv Detail & Related papers (2023-06-22T18:00:55Z) - On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews.
We propose to enhance the ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Strategic alignment between IT flexibility and dynamic capabilities: an empirical investigation [0.0]
This paper develops a strategic alignment model for IT flexibility and dynamic capabilities.
It empirically validates proposed hypotheses using correlation and regression analyses on a large data sample of 322 international firms.
arXiv Detail & Related papers (2021-05-18T10:37:33Z) - The Curse of Performance Instability in Analysis Datasets: Consequences, Source, and Suggestions [93.62888099134028]
We find that the performance of state-of-the-art models on Natural Language Inference (NLI) and Reading Comprehension (RC) analysis/stress sets can be highly unstable.
This raises three questions: (1) How will the instability affect the reliability of the conclusions drawn based on these analysis sets?
We give both theoretical explanations and empirical evidence regarding the source of the instability.
arXiv Detail & Related papers (2020-04-28T15:41:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.