Explainable Natural Language Processing for Corporate Sustainability Analysis
- URL: http://arxiv.org/abs/2407.17487v3
- Date: Wed, 16 Oct 2024 04:24:59 GMT
- Title: Explainable Natural Language Processing for Corporate Sustainability Analysis
- Authors: Keane Ong, Rui Mao, Ranjan Satapathy, Ricardo Shirota Filho, Erik Cambria, Johan Sulaeman, Gianmarco Mengaldo
- Abstract summary: The concept of corporate sustainability is complex due to the diverse and intricate nature of firm operations.
Corporate sustainability assessments are plagued by subjectivity both within data that reflect corporate sustainability efforts and the analysts evaluating them.
We argue that Explainable Natural Language Processing (XNLP) can significantly enhance corporate sustainability analysis.
- Score: 26.267508407180465
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sustainability commonly refers to entities, such as individuals, companies, and institutions, having a non-detrimental (or even positive) impact on the environment, society, and the economy. With sustainability becoming synonymous with acceptable and legitimate behaviour, it is increasingly demanded and regulated. Several frameworks and standards have been proposed to measure the sustainability impact of corporations, including the United Nations' Sustainable Development Goals and the recently introduced global sustainability reporting framework, amongst others. However, the concept of corporate sustainability is complex due to the diverse and intricate nature of firm operations (i.e. geography, size, business activities, interlinks with other stakeholders). As a result, corporate sustainability assessments are plagued by subjectivity, both within the data that reflect corporate sustainability efforts (i.e. corporate sustainability disclosures) and among the analysts evaluating them. This subjectivity can be distilled into distinct challenges: incompleteness, ambiguity, unreliability, and sophistication on the data dimension, as well as limited resources and potential bias on the analyst dimension. Taken together, subjectivity hinders effective cost attribution to entities non-compliant with prevailing sustainability expectations, potentially rendering sustainability efforts and their associated regulations futile. To this end, we argue that Explainable Natural Language Processing (XNLP) can significantly enhance corporate sustainability analysis. Specifically, linguistic understanding algorithms (lexical, semantic, syntactic), integrated with XAI capabilities (interpretability, explainability, faithfulness), can bridge gaps in analyst resources and mitigate subjectivity problems within data.
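The abstract's core argument, that lexical analysis paired with explainability can offset limited analyst resources, can be illustrated with a minimal sketch. The lexicon terms, weights, and function names below are hypothetical illustrations rather than the paper's actual method; the point is that a lexical scorer can return its evidence alongside its score, making every judgment traceable for an analyst.

```python
import re
from collections import Counter

# Hypothetical seed lexicon an analyst might use, with illustrative
# polarity weights (these terms and weights are not from the paper).
LEXICON = {
    "emissions": -1.0,
    "renewable": 1.0,
    "net-zero": 1.0,
    "spill": -1.5,
    "recycling": 0.5,
}

def score_disclosure(text):
    """Score a disclosure and return (score, evidence) so that every
    point of the score is traceable to specific matched terms."""
    tokens = re.findall(r"[a-z][a-z-]*", text.lower())
    counts = Counter(t for t in tokens if t in LEXICON)
    # evidence maps term -> (occurrences, contribution to the score)
    evidence = {t: (n, LEXICON[t] * n) for t, n in counts.items()}
    score = sum(contrib for _, contrib in evidence.values())
    return score, evidence

score, evidence = score_disclosure(
    "We expanded renewable capacity and committed to net-zero, "
    "but reported one spill and higher emissions."
)
print(score)     # -0.5 for this example
print(evidence)  # term -> (count, contribution): the explanation
```

The evidence dictionary is the explainability hook: unlike an opaque classifier score, an analyst can audit exactly which phrases drove the assessment, which speaks to the faithfulness and interpretability capabilities the abstract lists.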
Related papers
- Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that the efforts aiming to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment.
We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z) - REVAL: A Comprehension Evaluation on Reliability and Values of Large Vision-Language Models [59.445672459851274]
REVAL is a comprehensive benchmark designed to evaluate the REliability and VALue of Large Vision-Language Models.
REVAL encompasses over 144K image-text Visual Question Answering (VQA) samples, structured into two primary sections: Reliability and Values.
We evaluate 26 models, including mainstream open-source LVLMs and prominent closed-source models like GPT-4o and Gemini-1.5-Pro.
arXiv Detail & Related papers (2025-03-20T07:54:35Z) - Identifying Trustworthiness Challenges in Deep Learning Models for Continental-Scale Water Quality Prediction [64.4881275941927]
We present the first comprehensive evaluation of trustworthiness in a continental-scale multi-task LSTM model.
Our investigation uncovers systematic patterns of model performance disparities linked to basin characteristics.
This work serves as a timely call to action for advancing trustworthy data-driven methods for water resources management.
arXiv Detail & Related papers (2025-03-13T01:50:50Z) - An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI).
This paper explores potential areas where statisticians can make important contributions to the development of LLMs.
We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z) - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [333.9220561243189]
Generative Foundation Models (GenFMs) have emerged as transformative tools.
Their widespread adoption raises critical concerns regarding trustworthiness across dimensions.
This paper presents a comprehensive framework to address these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z) - Using Sustainability Impact Scores for Software Architecture Evaluation [5.33605239628904]
We present an improved version of the Sustainability Impact Score (SIS).
The SIS facilitates the identification and quantification of trade-offs in terms of their sustainability impact.
Our study reveals that technical quality concerns have significant, often unrecognized impacts across sustainability dimensions.
arXiv Detail & Related papers (2025-01-28T15:00:45Z) - ESGSenticNet: A Neurosymbolic Knowledge Base for Corporate Sustainability Analysis [26.738671295538396]
We introduce ESGSenticNet, a knowledge base for sustainability analysis.
ESGSenticNet is constructed from a neurosymbolic framework that integrates specialised concept parsing, GPT-4o inference, and semi-supervised label propagation.
Experiments indicate that ESGSenticNet, when deployed as a lexical method, more effectively captures relevant and actionable sustainability information.
arXiv Detail & Related papers (2025-01-27T01:21:12Z) - Twin Transition or Competing Interests? Validation of the Artificial Intelligence and Sustainability Perceptions Inventory (AISPI) [0.0]
This paper presents the development and validation of the Artificial Intelligence and Sustainability Perceptions Inventory (AISPI).
The 13-item instrument measures how individuals view the relationship between AI advancement and environmental sustainability.
Our findings suggest that individuals can simultaneously recognize both synergies and tensions in the AI-sustainability relationship.
arXiv Detail & Related papers (2025-01-26T16:21:27Z) - Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents [101.17919953243107]
GovSim is a generative simulation platform designed to study strategic interactions and cooperative decision-making in large language models (LLMs).
We find that all but the most powerful LLM agents fail to achieve a sustainable equilibrium in GovSim, with the highest survival rate below 54%.
We show that agents that leverage "Universalization"-based reasoning, a theory of moral thinking, are able to achieve significantly better sustainability.
arXiv Detail & Related papers (2024-04-25T15:59:16Z) - Balancing Progress and Responsibility: A Synthesis of Sustainability Trade-Offs of AI-Based Systems [8.807173854357597]
We aim to synthesize trade-offs related to sustainability in the context of integrating AI into software systems.
The study was conducted in collaboration with a Dutch financial organization.
arXiv Detail & Related papers (2024-04-05T10:11:08Z) - Literature Review of Current Sustainability Assessment Frameworks and Approaches for Organizations [10.045497511868172]
This systematic literature review explores sustainability assessment frameworks (SAFs) across diverse industries.
The review focuses on SAF design approaches including the methods used for Sustainability Indicator (SI) selection, relative importance assessment, and interdependency analysis.
arXiv Detail & Related papers (2024-03-07T18:14:52Z) - Evaluating and Improving Continual Learning in Spoken Language Understanding [58.723320551761525]
We propose an evaluation methodology that provides a unified evaluation on stability, plasticity, and generalizability in continual learning.
By employing the proposed metric, we demonstrate how introducing various knowledge distillations can improve different aspects of these three properties of the SLU model.
arXiv Detail & Related papers (2024-02-16T03:30:27Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - CHATREPORT: Democratizing Sustainability Disclosure Analysis through LLM-based Tools [10.653984116770234]
ChatReport is a novel LLM-based system to automate the analysis of corporate sustainability reports.
We make our methodology, annotated datasets, and generated analyses of 1015 reports publicly available.
arXiv Detail & Related papers (2023-07-28T18:58:16Z) - Broadening the perspective for sustainable AI: Comprehensive sustainability criteria and indicators for AI systems [0.0]
This paper takes steps towards substantiating the call for an overarching perspective on "sustainable AI".
It presents the SCAIS Framework, which contains a set of 19 sustainability criteria and 67 indicators for sustainable AI.
arXiv Detail & Related papers (2023-06-22T18:00:55Z) - On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews.
We propose to enhance the ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Strategic alignment between IT flexibility and dynamic capabilities: an empirical investigation [0.0]
This paper develops a strategic alignment model for IT flexibility and dynamic capabilities.
It empirically validates proposed hypotheses using correlation and regression analyses on a large data sample of 322 international firms.
arXiv Detail & Related papers (2021-05-18T10:37:33Z) - The Curse of Performance Instability in Analysis Datasets: Consequences, Source, and Suggestions [93.62888099134028]
We find that the performance of state-of-the-art models on Natural Language Inference (NLI) and Reading Comprehension (RC) analysis/stress sets can be highly unstable.
This raises three questions: (1) How will the instability affect the reliability of conclusions drawn from these analysis sets?
We give both theoretical explanations and empirical evidence regarding the source of the instability.
arXiv Detail & Related papers (2020-04-28T15:41:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
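As the note above says, the related-paper list is generated automatically from titles and abstracts. One plausible mechanism is ranking candidates by bag-of-words cosine similarity against the source paper's text; the site's actual method is not stated, so the snippet below is a hypothetical sketch with made-up entries.

```python
import math
import re
from collections import Counter

def bow(text):
    """Bag-of-words term counts for a piece of text."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical query and candidate titles, for illustration only.
query = "explainable natural language processing for corporate sustainability"
candidates = [
    "ESGSenticNet: knowledge base for corporate sustainability analysis",
    "Causal Fairness Analysis",
]
ranked = sorted(candidates, key=lambda p: cosine(bow(query), bow(p)),
                reverse=True)
print(ranked[0])  # the sustainability-related title ranks first
```

In practice such systems often use TF-IDF weighting or dense embeddings rather than raw counts, but the ranking-by-similarity structure is the same.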
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.