Literature Review of Current Sustainability Assessment Frameworks and
Approaches for Organizations
- URL: http://arxiv.org/abs/2403.04717v1
- Date: Thu, 7 Mar 2024 18:14:52 GMT
- Title: Literature Review of Current Sustainability Assessment Frameworks and
Approaches for Organizations
- Authors: Sarah Farahdel, Chun Wang, Anjali Awasthi
- Abstract summary: This systematic literature review explores sustainability assessment frameworks (SAFs) across diverse industries.
The review focuses on SAF design approaches including the methods used for Sustainability Indicator (SI) selection, relative importance assessment, and interdependency analysis.
- Score: 10.045497511868172
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This systematic literature review explores sustainability assessment
frameworks (SAFs) across diverse industries. The review focuses on SAF design
approaches including the methods used for Sustainability Indicator (SI)
selection, relative importance assessment, and interdependency analysis.
Various methods, including literature reviews, stakeholder interviews,
questionnaires, Pareto analysis, SMART approach, and adherence to
sustainability standards, contribute to the complex SI selection process.
Fuzzy-AHP stands out as a robust technique for assessing relative SI
importance. While dynamic sustainability and performance indices are essential,
methods like DEMATEL, VIKOR, correlation analysis, and causal models for
interdependency assessment exhibit static limitations. The review presents
strengths and limitations of SAFs, addressing gaps in design approaches and
contributing to a comprehensive understanding. The insights of this review aim
to benefit policymakers, administrators, leaders, and researchers, fostering
sustainability practices. Future research recommendations include exploring
multi-criteria decision-making models and hybrid approaches, extending
sustainability evaluation across organizational levels and supply chains.
Emphasizing adaptability to industry specifics and dynamic global adjustments
is proposed for holistic sustainability practices, further enhancing
organizational sustainability.
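The abstract singles out Fuzzy-AHP for weighting Sustainability Indicators (SIs) and DEMATEL for analysing their interdependencies. As a rough, hedged illustration of those two method families (not the specific frameworks reviewed in the paper), the sketch below computes classical crisp AHP priority weights and a DEMATEL total-relation matrix for three hypothetical indicators; all matrices and numbers are invented for the example, and the fuzzy extensions used in actual SAFs are omitted.

```python
# Minimal sketch, assuming three hypothetical sustainability indicators (SIs).
# Classical (crisp) AHP weighting and basic DEMATEL; the reviewed frameworks
# use Fuzzy-AHP and richer DEMATEL variants not shown here.
import numpy as np

# --- AHP: relative importance of three hypothetical SIs ---
# Pairwise comparison matrix on Saaty's 1-9 scale; A[i, j] = importance of SI i over SI j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

n = A.shape[0]
# Geometric-mean approximation of the principal eigenvector -> priority weights.
geo_mean = np.prod(A, axis=1) ** (1.0 / n)
weights = geo_mean / geo_mean.sum()

# Consistency check: lambda_max, consistency index, consistency ratio (random index 0.58 for n=3).
lam_max = np.mean((A @ weights) / weights)
ci = (lam_max - n) / (n - 1)
cr = ci / 0.58
print("AHP weights:", np.round(weights, 3), "CR:", round(cr, 3))

# --- DEMATEL: interdependencies among the same three SIs ---
# Direct-relation matrix of 0-4 influence scores (hypothetical expert averages).
X = np.array([
    [0.0, 3.0, 2.0],
    [1.0, 0.0, 3.0],
    [2.0, 1.0, 0.0],
])

# Normalize by the largest row/column sum, then compute the total-relation matrix
# T = D (I - D)^-1, which accumulates direct and indirect influence.
s = max(X.sum(axis=1).max(), X.sum(axis=0).max())
D = X / s
T = D @ np.linalg.inv(np.eye(n) - D)

prominence = T.sum(axis=1) + T.sum(axis=0)   # overall involvement of each SI
relation = T.sum(axis=1) - T.sum(axis=0)     # net cause (+) vs. net effect (-)
print("Prominence:", np.round(prominence, 3), "Relation:", np.round(relation, 3))
```

In a fuzzy variant, the pairwise judgments would typically be triangular fuzzy numbers that are aggregated and defuzzified around the weight calculation; the overall workflow is otherwise the same as this crisp sketch.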
Related papers
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [314.7991906491166]
Generative Foundation Models (GenFMs) have emerged as transformative tools.
Their widespread adoption raises critical concerns regarding trustworthiness across dimensions.
This paper presents a comprehensive framework to address these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z)
- Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey [92.36487127683053]
Retrieval-Augmented Generation (RAG) is an advanced technique designed to address the challenges of Artificial Intelligence-Generated Content (AIGC).
RAG provides reliable and up-to-date external knowledge, reduces hallucinations, and ensures relevant context across a wide range of tasks.
Despite RAG's success and potential, recent studies have shown that the RAG paradigm also introduces new risks, including privacy concerns, adversarial attacks, and accountability issues.
arXiv Detail & Related papers (2025-02-08T06:50:47Z)
- Using Sustainability Impact Scores for Software Architecture Evaluation [5.33605239628904]
We present an improved version of the Sustainability Impact Score (SIS).
The SIS facilitates the identification and quantification of trade-offs in terms of their sustainability impact.
Our study reveals that technical quality concerns have significant, often unrecognized impacts across sustainability dimensions.
arXiv Detail & Related papers (2025-01-28T15:00:45Z)
- Evaluating the Consistency of LLM Evaluators [9.53888551630878]
Large language models (LLMs) have shown potential as general evaluators.
However, their consistency as evaluators remains understudied, raising concerns about the reliability of LLM evaluators.
arXiv Detail & Related papers (2024-11-30T17:29:08Z)
- Advancing Sustainability via Recommender Systems: A Survey [23.364932316026973]
Human behavioral patterns and consumption paradigms have emerged as pivotal determinants in environmental degradation and climate change.
There exists an imperative need for sustainable recommender systems that incorporate sustainability principles to foster eco-conscious and socially responsible choices.
This comprehensive survey addresses this critical research gap by presenting a systematic analysis of sustainable recommender systems.
arXiv Detail & Related papers (2024-11-12T09:19:32Z)
- LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models [75.89014602596673]
Strategic reasoning requires understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly.
We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with Large Language Models.
It underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Leveraging Large Language Models for NLG Evaluation: Advances and Challenges [57.88520765782177]
Large Language Models (LLMs) have opened new avenues for assessing generated content quality, e.g., coherence, creativity, and context relevance.
We propose a coherent taxonomy for organizing existing LLM-based evaluation metrics, offering a structured framework to understand and compare these methods.
By discussing unresolved challenges, including bias, robustness, domain-specificity, and unified evaluation, this paper seeks to offer insights to researchers and advocate for fairer and more advanced NLG evaluation techniques.
arXiv Detail & Related papers (2024-01-13T15:59:09Z)
- Assessing the Sustainability and Trustworthiness of Federated Learning Models [6.821579077084753]
The European Commission's AI-HLEG group has highlighted the importance of sustainable AI for trustworthy AI.
This work introduces the sustainability pillar to the trustworthy FL taxonomy, making it the first to address all AI-HLEG requirements.
An algorithm is developed to evaluate the trustworthiness of FL models, incorporating sustainability considerations.
arXiv Detail & Related papers (2023-10-31T13:14:43Z)
- A Survey on Interpretable Cross-modal Reasoning [64.37362731950843]
Cross-modal reasoning (CMR) has emerged as a pivotal area with applications spanning from multimedia analysis to healthcare diagnostics.
This survey delves into the realm of interpretable cross-modal reasoning (I-CMR).
This survey presents a comprehensive overview of the typical methods with a three-level taxonomy for I-CMR.
arXiv Detail & Related papers (2023-09-05T05:06:48Z)
- Broadening the perspective for sustainable AI: Comprehensive sustainability criteria and indicators for AI systems [0.0]
This paper takes steps towards substantiating the call for an overarching perspective on "sustainable AI".
It presents the SCAIS Framework, which contains a set of 19 sustainability criteria for sustainable AI and 67 indicators.
arXiv Detail & Related papers (2023-06-22T18:00:55Z)
- Rethinking Model Evaluation as Narrowing the Socio-Technical Gap [47.632123167141245]
We argue that model evaluation practices must take on a critical task to cope with the challenges and responsibilities brought by this homogenization.
We urge the community to develop evaluation methods based on real-world contexts and human requirements.
arXiv Detail & Related papers (2023-06-01T00:01:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.