Choosing a Model, Shaping a Future: Comparing LLM Perspectives on Sustainability and its Relationship with AI
- URL: http://arxiv.org/abs/2505.14435v1
- Date: Tue, 20 May 2025 14:41:56 GMT
- Title: Choosing a Model, Shaping a Future: Comparing LLM Perspectives on Sustainability and its Relationship with AI
- Authors: Annika Bush, Meltem Aksoy, Markus Pauly, Greta Ontrup
- Abstract summary: This study systematically investigates how five state-of-the-art Large Language Models conceptualize sustainability and its relationship with AI. We administered validated, sustainability-related questionnaires - each 100 times per model - to capture response patterns and variability. Our results demonstrate that model selection could substantially influence organizational sustainability strategies.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As organizations increasingly rely on AI systems for decision support in sustainability contexts, it becomes critical to understand the inherent biases and perspectives embedded in Large Language Models (LLMs). This study systematically investigates how five state-of-the-art LLMs (Claude, DeepSeek, GPT, LLaMA, and Mistral) conceptualize sustainability and its relationship with AI. We administered validated, psychometric sustainability-related questionnaires - each 100 times per model - to capture response patterns and variability. Our findings revealed significant inter-model differences: for example, GPT exhibited skepticism about the compatibility of AI and sustainability, whereas LLaMA demonstrated extreme techno-optimism with perfect scores for several Sustainable Development Goals (SDGs). Models also diverged in attributing institutional responsibility for AI and sustainability integration, a result that holds implications for technology governance approaches. Our results demonstrate that model selection could substantially influence organizational sustainability strategies, highlighting the need for awareness of model-specific biases when deploying LLMs for sustainability-related decision-making.
Related papers
- CTRLS: Chain-of-Thought Reasoning via Latent State-Transition [57.51370433303236]
Chain-of-thought (CoT) reasoning enables large language models to break down complex problems into interpretable intermediate steps. We introduce groundingS, a framework that formulates CoT reasoning as a Markov decision process (MDP) with latent state transitions. We show improvements in reasoning accuracy, diversity, and exploration efficiency across benchmark reasoning tasks.
arXiv Detail & Related papers (2025-07-10T21:32:18Z) - Perception, Reason, Think, and Plan: A Survey on Large Multimodal Reasoning Models [79.52467430114805]
Reasoning lies at the heart of intelligence, shaping the ability to make decisions, draw conclusions, and generalize across domains. In artificial intelligence, as systems increasingly operate in open, uncertain, and multimodal environments, reasoning becomes essential for enabling robust and adaptive behavior. Large Multimodal Reasoning Models (LMRMs) have emerged as a promising paradigm, integrating modalities such as text, images, audio, and video to support complex reasoning capabilities.
arXiv Detail & Related papers (2025-05-08T03:35:23Z) - The Convergent Ethics of AI? Analyzing Moral Foundation Priorities in Large Language Models with a Multi-Framework Approach [6.0972634521845475]
This paper introduces the Priorities in Reasoning and Intrinsic Moral Evaluation (PRIME) framework. PRIME is a comprehensive methodology for analyzing moral priorities across foundational ethical dimensions. We apply this framework to six leading large language models (LLMs) through a dual-protocol approach.
arXiv Detail & Related papers (2025-04-27T14:26:48Z) - Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice [57.94036023167952]
We argue that the efforts aiming to study AI's ethical ramifications should be made in tandem with those evaluating its impacts on the environment. We propose best practices to better integrate AI ethics and sustainability in AI research and practice.
arXiv Detail & Related papers (2025-04-01T13:53:11Z) - A Survey on Post-training of Large Language Models [185.51013463503946]
Large Language Models (LLMs) have fundamentally transformed natural language processing, making them indispensable across domains ranging from conversational systems to scientific exploration. These challenges necessitate advanced post-training language models (PoLMs) to address shortcomings, such as restricted reasoning capacities, ethical uncertainties, and suboptimal domain-specific performance. This paper presents the first comprehensive survey of PoLMs, systematically tracing their evolution across five core paradigms.
arXiv Detail & Related papers (2025-03-08T05:41:42Z) - Developmental Support Approach to AI's Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning [0.0]
This study proposes an "AI Development Support" approach that supports the ethical development of AI itself. We have constructed a learning framework based on a cycle of experience, introspection, analysis, and hypothesis formation.
arXiv Detail & Related papers (2025-02-27T06:12:20Z) - An Overview of Large Language Models for Statisticians [109.38601458831545]
Large Language Models (LLMs) have emerged as transformative tools in artificial intelligence (AI). This paper explores potential areas where statisticians can make important contributions to the development of LLMs. We focus on issues such as uncertainty quantification, interpretability, fairness, privacy, watermarking, and model adaptation.
arXiv Detail & Related papers (2025-02-25T03:40:36Z) - Cooperate or Collapse: Emergence of Sustainable Cooperation in a Society of LLM Agents [101.17919953243107]
GovSim is a generative simulation platform designed to study strategic interactions and cooperative decision-making in large language models (LLMs). We find that all but the most powerful LLM agents fail to achieve a sustainable equilibrium in GovSim, with the highest survival rate below 54%. We show that agents that leverage "Universalization"-based reasoning, a theory of moral thinking, are able to achieve significantly better sustainability.
arXiv Detail & Related papers (2024-04-25T15:59:16Z) - Literature Review of Current Sustainability Assessment Frameworks and Approaches for Organizations [10.045497511868172]
This systematic literature review explores sustainability assessment frameworks (SAFs) across diverse industries.
The review focuses on SAF design approaches including the methods used for Sustainability Indicator (SI) selection, relative importance assessment, and interdependency analysis.
arXiv Detail & Related papers (2024-03-07T18:14:52Z) - Assessing the Sustainability and Trustworthiness of Federated Learning Models [6.821579077084753]
The European Commission's AI-HLEG group has highlighted the importance of sustainable AI for trustworthy AI. This work introduces the sustainability pillar to the trustworthy FL taxonomy, making it the first to address all AI-HLEG requirements. An algorithm is developed to evaluate the trustworthiness of FL models, incorporating sustainability considerations.
arXiv Detail & Related papers (2023-10-31T13:14:43Z) - Measuring and Improving Chain-of-Thought Reasoning in Vision-Language Models [61.28463542324576]
Vision-language models (VLMs) have recently demonstrated strong efficacy as visual assistants that can generate human-like outputs.
We evaluate existing state-of-the-art VLMs and find that even the best-performing model is unable to demonstrate strong visual reasoning capabilities and consistency.
We propose a two-stage training framework aimed at improving both the reasoning performance and consistency of VLMs.
arXiv Detail & Related papers (2023-09-08T17:49:44Z) - Broadening the perspective for sustainable AI: Comprehensive sustainability criteria and indicators for AI systems [0.0]
This paper takes steps towards substantiating the call for an overarching perspective on "sustainable AI". It presents the SCAIS Framework, which contains a set of 19 sustainability criteria for sustainable AI and 67 indicators.
arXiv Detail & Related papers (2023-06-22T18:00:55Z)