Interpretable deep-learning models to help achieve the Sustainable
Development Goals
- URL: http://arxiv.org/abs/2108.10744v1
- Date: Tue, 24 Aug 2021 13:56:15 GMT
- Title: Interpretable deep-learning models to help achieve the Sustainable
Development Goals
- Authors: Ricardo Vinuesa, Beril Sirmacek
- Abstract summary: We discuss our insights into interpretable artificial-intelligence (AI) models, and how they are essential in the context of developing ethical AI systems.
We highlight the potential of extracting truly-interpretable models from deep-learning methods, for instance via symbolic models obtained through inductive biases.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We discuss our insights into interpretable artificial-intelligence (AI)
models, and how they are essential in the context of developing ethical AI
systems, as well as data-driven solutions compliant with the Sustainable
Development Goals (SDGs). We highlight the potential of extracting
truly-interpretable models from deep-learning methods, for instance via
symbolic models obtained through inductive biases, to ensure a sustainable
development of AI.
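The abstract's idea of extracting a symbolic model from a black-box predictor can be illustrated with a minimal sketch: fit a sparse linear combination of candidate symbolic terms (the inductive bias) to the black-box output. This is a generic illustration, not the authors' actual method; the function and term names are hypothetical.

```python
import numpy as np

# Toy "black-box": stands in for a trained deep network we want to interpret.
def black_box(x):
    return 0.5 * x**2 + np.sin(x)

# Library of candidate symbolic terms -- the inductive bias.
terms = {"x": lambda x: x,
         "x^2": lambda x: x**2,
         "sin(x)": np.sin,
         "cos(x)": np.cos}

x = np.linspace(-3.0, 3.0, 200)
y = black_box(x)

# Least-squares fit of the black-box output over the symbolic library.
A = np.column_stack([f(x) for f in terms.values()])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Drop negligible terms to obtain a readable symbolic surrogate.
model = " + ".join(f"{c:.2f}*{name}"
                   for name, c in zip(terms, coef) if abs(c) > 1e-3)
print(model)  # -> 0.50*x^2 + 1.00*sin(x)
```

Unlike the opaque network, the recovered expression can be inspected, audited, and checked against domain knowledge.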
Related papers
- World Models for Cognitive Agents: Transforming Edge Intelligence in Future Networks [55.90051810762702]
We present a comprehensive overview of world models, highlighting their architecture, training paradigms, and applications across prediction, generation, planning, and causal reasoning.
We also propose Wireless Dreamer, a novel world-model-based reinforcement learning framework tailored for wireless edge intelligence optimization.
arXiv Detail & Related papers (2025-05-31T06:43:00Z)
- Choosing a Model, Shaping a Future: Comparing LLM Perspectives on Sustainability and its Relationship with AI [0.0]
This study systematically investigates how five state-of-the-art Large Language Models conceptualize sustainability and its relationship with AI.
We administered validated, sustainability-related questionnaires - each 100 times per model - to capture response patterns and variability.
Our results demonstrate that model selection could substantially influence organizational sustainability strategies.
arXiv Detail & Related papers (2025-05-20T14:41:56Z)
- Fostering Self-Directed Growth with Generative AI: Toward a New Learning Analytics Framework [0.0]
This study introduces a novel conceptual framework integrating Generative Artificial Intelligence and Learning Analytics to cultivate Self-Directed Growth.
The A2PL model reconceptualizes the interplay of learner aspirations, complex thinking, and self-assessment within GAI-supported environments.
arXiv Detail & Related papers (2025-04-29T15:19:48Z)
- Developmental Support Approach to AI's Autonomous Growth: Toward the Realization of a Mutually Beneficial Stage Through Experiential Learning [0.0]
This study proposes an "AI Development Support" approach that supports the ethical development of AI itself.
We have constructed a learning framework based on a cycle of experience, introspection, analysis, and hypothesis formation.
arXiv Detail & Related papers (2025-02-27T06:12:20Z)
- Data and System Perspectives of Sustainable Artificial Intelligence [43.21672481390316]
Sustainable AI is a subfield of AI that aims to reduce the environmental impact of AI and achieve sustainability.
In this article, we discuss current issues and opportunities, together with example solutions for addressing them.
arXiv Detail & Related papers (2025-01-13T17:04:23Z)
- On the Modeling Capabilities of Large Language Models for Sequential Decision Making [52.128546842746246]
Large pretrained models are showing increasingly better performance in reasoning and planning tasks.
We evaluate their ability to produce decision-making policies, either directly, by generating actions, or indirectly.
In environments with unfamiliar dynamics, we explore how fine-tuning LLMs with synthetic data can significantly improve their reward modeling capabilities.
arXiv Detail & Related papers (2024-10-08T03:12:57Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Explanation, Debate, Align: A Weak-to-Strong Framework for Language Model Generalization [0.6629765271909505]
This paper introduces a novel approach to model alignment through weak-to-strong generalization in the context of language models.
Our results suggest that this facilitation-based approach not only enhances model performance but also provides insights into the nature of model alignment.
arXiv Detail & Related papers (2024-09-11T15:16:25Z)
- Explainability Paths for Sustained Artistic Practice with AI [0.0]
We explore several paths to improve explainability, drawing primarily from our research-creation practice in training and implementing generative audio models.
We highlight human agency over training materials, the viability of small-scale datasets, the facilitation of the iterative creative process, and the integration of interactive machine learning as a mapping tool.
Importantly, these steps aim to enhance human agency over generative AI systems not only during model inference, but also when curating and preprocessing training data as well as during the training phase of models.
arXiv Detail & Related papers (2024-07-21T16:48:14Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
This article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z)
- A Vision for Operationalising Diversity and Inclusion in AI [5.4897262701261225]
This study seeks to envision the operationalization of the ethical imperatives of diversity and inclusion (D&I) within AI ecosystems.
A significant challenge in AI development is the effective operationalization of D&I principles.
This paper proposes a vision of a framework for developing a tool utilizing persona-based simulation by Generative AI (GenAI).
arXiv Detail & Related papers (2023-12-11T02:44:39Z)
- A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
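The core idea described for GBPGR, using symbolic reasoning to refine and correct neural predictions, can be sketched generically: apply a logical constraint as a mask over the network's class probabilities and renormalize. This is an illustrative toy, not the paper's actual framework; the classes and rule are invented for the example.

```python
import numpy as np

# Hypothetical neural predictions: class probabilities over three labels.
classes = ["bird", "plane", "kite"]
neural_probs = np.array([0.40, 0.45, 0.15])

# Symbolic rule (domain knowledge): the object flaps its wings,
# which logically rules out "plane". Encoded as a 0/1 mask.
rule_mask = np.array([1.0, 0.0, 1.0])

# Refine: zero out predictions the rule forbids, then renormalize.
refined = neural_probs * rule_mask
refined /= refined.sum()

print({c: float(p) for c, p in zip(classes, refined.round(3))})
# -> {'bird': 0.727, 'plane': 0.0, 'kite': 0.273}
```

The symbolic step never adds probability mass of its own; it only redistributes the neural model's confidence among logically admissible outcomes.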
arXiv Detail & Related papers (2023-09-16T09:15:37Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Sustainable Artificial Intelligence through Continual Learning [4.243356707599486]
We identify Continual Learning as a promising approach towards the design of systems compliant with the Sustainable AI principles.
While Sustainable AI outlines general desiderata for ethical applications, Continual Learning provides means to put such desiderata into practice.
arXiv Detail & Related papers (2021-11-17T22:43:13Z)
- Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z)
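The combination of code complexity with curriculum learning mentioned in the last entry can be sketched as follows: score each training sample with a crude complexity proxy and present samples in order of increasing difficulty. This is a hedged illustration under assumed details; the keyword-count proxy and the samples are invented, and the paper's actual complexity metric may differ.

```python
# Curriculum learning over code samples, ordered by a rough complexity proxy
# (a keyword count standing in for cyclomatic complexity).
BRANCH_KEYWORDS = ("if", "for", "while", "case", "&&", "||")

def complexity(code: str) -> int:
    # 1 + number of branching constructs, as in cyclomatic complexity.
    return 1 + sum(code.count(k) for k in BRANCH_KEYWORDS)

samples = [
    "int add(int a,int b){return a+b;}",
    "void f(int n){for(int i=0;i<n;i++){if(i%2){g(i);}}}",
    "int abs(int x){if(x<0){return -x;}return x;}",
]

# Train on simple code first, complex code later.
curriculum = sorted(samples, key=complexity)
for s in curriculum:
    print(complexity(s), s)
```

A real pipeline would feed `curriculum` to the trainer in mini-batches, gradually exposing the model to harder examples rather than shuffling uniformly.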
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.