$EvoAl^{2048}$
- URL: http://arxiv.org/abs/2408.16780v1
- Date: Thu, 15 Aug 2024 21:06:18 GMT
- Title: $EvoAl^{2048}$
- Authors: Bernhard J. Berger, Christina Plump, Rolf Drechsler
- Abstract summary: We report on applying a model-driven optimisation to search for an interpretable and explainable policy.
This paper describes a solution to the GECCO'24 Interpretable Control Competition using the open-source software EvoAl.
- Score: 2.5526759890882764
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: As AI solutions enter safety-critical products, the explainability and interpretability of solutions generated by AI products become increasingly important. In the long term, such explanations are the key to gaining users' acceptance of AI-based systems' decisions. We report on applying a model-driven optimisation to search for an interpretable and explainable policy that solves the game 2048. This paper describes a solution to the GECCO'24 Interpretable Control Competition using the open-source software EvoAl. We aimed to develop an approach for creating interpretable policies that are easy to adapt to new ideas.
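To make the idea of an "interpretable policy" for 2048 concrete, the sketch below shows one common, human-readable form such a policy can take: a fixed move-priority rule over the four directions. This is an illustrative assumption, not the paper's actual EvoAl-generated policy; all function names and the particular priority order are hypothetical.

```python
def slide_left(row):
    """Slide and merge one row to the left under standard 2048 rules."""
    tiles = [t for t in row if t != 0]
    merged, i = [], 0
    while i < len(tiles):
        if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
            merged.append(tiles[i] * 2)  # equal neighbours merge once
            i += 2
        else:
            merged.append(tiles[i])
            i += 1
    return merged + [0] * (len(row) - len(merged))

def apply_move(board, move):
    """Return the board after sliding in one of 'left', 'right', 'up', 'down'."""
    if move == "left":
        return [slide_left(r) for r in board]
    if move == "right":
        return [slide_left(r[::-1])[::-1] for r in board]
    if move in ("up", "down"):
        cols = [list(c) for c in zip(*board)]
        moved = [slide_left(c) if move == "up" else slide_left(c[::-1])[::-1]
                 for c in cols]
        return [list(r) for r in zip(*moved)]
    raise ValueError(move)

def priority_policy(board, priority=("left", "down", "right", "up")):
    """Play the first move in the priority order that changes the board."""
    for move in priority:
        if apply_move(board, move) != board:
            return move
    return None  # no legal move remains: game over
```

A human can audit this policy at a glance (always prefer left, then down, ...), which is the kind of transparency the competition asks for; the search problem is then over the space of such readable rules rather than over opaque network weights.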
Related papers
- AI Olympics challenge with Evolutionary Soft Actor Critic [5.076263094490715]
Our solution is based on a Model-free Deep Reinforcement Learning approach combined with an evolutionary strategy.
We will briefly describe the algorithms that have been used and then provide details of the approach.
arXiv Detail & Related papers (2024-09-02T09:34:18Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- PresAIse, A Prescriptive AI Solution for Enterprises [6.523929486550928]
This paper outlines an initiative from IBM Research, aiming to address some of these challenges by offering a suite of prescriptive AI solutions.
The solution suite includes scalable causal inference methods, interpretable decision-making approaches, and the integration of large language models.
A proof-of-concept, PresAIse, demonstrates the solutions' potential by enabling non-ML experts to interact with prescriptive AI models via a natural language interface.
arXiv Detail & Related papers (2024-02-03T03:23:08Z)
- Emergent Explainability: Adding a causal chain to neural network inference [0.0]
This position paper presents a theoretical framework for enhancing explainable artificial intelligence (xAI) through emergent communication (EmCom)
We explore the novel integration of EmCom into AI systems, offering a paradigm shift from conventional associative relationships between inputs and outputs to a more nuanced, causal interpretation.
The paper discusses the theoretical underpinnings of this approach, its potential broad applications, and its alignment with the growing need for responsible and transparent AI systems.
arXiv Detail & Related papers (2024-01-29T02:28:39Z)
- IndigoVX: Where Human Intelligence Meets AI for Optimal Decision Making [0.0]
This paper defines a new approach for augmenting human intelligence with AI for optimal goal solving.
Our proposed AI, Indigo, is an acronym for Informed Numerical Decision-making through Iterative Goal-Oriented optimization.
We envisage this method being applied to games or business strategies, with the human providing strategic context and the AI offering optimal, data-driven moves.
arXiv Detail & Related papers (2023-07-21T11:54:53Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- End-to-End Learning and Intervention in Games [60.41921763076017]
We provide a unified framework for learning and intervention in games.
We propose two approaches, respectively based on explicit and implicit differentiation.
The analytical results are validated using several real-world problems.
arXiv Detail & Related papers (2020-10-26T18:39:32Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.