Understanding in Artificial Intelligence
- URL: http://arxiv.org/abs/2101.06573v1
- Date: Sun, 17 Jan 2021 02:29:50 GMT
- Title: Understanding in Artificial Intelligence
- Authors: Stefan Maetschke and David Martinez Iraola and Pieter Barnard and
Elaheh ShafieiBavani and Peter Zhong and Ying Xu and Antonio Jimeno Yepes
- Abstract summary: We show how progress has been made in benchmark development to measure the understanding capabilities of AI methods.
We also review how current methods develop understanding capabilities.
- Score: 10.113757840350727
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current Artificial Intelligence (AI) methods, most based on deep learning,
have facilitated progress in several fields, including computer vision and
natural language understanding. The progress of these AI methods is measured
using benchmarks designed to solve challenging tasks, such as visual question
answering. A question remains of how much understanding these methods leverage
and how appropriate the current benchmarks are for measuring understanding
capabilities. To answer these questions, we have analysed existing benchmarks
against a defined set of understanding capabilities, together with current
research streams. We show how progress has been made in benchmark development
to measure the understanding capabilities of AI methods, and we also review
how current methods develop understanding capabilities.
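To make the benchmark-based measurement described above concrete, here is a minimal sketch of scoring a model on a toy visual question answering set with exact-match accuracy; the toy_benchmark items and the answer_question stub are hypothetical placeholders rather than anything from the paper, and real VQA benchmarks use larger answer sets and softer scoring rules.

```python
# Minimal sketch of benchmark-style evaluation (hypothetical data and model).
# The loop below only captures the basic "score a model on held-out questions"
# protocol that VQA-style benchmarks follow.

toy_benchmark = [
    {"image": "img_001.jpg", "question": "How many dogs are in the picture?", "answer": "2"},
    {"image": "img_002.jpg", "question": "What color is the car?", "answer": "red"},
]

def answer_question(image_path: str, question: str) -> str:
    """Placeholder for an actual vision-language model."""
    return "red"  # dummy prediction so the sketch runs end to end

def exact_match_accuracy(benchmark) -> float:
    # Count predictions that match the reference answer after normalisation.
    correct = sum(
        1 for item in benchmark
        if answer_question(item["image"], item["question"]).strip().lower()
        == item["answer"].strip().lower()
    )
    return correct / len(benchmark)

if __name__ == "__main__":
    print(f"Accuracy: {exact_match_accuracy(toy_benchmark):.2f}")
```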
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that many shortcomings of current AI systems stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition remains underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on knowledge tagging tasks for math questions.
By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the potential of LLMs of different sizes.
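As a rough illustration of this setup (not the authors' implementation), the sketch below retrieves the most similar labelled questions as few-shot demonstrations, using plain token overlap as a stand-in for the learned retriever, and assembles a tagging prompt; LABELLED_POOL and call_llm are hypothetical placeholders.

```python
# Hedged sketch: few-shot knowledge tagging with retrieved demonstrations.
# Token overlap stands in for a learned demonstration retriever; call_llm is a
# dummy placeholder for any LLM client.

LABELLED_POOL = [
    {"question": "Solve x^2 - 5x + 6 = 0.", "tag": "quadratic equations"},
    {"question": "Find the derivative of sin(x)*x.", "tag": "differentiation"},
    {"question": "What is the area of a circle with radius 3?", "tag": "circle geometry"},
]

def overlap(a: str, b: str) -> int:
    # Crude lexical similarity between two questions.
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve_demonstrations(query: str, k: int = 2):
    # Pick the k labelled questions most similar to the query.
    return sorted(LABELLED_POOL, key=lambda d: overlap(query, d["question"]), reverse=True)[:k]

def build_prompt(query: str) -> str:
    # Prepend retrieved demonstrations, then ask for a tag for the new question.
    lines = ["Tag each math question with its knowledge concept.", ""]
    for d in retrieve_demonstrations(query):
        lines += [f"Question: {d['question']}", f"Tag: {d['tag']}", ""]
    lines += [f"Question: {query}", "Tag:"]
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return "quadratic equations"

if __name__ == "__main__":
    print(call_llm(build_prompt("Factorise x^2 + 7x + 12.")))
```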
arXiv Detail & Related papers (2024-06-19T23:30:01Z) - Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z) - Toward enriched Cognitive Learning with XAI [44.99833362998488]
We introduce an intelligent system (CL-XAI) for Cognitive Learning which is supported by artificial intelligence (AI) tools.
The use of CL-XAI is illustrated with a game-inspired virtual use case where learners tackle problems to enhance problem-solving skills.
arXiv Detail & Related papers (2023-12-19T16:13:47Z) - Helpful, Misleading or Confusing: How Humans Perceive Fundamental
Building Blocks of Artificial Intelligence Explanations [11.667611038005552]
We take a step back from sophisticated predictive algorithms and look into explainability of simple decision-making models.
We aim to assess how people perceive the comprehensibility of their different representations.
This allows us to capture how diverse stakeholders judge intelligibility of fundamental concepts that more elaborate artificial intelligence explanations are built from.
arXiv Detail & Related papers (2023-03-02T03:15:35Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Towards Benchmarking Explainable Artificial Intelligence Methods [0.0]
We use philosophy of science theories as an analytical lens with the goal of revealing what can be expected, and more importantly what cannot be expected, from methods that aim to explain decisions promoted by a neural network.
By conducting a case study, we investigate the performance of a selection of explainability methods across two mundane domains, animals and headgear.
We lay bare that the usefulness of these methods relies on human domain knowledge and our ability to understand, generalise and reason.
arXiv Detail & Related papers (2022-08-25T14:28:30Z) - Process Knowledge-infused Learning for Suicidality Assessment on Social
Media [14.362199192484006]
Current methods rely on the traditional pipeline of predicting labels from data.
Post hoc explanations of the data-to-label prediction produced by explainable AI (XAI) models leave much to be desired for end-users.
Process Knowledge-infused Learning (PK-iL) utilizes structured process knowledge that explicitly explains the underlying prediction process in a way that makes sense to end-users.
arXiv Detail & Related papers (2022-04-26T19:43:41Z) - Towards Explainable Artificial Intelligence in Banking and Financial
Services [0.0]
We study and analyze the recent work done in Explainable Artificial Intelligence (XAI) methods and tools.
We introduce a novel XAI process, which facilitates producing explainable models while maintaining a high level of learning performance.
We develop a digital dashboard to facilitate interaction with the algorithm's results.
arXiv Detail & Related papers (2021-12-14T08:02:13Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning progress of deep neural networks.
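One simple way to realise this idea, sketched below under the assumption that a knowledge-base rule can be expressed as a differentiable penalty, is to add a rule-violation term to the training loss of a tiny classifier; the toy data, the rule, and all names are illustrative and not taken from the paper.

```python
# Hedged sketch: knowledge-guided training of a tiny logistic classifier.
# A symbolic rule from a (hypothetical) knowledge base is turned into an extra
# penalty term, so each gradient step is guided by both data and knowledge.
import math

# Toy data: features are (has_wheels, is_alive); label 1 means "vehicle".
DATA = [((1.0, 0.0), 1), ((1.0, 0.0), 1), ((0.0, 1.0), 0), ((0.0, 1.0), 0)]

def predict(w, b, x):
    # Sigmoid of a linear score.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def train(epochs=200, lr=0.5, lam=1.0):
    # Knowledge-base rule: something alive is never a vehicle, encoded as the
    # penalty lam * is_alive * P(vehicle) added to the cross-entropy loss.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in DATA:
            p = predict(w, b, x)
            # Derivative w.r.t. the logit: cross-entropy term (p - y) plus the
            # rule penalty term lam * is_alive * p * (1 - p).
            grad = (p - y) + lam * x[1] * p * (1 - p)
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b

if __name__ == "__main__":
    w, b = train()
    print("P(vehicle | alive):", round(predict(w, b, (0.0, 1.0)), 3))
```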
arXiv Detail & Related papers (2020-03-09T15:04:07Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.