Abstraction and Analogy-Making in Artificial Intelligence
- URL: http://arxiv.org/abs/2102.10717v1
- Date: Mon, 22 Feb 2021 00:12:48 GMT
- Title: Abstraction and Analogy-Making in Artificial Intelligence
- Authors: Melanie Mitchell
- Abstract summary: No current AI system is anywhere close to being capable of forming humanlike abstractions or analogies.
This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conceptual abstraction and analogy-making are key abilities underlying
humans' abilities to learn, reason, and robustly adapt their knowledge to new
domains. Despite a long history of research on constructing AI systems with
these abilities, no current AI system comes close to being capable of
forming humanlike abstractions or analogies. This paper reviews the advantages
and limitations of several approaches toward this goal, including symbolic
methods, deep learning, and probabilistic program induction. The paper
concludes with several proposals for designing challenge tasks and evaluation
measures in order to make quantifiable and generalizable progress in this area.
Related papers
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z) - From Reals to Logic and Back: Inventing Symbolic Vocabularies, Actions,
and Models for Planning from Raw Data [20.01856556195228]
This paper presents the first approach for autonomously learning logic-based relational representations for abstract states and actions.
The learned representations constitute auto-invented PDDL-like domain models.
Empirical results in deterministic settings show that powerful abstract representations can be learned from just a handful of robot trajectories.
arXiv Detail & Related papers (2024-02-19T06:28:21Z) - A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Evaluating Understanding on Conceptual Abstraction Benchmarks [0.0]
A long-held objective in AI is to build systems that understand concepts in a humanlike way.
We argue that understanding a concept requires the ability to use it in varied contexts.
Our concept-based approach to evaluation reveals information about AI systems that conventional test sets would have left hidden.
arXiv Detail & Related papers (2022-06-28T17:52:46Z) - A Critical Review of Inductive Logic Programming Techniques for
Explainable AI [9.028858411921906]
Inductive Logic Programming (ILP) is a subfield of symbolic artificial intelligence.
ILP generates explainable first-order clausal theories from examples and background knowledge.
Existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noise and disturbances.
arXiv Detail & Related papers (2021-12-31T06:34:32Z) - Conceptual Modeling and Artificial Intelligence: Mutual Benefits from
Complementary Worlds [0.0]
We are interested in tackling the intersection of the two, thus far mostly isolated, disciplines of CM and AI.
The workshop embraces the assumption that manifold mutual benefits can be realized by i) investigating what Conceptual Modeling (CM) can contribute to AI, and ii) vice versa.
arXiv Detail & Related papers (2021-10-16T18:42:09Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Thinking Fast and Slow in AI [38.8581204791644]
This paper proposes a research direction to advance AI which draws inspiration from cognitive theories of human decision making.
The premise is that if we gain insights about the causes of some human capabilities that are still lacking in AI, we may obtain similar capabilities in an AI system.
arXiv Detail & Related papers (2020-10-12T20:10:05Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)