The Construction of Reality in an AI: A Review
- URL: http://arxiv.org/abs/2302.05448v1
- Date: Fri, 3 Feb 2023 22:52:17 GMT
- Title: The Construction of Reality in an AI: A Review
- Authors: Jeffrey W. Johnston
- Abstract summary: This paper aims to increase awareness of constructivist AI implementations.
It builds on Guerin's 2008 survey, "Learning Like a Baby: A Survey of AI approaches."
The focus is on knowledge representations and learning algorithms that have been used in practice, viewed through the lens of Piaget's schemas.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI constructivism as inspired by Jean Piaget, described and surveyed by Frank Guerin, and representatively implemented by Gary Drescher seeks to create algorithms and knowledge structures that enable agents to acquire, maintain, and apply a deep understanding of the environment through sensorimotor interactions. This paper aims to increase awareness of constructivist AI implementations to encourage greater progress toward enabling lifelong learning by machines. It builds on Guerin's 2008 "Learning Like a Baby: A Survey of AI approaches." After briefly recapitulating that survey, it summarizes subsequent progress by the Guerin referents, numerous works not covered by Guerin (or found in other surveys), and relevant efforts in related areas. The focus is on knowledge representations and learning algorithms that have been used in practice viewed through lenses of Piaget's schemas, adaptation processes, and staged development. The paper concludes with a preview of a simple framework for constructive AI being developed by the author that parses concepts from sensory input and stores them in a semantic memory network linked to episodic data. Extensive references are provided.
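To make the knowledge structures named above concrete, here is a minimal, illustrative sketch (not code from the paper) of two ideas the abstract highlights: a Drescher-style schema treated as a context-action-result triple with an observed reliability, and a small semantic memory whose concept nodes link back to the episodic records they were parsed from. All class and field names are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field

# Illustrative only: names and structure are assumptions, not the representation
# used in Johnston's framework or in Drescher's schema mechanism.

@dataclass
class Schema:
    """Piaget/Drescher-style schema: when `context` holds and `action` is taken,
    `result` is expected, with a reliability estimated from experience."""
    context: frozenset   # sensory conditions assumed to hold
    action: str          # primitive or composite action
    result: frozenset    # predicted outcome conditions
    successes: int = 0
    trials: int = 0

    @property
    def reliability(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

    def update(self, outcome: frozenset) -> None:
        """Adjust the schema's statistics after one sensorimotor trial."""
        self.trials += 1
        if self.result <= outcome:      # the predicted result was observed
            self.successes += 1

@dataclass
class Concept:
    """Node in a semantic memory network, linked to the episodes that support it."""
    name: str
    links: dict = field(default_factory=dict)     # relation -> set of concept names
    episodes: list = field(default_factory=list)  # indices of supporting episodes

class SemanticMemory:
    def __init__(self):
        self.concepts = {}        # name -> Concept
        self.episodic_store = []  # raw sensory episodes, kept in arrival order

    def observe(self, concept_name, episode, relations=None):
        """Parse a concept from sensory input, store the episode, and link the two."""
        episode_id = len(self.episodic_store)
        self.episodic_store.append(episode)
        node = self.concepts.setdefault(concept_name, Concept(concept_name))
        node.episodes.append(episode_id)
        for relation, target in (relations or {}).items():
            node.links.setdefault(relation, set()).add(target)

# Example: a grasp schema refined over two trials, and a concept linked to an episode.
grasp = Schema(context=frozenset({"object_in_reach"}), action="grasp",
               result=frozenset({"object_held"}))
grasp.update(frozenset({"object_held"}))      # success
grasp.update(frozenset({"object_dropped"}))   # failure
print(f"grasp reliability: {grasp.reliability:.2f}")

memory = SemanticMemory()
memory.observe("ball", {"vision": "red sphere", "touch": "smooth"},
               relations={"is_a": "toy"})
print(memory.concepts["ball"].links, memory.concepts["ball"].episodes)
```

Read this way, the reliability statistic is the natural hook for Piagetian adaptation: schemas that keep predicting well are reused in new contexts (assimilation), while unreliable ones become candidates for revision or splitting (accommodation).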
Related papers
- Combining Cognitive and Generative AI for Self-explanation in Interactive AI Agents [1.1259354267881174]
This study investigates the convergence of cognitive AI and generative AI for self-explanation in interactive AI agents such as VERA.
From a cognitive AI viewpoint, we endow VERA with a functional model of its own design, knowledge, and reasoning represented in the Task-Method-Knowledge (TMK) language.
From the perspective of generative AI, we use ChatGPT, LangChain, and Chain-of-Thought to answer user questions based on the VERA TMK model (a rough illustrative sketch of this pattern appears after this list).
arXiv Detail & Related papers (2024-07-25T18:46:11Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- SOK-Bench: A Situated Video Reasoning Benchmark with Aligned Open-World Knowledge [60.76719375410635]
We propose a new benchmark (SOK-Bench) consisting of 44K questions and 10K situations with instance-level annotations depicted in the videos.
The reasoning process is required to understand and apply situated knowledge and general knowledge for problem-solving.
We generate associated question-answer pairs and reasoning processes, and finally conduct manual reviews for quality assurance.
arXiv Detail & Related papers (2024-05-15T21:55:31Z)
- The Artificial Intelligence Ontology: LLM-assisted construction of AI concept hierarchies [0.7796141041639462]
The Artificial Intelligence Ontology (AIO) is a systematization of artificial intelligence (AI) concepts, methodologies, and their interrelations.
AIO aims to address the rapidly evolving landscape of AI by providing a comprehensive framework that encompasses both technical and ethical aspects of AI technologies.
arXiv Detail & Related papers (2024-04-03T20:08:15Z)
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z)
- Transferring Procedural Knowledge across Commonsense Tasks [17.929737518694616]
We study the ability of AI models to transfer procedural knowledge to novel narrative tasks in a transparent manner.
We design LEAP: a comprehensive framework that integrates state-of-the-art modeling architectures, training regimes, and augmentation strategies.
Our experiments with in- and out-of-domain tasks reveal insights into the interplay of different architectures, training regimes, and augmentation strategies.
arXiv Detail & Related papers (2023-04-26T23:24:50Z)
- How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study [2.0841728192954663]
Generative Artificial Intelligence (AI) models have the potential to revolutionize Statistical Process Control (SPC) practice, learning, and research.
These tools are in the early stages of development and can be easily misused or misunderstood.
We explore ChatGPT's ability to provide code, explain basic concepts, and create knowledge related to SPC practice, learning, and research.
arXiv Detail & Related papers (2023-02-17T15:48:37Z)
- elBERto: Self-supervised Commonsense Learning for Question Answering [131.51059870970616]
We propose a Self-supervised Bidirectional Representation Learning of Commonsense framework, which is compatible with off-the-shelf QA model architectures.
The framework comprises five self-supervised tasks to force the model to fully exploit the additional training signals from contexts containing rich commonsense.
elBERto achieves substantial improvements on out-of-paragraph and no-effect questions where simple lexical similarity comparison does not help.
arXiv Detail & Related papers (2022-03-17T16:23:45Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
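Returning to the VERA entry above: the following is a rough, hypothetical sketch of the pattern it describes, in which a Task-Method-Knowledge model is serialized into a chain-of-thought prompt for an LLM. It is not the paper's implementation; the TMK fields, the prompt wording, and the `call_llm` stub are assumptions made only for illustration.

```python
# Hypothetical sketch: answering user questions from a TMK-style model with an LLM.
# The TMK fields and prompt wording are illustrative assumptions, not VERA's format.

tmk_model = {
    "task": "Predict population changes in an ecological model",
    "methods": [
        "Decompose the ecosystem into species and interactions",
        "Simulate the interactions over discrete time steps",
    ],
    "knowledge": [
        "Foxes prey on rabbits",
        "The rabbit population grows logistically in the absence of predation",
    ],
}

def build_prompt(model: dict, question: str) -> str:
    """Serialize the TMK model and request a step-by-step (chain-of-thought) answer."""
    lines = [f"Task: {model['task']}", "Methods:"]
    lines += [f"- {m}" for m in model["methods"]]
    lines.append("Knowledge:")
    lines += [f"- {k}" for k in model["knowledge"]]
    lines.append(f"Using only the model above, answer step by step: {question}")
    return "\n".join(lines)

def call_llm(prompt: str) -> str:
    """Stub standing in for whichever LLM client is available (e.g. via LangChain)."""
    raise NotImplementedError("plug in an LLM client here")

if __name__ == "__main__":
    prompt = build_prompt(tmk_model, "Why does the rabbit population oscillate?")
    print(prompt)                  # inspect the serialized model plus the question
    # answer = call_llm(prompt)    # uncomment once a client is wired in
```

The point of the sketch is only that the agent's self-model, rather than the LLM's parametric knowledge, is meant to ground the explanation; everything specific to VERA's actual prompting and grounding is in the cited paper.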