Approaches to Artificial General Intelligence: An Analysis
- URL: http://arxiv.org/abs/2202.03153v1
- Date: Sat, 29 Jan 2022 05:21:09 GMT
- Title: Approaches to Artificial General Intelligence: An Analysis
- Authors: Soumil Rathi
- Abstract summary: This paper is an analysis of the different methods proposed to achieve AGI, including Human Brain Emulation, AIXI and Integrated Cognitive Architecture.
It was concluded that while there are various methods to achieve AGI that could work, the most promising method to achieve AGI is Integrated Cognitive Architectures.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper is an analysis of the different methods proposed to achieve AGI,
including Human Brain Emulation, AIXI and Integrated Cognitive Architecture.
First, the definition of AGI as used in this paper was stated, along with its
requirements. Each proposed method was then summarized and its key processes
detailed, showing how it functions. Next, each method was analyzed, taking into
consideration factors such as technological requirements, computational
ability, and adequacy to those requirements. It was concluded that while there are various
methods to achieve AGI that could work, such as Human Brain Emulation and
Integrated Cognitive Architectures, the most promising method to achieve AGI is
Integrated Cognitive Architectures. This is because Human Brain Emulation was
found to require scanning technologies that will most likely not be available
until the 2030s, making it unlikely to be created before then. Moreover, the
Integrated Cognitive Architectures approach has reduced computational
requirements and functionality well suited to general intelligence, making it
the most likely way to achieve AGI.
Related papers
- Development of an Adaptive Multi-Domain Artificial Intelligence System Built using Machine Learning and Expert Systems Technologies [0.0]
An artificial general intelligence (AGI) has been an elusive goal in artificial intelligence (AI) research for some time.
An AGI would have the capability, like a human, to be exposed to a new problem domain, learn about it and then use reasoning processes to make decisions.
This paper presents a small step towards producing an AGI.
arXiv Detail & Related papers (2024-06-17T07:21:44Z) - Pangu-Agent: A Fine-Tunable Generalist Agent with Structured Reasoning [50.47568731994238]
A key method for creating Artificial Intelligence (AI) agents is Reinforcement Learning (RL).
This paper presents a general framework model for integrating and learning structured reasoning into AI agents' policies.
arXiv Detail & Related papers (2023-12-22T17:57:57Z) - Levels of AGI for Operationalizing Progress on the Path to AGI [64.59151650272477]
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors.
This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI.
arXiv Detail & Related papers (2023-11-04T17:44:58Z) - Navigating the Complexity of Generative AI Adoption in Software Engineering [6.190511747986327]
The adoption patterns of Generative Artificial Intelligence (AI) tools within software engineering are investigated.
Influencing factors at the individual, technological, and societal levels are analyzed.
arXiv Detail & Related papers (2023-07-12T11:05:19Z) - OpenAGI: When LLM Meets Domain Experts [51.86179657467822]
Human Intelligence (HI) excels at combining basic skills to solve complex tasks.
This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents.
We introduce OpenAGI, an open-source platform designed for solving multi-step, real-world tasks.
arXiv Detail & Related papers (2023-04-10T03:55:35Z) - Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z) - A curated, ontology-based, large-scale knowledge graph of artificial intelligence tasks and benchmarks [4.04540578484476]
Intelligence Task Ontology and Knowledge Graph (ITO) is a comprehensive, richly structured, and manually curated resource on artificial intelligence tasks, benchmark results, and performance metrics.
The goal of ITO is to enable precise and network-based analyses of the global landscape of AI tasks and capabilities.
arXiv Detail & Related papers (2021-10-04T13:25:53Z) - The whole brain architecture approach: Accelerating the development of artificial general intelligence by referring to the brain [1.637145148171519]
It is difficult for an individual to design a software program that corresponds to the entire brain.
The whole-brain architecture approach divides the brain-inspired AGI development process into the task of designing the brain reference architecture.
This study proposes the Structure-constrained Interface Decomposition (SCID) method, which is a hypothesis-building method for creating a hypothetical component diagram.
arXiv Detail & Related papers (2021-03-06T04:58:12Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - A Metamodel and Framework for AGI [3.198144010381572]
We introduce the Deep Fusion Reasoning Engine (DFRE), which implements a knowledge-preserving metamodel and framework for constructing applied AGI systems.
DFRE exhibits some important fundamental knowledge properties such as clear distinctions between symmetric and antisymmetric relations.
Our experiments show that the proposed framework achieves 94% accuracy on average on unsupervised object detection and recognition.
arXiv Detail & Related papers (2020-08-28T23:34:21Z) - A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.