A new solution and concrete implementation steps for Artificial General
Intelligence
- URL: http://arxiv.org/abs/2308.09721v1
- Date: Sat, 12 Aug 2023 13:31:02 GMT
- Title: A new solution and concrete implementation steps for Artificial General
Intelligence
- Authors: Yongcong Chen, Ting Zeng and Jun Zhang
- Abstract summary: In areas that need to interact with the actual environment, such as elderly care, home nanny services, agricultural production, and vehicle driving, trial and error is expensive.
In this paper, we analyze the limitations of the technical route of large models, and by addressing these limitations, we propose solutions.
- Score: 4.320142895840622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: At present, mainstream artificial intelligence generally adopts the
technical path of "attention mechanism + deep learning" + "reinforcement
learning". It has made great progress in the field of AIGC (Artificial
Intelligence Generated Content), setting off the technical wave of large
models [2][13]. But in areas that require interaction with the actual
environment, such as elderly care, home nanny services, agricultural
production, and vehicle driving, trial and error is expensive, and a
reinforcement learning process that requires extensive trial and error is
difficult to achieve. Therefore, in order to achieve Artificial General
Intelligence (AGI) that can be applied to any field, we need to build on
existing technologies while remedying their defects, so as to further advance
the technological wave of artificial intelligence. In this paper, we analyze
the limitations of the technical route of large models and, by addressing
these limitations, propose solutions that resolve the inherent defects of
large models. We reveal, step by step, how to achieve true AGI.
Related papers
- VisualPredicator: Learning Abstract World Models with Neuro-Symbolic Predicates for Robot Planning [86.59849798539312]
We present Neuro-Symbolic Predicates, a first-order abstraction language that combines the strengths of symbolic and neural knowledge representations.
We show that our approach offers better sample complexity, stronger out-of-distribution generalization, and improved interpretability.
arXiv Detail & Related papers (2024-10-30T16:11:05Z)
- System 2 Reasoning via Generality and Adaptation [5.806160172544203]
This paper explores the limitations of existing approaches in achieving advanced System 2 reasoning.
We propose four key research directions to address these gaps.
We aim to advance the ability to generalize and adapt, bringing computational models closer to the reasoning capabilities required for Artificial General Intelligence (AGI).
arXiv Detail & Related papers (2024-10-10T12:34:25Z)
- Explanation, Debate, Align: A Weak-to-Strong Framework for Language Model Generalization [0.6629765271909505]
This paper introduces a novel approach to model alignment through weak-to-strong generalization in the context of language models.
Our results suggest that this facilitation-based approach not only enhances model performance but also provides insights into the nature of model alignment.
arXiv Detail & Related papers (2024-09-11T15:16:25Z)
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision [98.97575836717931]
Current AI alignment methodologies rely on human-provided demonstrations or judgments.
This raises a challenging research question: How can we keep improving the systems when their capabilities have surpassed the levels of humans?
arXiv Detail & Related papers (2024-03-14T15:12:38Z)
- Brain in a Vat: On Missing Pieces Towards Artificial General Intelligence in Large Language Models [83.63242931107638]
We propose four characteristics of generally intelligent agents.
We argue that active engagement with objects in the real world delivers more robust signals for forming conceptual representations.
We conclude by outlining promising future research directions in the field of artificial general intelligence.
arXiv Detail & Related papers (2023-07-07T13:58:16Z)
- Designing explainable artificial intelligence with active inference: A framework for transparent introspection and decision-making [0.0]
We discuss how active inference can be leveraged to design explainable AI systems.
We propose an architecture for explainable AI systems using active inference.
arXiv Detail & Related papers (2023-06-06T21:38:09Z)
- Procedure Planning in Instructional Videos via Contextual Modeling and Model-based Policy Learning [114.1830997893756]
This work focuses on learning a model to plan goal-directed actions in real-life videos.
We propose novel algorithms to model human behaviors through Bayesian Inference and model-based Imitation Learning.
arXiv Detail & Related papers (2021-10-05T01:06:53Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanations methods.
And second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- A Metamodel and Framework for Artificial General Intelligence From Theory to Practice [11.756425327193426]
This paper introduces a new metamodel-based knowledge representation that significantly improves autonomous learning and adaptation.
We have applied the metamodel to problems ranging from time series analysis, computer vision, and natural language understanding.
One surprising consequence of the metamodel is that it enables a new level of autonomous learning and optimal functioning for machine intelligences.
arXiv Detail & Related papers (2021-02-11T16:45:58Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.