Why We Don't Have AGI Yet
- URL: http://arxiv.org/abs/2308.03598v4
- Date: Tue, 19 Sep 2023 12:43:54 GMT
- Title: Why We Don't Have AGI Yet
- Authors: Peter Voss and Mladjan Jovanovic
- Abstract summary: The original vision of AI was re-articulated in 2002 via the term 'Artificial General Intelligence' or AGI.
This vision is to build 'Thinking Machines' - computer systems that can learn, reason, and solve problems similar to the way humans do.
While several large-scale efforts have nominally been working on AGI, the field of pure focused AGI development has not been well funded or promoted.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The original vision of AI was re-articulated in 2002 via the term 'Artificial
General Intelligence' or AGI. This vision is to build 'Thinking Machines' -
computer systems that can learn, reason, and solve problems similar to the way
humans do. This is in stark contrast to the 'Narrow AI' approach practiced by
almost everyone in the field over the past several decades. While several large-scale
efforts have nominally been working on AGI (most notably DeepMind), the field
of pure focused AGI development has not been well funded or promoted. This is
surprising given the fantastic value that true AGI can bestow on humanity. In
addition to the dearth of effort in this field, there are also several
theoretical and methodical missteps that are hampering progress. We highlight
why purely statistical approaches are unlikely to lead to AGI, and identify
several crucial cognitive abilities required to achieve human-like adaptability
and autonomous learning. We conclude with a survey of socio-technical factors
that have undoubtedly slowed progress towards AGI.
Related papers
- "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z) - Development of an Adaptive Multi-Domain Artificial Intelligence System Built using Machine Learning and Expert Systems Technologies [0.0]
An artificial general intelligence (AGI) has been an elusive goal in artificial intelligence (AI) research for some time.
An AGI would have the capability, like a human, to be exposed to a new problem domain, learn about it and then use reasoning processes to make decisions.
This paper presents a small step towards producing an AGI.
arXiv Detail & Related papers (2024-06-17T07:21:44Z) - How Far Are We From AGI [15.705756259264932]
The evolution of artificial intelligence (AI) has profoundly impacted human society, driving significant advancements in multiple sectors.
Yet, the escalating demands on AI have highlighted the limitations of AI's current offerings, catalyzing a movement towards Artificial General Intelligence (AGI).
AGI, distinguished by its ability to execute diverse real-world tasks with efficiency and effectiveness comparable to human intelligence, reflects a paramount milestone in AI evolution.
This paper delves into the pivotal questions of our proximity to AGI and the strategies necessary for its realization through extensive surveys, discussions, and original perspectives.
arXiv Detail & Related papers (2024-05-16T17:59:02Z) - Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z) - Concepts is All You Need: A More Direct Path to AGI [0.0]
Little progress has been made toward AGI (Artificial General Intelligence) since the term was coined some 20 years ago.
Here we outline an architecture and development plan, together with some preliminary results, that offers a much more direct path to full Human-Level AI (HLAI)/AGI.
arXiv Detail & Related papers (2023-09-04T14:14:41Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to a deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - When Brain-inspired AI Meets AGI [40.96159978312796]
We provide a comprehensive overview of brain-inspired AI from the perspective of Artificial General Intelligence.
We begin with the current progress in brain-inspired AI and its extensive connection with AGI.
We then cover the important characteristics for both human intelligence and AGI.
arXiv Detail & Related papers (2023-03-28T12:46:38Z) - The Alignment Problem from a Deep Learning Perspective [3.9772843346304763]
We argue that, without substantial effort to prevent it, AGIs could learn to pursue goals that are in conflict with human interests.
If trained like today's most capable models, AGIs could learn to act deceptively to receive higher reward.
We briefly outline how the deployment of misaligned AGIs might irreversibly undermine human control over the world.
arXiv Detail & Related papers (2022-08-30T02:12:47Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.