A Finite-Time Technological Singularity Model With Artificial
Intelligence Self-Improvement
- URL: http://arxiv.org/abs/2010.01961v1
- Date: Mon, 31 Aug 2020 15:29:14 GMT
- Title: A Finite-Time Technological Singularity Model With Artificial
Intelligence Self-Improvement
- Authors: Ihor Kendiukhov
- Abstract summary: We build a model of finite-time technological singularity, assuming that artificial intelligence will replace humans as artificial intelligence engineers.
Although an infinite level of development of artificial intelligence cannot be reached in practice, this approximation is useful for several reasons.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in the development of artificial intelligence, the
acceleration of technological progress, and long-term trends in macroeconomic
dynamics increase the relevance of the technological singularity hypothesis. In
this paper, we build a model of finite-time technological singularity, assuming
that artificial intelligence will replace humans as artificial intelligence
engineers after some point in time when it is sufficiently developed. The model
implies the following: let A be the level of development of artificial
intelligence. The moment of technological singularity, n, is then defined as
the point in time at which the artificial intelligence development function
approaches infinity; thus, the singularity occurs in finite time. Although an
infinite level of development of artificial intelligence cannot be reached in
practice, this approximation is useful for several reasons, first because it
allows modeling a phase transition or a change of regime. In the model, the
intelligence growth function turns out to be hyperbolic under relatively broad
conditions, which we list and compare. We then add a stochastic term (Brownian
motion) to the model and investigate the changes in its behavior. The results
can be applied to modeling the dynamics of various processes characterized by
multiplicative growth.
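As a rough illustration of the mechanism described above, the sketch below integrates one growth law that is known to produce hyperbolic growth and a finite-time blow-up, dA/dt = c*A^beta with beta > 1, and an Euler-Maruyama version with an added Brownian term. The specific exponent, the multiplicative form of the noise, and all parameter values are assumptions made for this sketch; the paper itself derives hyperbolic growth under a broader family of conditions.
```python
# Minimal sketch of a finite-time singularity in an AI-development level A.
# The growth law dA/dt = c * A**beta (beta > 1) and all parameter values are
# illustrative assumptions, not the paper's calibration.
import numpy as np

def blow_up_time(a0: float, c: float, beta: float) -> float:
    """Closed-form singularity time for dA/dt = c * A**beta with beta > 1.
    The solution A(t) = [A0**(1-beta) - c*(beta-1)*t]**(1/(1-beta)) diverges
    at t* = A0**(1-beta) / (c * (beta - 1))."""
    return a0 ** (1.0 - beta) / (c * (beta - 1.0))

def simulate(a0=1.0, c=0.5, beta=2.0, sigma=0.1, dt=1e-3, a_max=1e6, seed=0):
    """Euler-Maruyama simulation of dA = c*A**beta dt + sigma*A dW.
    The multiplicative noise term sigma*A dW is one simple way to add a
    Brownian component; the paper's exact stochastic specification may differ.
    The run stops once A exceeds a_max, a numerical proxy for divergence."""
    rng = np.random.default_rng(seed)
    t, a = 0.0, a0
    path = [(t, a)]
    while a < a_max:
        dw = rng.normal(0.0, np.sqrt(dt))
        a += c * a ** beta * dt + sigma * a * dw
        t += dt
        path.append((t, a))
    return path

if __name__ == "__main__":
    a0, c, beta = 1.0, 0.5, 2.0
    print(f"deterministic blow-up time t* = {blow_up_time(a0, c, beta):.3f}")
    path = simulate(a0=a0, c=c, beta=beta)
    print(f"stochastic path exceeded the cutoff at t = {path[-1][0]:.3f}")
```
With the default parameters the deterministic blow-up time is t* = 2; varying sigma in this sketch shows how the Brownian term perturbs the numerical blow-up time relative to that deterministic value.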
Related papers
- Personalized Artificial General Intelligence (AGI) via Neuroscience-Inspired Continuous Learning Systems [3.764721243654025]
Current approaches largely depend on expanding model parameters, which improves task-specific performance but falls short in enabling continuous, adaptable, and generalized learning.
This paper reviews the state of continual learning and neuroscience-inspired AI, and proposes a novel architecture for Personalized AGI that integrates brain-like learning mechanisms for edge deployment.
Building on these insights, we outline an AI architecture that features complementary fast-and-slow learning modules, synaptic self-optimization, and memory-efficient model updates to support on-device lifelong adaptation.
arXiv Detail & Related papers (2025-04-27T16:10:17Z)
- Will the Technological Singularity Come Soon? Modeling the Dynamics of Artificial Intelligence Development via Multi-Logistic Growth Process [14.189936229835222]
The development of AI technologies could be characterized by the superposition of multiple logistic growth processes.
The fastest point of the current AI wave occurs around 2024.
Deep learning-based AI technologies are projected to decline around 2035-2040 if no fundamental technological innovation emerges (a rough sketch of the multi-logistic idea follows this entry).
arXiv Detail & Related papers (2025-02-11T03:11:42Z)
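As a rough illustration of that modeling idea (and not the cited paper's actual fit), the snippet below sums a few logistic curves with invented capacities, rates, and midpoints to produce a stepwise cumulative growth trajectory.
```python
# Illustrative "superposition of multiple logistic growth processes".
# The number of waves and their parameters are hypothetical placeholders.
import numpy as np

def multi_logistic(t, waves):
    """Sum of logistic curves: each wave is (capacity K, rate r, midpoint t0),
    contributing K / (1 + exp(-r * (t - t0)))."""
    t = np.asarray(t, dtype=float)
    total = np.zeros_like(t)
    for K, r, t0 in waves:
        total += K / (1.0 + np.exp(-r * (t - t0)))
    return total

if __name__ == "__main__":
    # Three hypothetical technology waves with placeholder values.
    waves = [(1.0, 0.15, 1985.0), (2.0, 0.20, 2005.0), (5.0, 0.30, 2024.0)]
    years = np.arange(1960, 2061, 10)
    for y, v in zip(years, multi_logistic(years, waves)):
        print(f"{y}: cumulative development level ~ {v:.2f}")
```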
- Can transformative AI shape a new age for our civilization?: Navigating between speculation and reality [8.255197802529118]
Artificial Intelligence is widely regarded as a transformative force with the potential to redefine numerous sectors of human civilization.
This work explores the historical precedents of technological breakthroughs, examining whether Artificial Intelligence can achieve a comparable impact.
We end with a critical inquiry into whether reaching a transformative Artificial Intelligence might compel humanity to adopt an entirely new ethical approach.
arXiv Detail & Related papers (2024-12-11T10:44:47Z)
- Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI [6.8894258727040665]
We explore the interplay between human and machine intelligence, focusing on the crucial role humans play in developing ethical, responsible, and robust intelligent systems.
We propose future perspectives, capitalizing on the advantages of symbiotic designs to suggest a human-centered direction for next-generation AI development.
arXiv Detail & Related papers (2024-09-24T12:02:20Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that instantiates the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large sets of high-quality data for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- Co-evolutionary hybrid intelligence [0.3007949058551534]
The current approach to the development of intelligent systems is data-centric.
The article discusses an alternative approach to the development of artificial intelligence systems based on human-machine hybridization and their co-evolution.
arXiv Detail & Related papers (2021-12-09T08:14:56Z)
- Artificial Intelligence Technology analysis using Artificial Intelligence patent through Deep Learning model and vector space model [0.1933681537640272]
We propose a method for keyword analysis within factors using artificial intelligence patent data sets for artificial intelligence technology analysis.
A case study of collecting and analyzing artificial intelligence patent data was conducted to show how the proposed model can be applied to real world problems.
arXiv Detail & Related papers (2021-11-08T00:10:49Z)
- Dynamic Cognition Applied to Value Learning in Artificial Intelligence [0.0]
Several researchers in the area are trying to develop a robust, beneficial, and safe concept of artificial intelligence.
It is of utmost importance that artificial intelligent agents have their values aligned with human values.
A possible approach to this problem would be to use theoretical models such as SED.
arXiv Detail & Related papers (2020-05-12T03:58:52Z)