The Nature of Intelligence
- URL: http://arxiv.org/abs/2307.11114v3
- Date: Mon, 19 Feb 2024 05:46:44 GMT
- Title: The Nature of Intelligence
- Authors: Barco Jie You
- Abstract summary: The essence of intelligence commonly represented by both humans and AI is unknown.
We show that the nature of intelligence is a series of mathematically functional processes that minimize system entropy.
This essay should be a starting point for a deeper understanding of the universe and us as human beings.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The human brain is the substrate for human intelligence. By simulating the
human brain, artificial intelligence builds computational models that have
learning capabilities and perform intelligent tasks approaching the human
level. Deep neural networks consist of multiple computation layers to learn
representations of data and improve the state-of-the-art in many recognition
domains. However, the essence of intelligence commonly represented by both
humans and AI is unknown. Here, we show that the nature of intelligence is a
series of mathematically functional processes that minimize system entropy by
establishing functional relationships between datasets over space and time.
Humans and AI have achieved intelligence by implementing these entropy-reducing
processes in a reinforced manner that consumes energy. With this hypothesis, we
establish mathematical models of language, unconsciousness and consciousness,
predicting the evidence to be found by neuroscience and achieved by AI
engineering. Furthermore, a conclusion is made that the total entropy of the
universe is conserved, and that intelligence counters spontaneous processes
to decrease entropy by physically or informationally connecting datasets
that originally exist in the universe but are separated across space and
time. This essay should be a starting point for a deeper understanding of
the universe and of ourselves as human beings, and for achieving
sophisticated AI models that match or even surpass human intelligence.
Furthermore, this essay argues that intelligence more advanced than human
intelligence should exist, provided it reduces entropy in a more
energy-efficient way.
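The entropy claim in the abstract is stated verbally rather than formally. As a minimal, hedged sketch of one way to read it, using standard Shannon quantities that do not appear in the paper itself (X and Y for two datasets, H for entropy, I for mutual information):

```latex
% A hedged formalization, not taken from the paper: model two separated
% datasets as random variables X and Y. Treated independently, their total
% uncertainty is H(X) + H(Y); once a functional relationship connects them,
% the joint uncertainty is H(X,Y) = H(X) + H(Y|X). The drop is exactly the
% mutual information, and for a deterministic mapping Y = f(X) it equals H(Y).
\begin{align}
  H(X,Y)   &= H(X) + H(Y \mid X) \\
  \Delta H &= \bigl[H(X) + H(Y)\bigr] - H(X,Y) = I(X;Y) \;\ge\; 0 \\
  Y = f(X) &\;\Rightarrow\; H(Y \mid X) = 0, \quad \Delta H = H(Y)
\end{align}
```

On this reading, connecting two datasets removes I(X;Y) bits of uncertainty, and Landauer's bound of k_B T ln 2 of dissipated energy per erased bit is at least consistent with the abstract's statement that these entropy-reducing processes consume energy; whether this matches the paper's own mathematical models is an assumption on our part.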
Related papers
- Bio-inspired AI: Integrating Biological Complexity into Artificial Intelligence [0.0]
The pursuit of creating artificial intelligence mirrors our longstanding fascination with understanding our own intelligence.
Recent advances in AI hold promise, but singular approaches often fall short in capturing the essence of intelligence.
This paper explores how fundamental principles from biological computation can guide the design of truly intelligent systems.
arXiv Detail & Related papers (2024-11-22T02:55:39Z)
- AI-as-exploration: Navigating intelligence space [0.05657375260432172]
I articulate the contours of a rather neglected but central scientific role that AI has to play.
The basic thrust of AI-as-exploration is that of creating and studying systems that can reveal candidate building blocks of intelligence.
arXiv Detail & Related papers (2024-01-15T21:06:20Z)
- On a Functional Definition of Intelligence [0.0]
Without an agreed-upon definition of intelligence, asking "is this system intelligent?" is an untestable question.
Most work on precisely capturing what we mean by "intelligence" has come from the fields of philosophy, psychology, and cognitive science.
We present an argument for a purely functional, black-box definition of intelligence, distinct from how that intelligence is actually achieved.
arXiv Detail & Related papers (2023-12-15T05:46:49Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Genes in Intelligent Agents [45.93363823594323]
Animals are born with some intelligence encoded in their genes, but machines lack such intelligence and learn from scratch.
Inspired by the genes of animals, we define the "genes" of machines, named "learngenes", and propose Genetic Reinforcement Learning (GRL).
GRL is a computational framework that simulates the evolution of organisms in reinforcement learning (RL) and leverages the learngenes to learn and evolve intelligent agents.
arXiv Detail & Related papers (2023-06-17T01:24:11Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- Making AI 'Smart': Bridging AI and Cognitive Science [0.0]
With the integration of cognitive science, the 'artificial' characteristic of Artificial Intelligence might soon be replaced with 'smart'.
This will help develop more powerful AI systems and simultaneously give us a better understanding of how the human brain works.
We argue that the possibility of AI taking over human civilization is low as developing such an advanced system requires a better understanding of the human brain first.
arXiv Detail & Related papers (2021-12-31T09:30:44Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may have an impact on a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
- Is Intelligence Artificial? [0.0]
This paper attempts to give a unifying definition that can be applied to the natural world in general and then to Artificial Intelligence.
A metric grounded in Kolmogorov Complexity Theory is suggested, which leads to a measure of entropy (an illustrative compression-based sketch follows this list).
A version of an accepted AI test is then put forward as the 'acid test' and might be what a free-thinking program would try to achieve.
arXiv Detail & Related papers (2014-03-05T11:09:55Z)
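The last entry above mentions a metric grounded in Kolmogorov complexity that leads to a measure of entropy, but the paper's actual metric is not reproduced in this summary. As a purely illustrative sketch, a common computable stand-in is compressed size, from which a normalized compression distance between two datasets can be formed; the function names and the use of zlib here are our own choices, not drawn from the paper:

```python
import zlib


def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed representation: a crude, computable
    upper-bound proxy for Kolmogorov complexity (the true quantity is
    uncomputable)."""
    return len(zlib.compress(data, 9))


def normalized_compression_distance(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance (Cilibrasi & Vitanyi): small values
    mean the two byte strings share structure, i.e. one helps 'explain' the
    other. This is an illustrative stand-in, not the metric from the paper."""
    cx, cy, cxy = compressed_size(x), compressed_size(y), compressed_size(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


if __name__ == "__main__":
    a = b"the quick brown fox jumps over the lazy dog" * 20
    b_ = b"the quick brown fox jumps over the lazy cat" * 20
    c = bytes(range(256)) * 4
    print(normalized_compression_distance(a, b_))  # low: highly related strings
    print(normalized_compression_distance(a, c))   # higher: unrelated strings
```

Since compression length only upper-bounds the uncomputable Kolmogorov complexity, the distance is best read as a heuristic similarity score rather than an exact information measure.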
This list is automatically generated from the titles and abstracts of the papers on this site.