Less is More: some Computational Principles based on Parcimony, and Limitations of Natural Intelligence
- URL: http://arxiv.org/abs/2506.07060v1
- Date: Sun, 08 Jun 2025 09:42:29 GMT
- Title: Less is More: some Computational Principles based on Parcimony, and Limitations of Natural Intelligence
- Authors: Laura Cohen, Xavier Hinaut, Lilyana Petrova, Alexandre Pitti, Syd Reynal, Ichiro Tsuda
- Abstract summary: Natural intelligence consistently achieves more with less. Today's AI relies on virtually unlimited computational power, energy, and data to reach high performance. This paper argues that constraints in NI are paradoxically catalysts for efficiency, adaptability, and creativity.
- Score: 39.89179121430488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Natural intelligence (NI) consistently achieves more with less. Infants learn language, develop abstract concepts, and acquire sensorimotor skills from sparse data, all within tight neural and energy limits. In contrast, today's AI relies on virtually unlimited computational power, energy, and data to reach high performance. This paper argues that constraints in NI are paradoxically catalysts for efficiency, adaptability, and creativity. We first show how limited neural bandwidth promotes concise codes that still capture complex patterns. Spiking neurons, hierarchical structures, and symbolic-like representations emerge naturally from bandwidth constraints, enabling robust generalization. Next, we discuss chaotic itinerancy, illustrating how the brain transits among transient attractors to flexibly retrieve memories and manage uncertainty. We then highlight reservoir computing, where random projections facilitate rapid generalization from small datasets. Drawing on developmental perspectives, we emphasize how intrinsic motivation, along with responsive social environments, drives infant language learning and discovery of meaning. Such active, embodied processes are largely absent in current AI. Finally, we suggest that adopting 'less is more' principles -- energy constraints, parsimonious architectures, and real-world interaction -- can foster the emergence of more efficient, interpretable, and biologically grounded artificial systems.
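To make the reservoir-computing point concrete, here is a minimal echo state network in NumPy: a fixed random reservoir driven by the input, with only a ridge-regression readout trained, learning one-step prediction from a short sequence. All sizes and hyperparameters (reservoir size, spectral radius, ridge penalty, the toy sine task) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir; only the linear readout is trained.
# All sizes and hyperparameters here are illustrative assumptions.
N_RES, SPECTRAL_RADIUS, RIDGE = 300, 0.9, 1e-6

W_in = rng.uniform(-0.5, 0.5, (N_RES, 1))                    # random input projection
W = rng.normal(0.0, 1.0, (N_RES, N_RES))
W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))  # echo-state scaling

def run_reservoir(inputs):
    """Drive the reservoir with a scalar sequence and collect its states."""
    x = np.zeros(N_RES)
    states = np.empty((len(inputs), N_RES))
    for i, u in enumerate(inputs):
        x = np.tanh(W @ x + W_in[:, 0] * u)   # leak-free update, for brevity
        states[i] = x
    return states

# Toy small-data task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 8 * np.pi, 400)
u, y = np.sin(t[:-1]), np.sin(t[1:])

X = run_reservoir(u)
W_out = np.linalg.solve(X.T @ X + RIDGE * np.eye(N_RES), X.T @ y)  # ridge readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Because the random projections stay fixed, training reduces to a single closed-form regression, which is why such systems can generalize rapidly from small datasets.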
Related papers
- Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [58.58177409853298]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of the Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning where representations are compressed in order to represent more abstract concepts in deeper layers of the network. We introduce Artificial Kuramoto Oscillatory Neurons together with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms. We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty quantification, and reasoning (a generic Kuramoto update is sketched after this list).
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z) - Neuromorphic Computing and Sensing in Space [69.34740063574921]
Neuromorphic computer chips are designed to mimic the architecture of a biological brain.
The emphasis on low power and energy efficiency of neuromorphic devices is a perfect match for space applications.
arXiv Detail & Related papers (2022-12-10T07:46:29Z) - Sequence learning in a spiking neuronal network with memristive synapses [0.0]
A core concept that lies at the heart of brain computation is sequence learning and prediction.
Neuromorphic hardware emulates the way the brain processes information and maps neurons and synapses directly into a physical substrate.
We study the feasibility of using ReRAM devices as a replacement for the biological synapses in the sequence learning model (a toy memristive-synapse model appears after this list).
arXiv Detail & Related papers (2022-11-29T21:07:23Z) - Towards the Neuroevolution of Low-level Artificial General Intelligence [5.2611228017034435]
We argue that the search for Artificial General Intelligence (AGI) should start from a much lower level than human-level intelligence.
Our hypothesis is that learning occurs through sensory feedback when an agent acts in an environment.
We evaluate a method to evolve a biologically-inspired artificial neural network that learns from environment reactions.
arXiv Detail & Related papers (2022-07-27T15:30:50Z) - Learning Continuous Chaotic Attractors with a Reservoir Computer [0.0]
We train a 1000-neuron recurrent network as a reservoir computer (RC) to abstract a continuous dynamical attractor memory from isolated examples of dynamical attractor memories.
By training the RC on isolated and shifted examples of either stable limit cycles or chaotic Lorenz attractors, the RC learns a continuum of attractors, as quantified by an extra Lyapunov exponent equal to zero (a Lorenz-trajectory generator is sketched after this list).
arXiv Detail & Related papers (2021-10-16T18:07:27Z) - Adaptive Rational Activations to Boost Deep Reinforcement Learning [68.10769262901003]
We motivate why rational functions are suitable as adaptable activation functions and why their inclusion in neural networks is crucial.
We demonstrate that equipping popular algorithms with (recurrent-)rational activations leads to consistent improvements on Atari games (a minimal rational activation is sketched after this list).
arXiv Detail & Related papers (2021-02-18T14:53:12Z) - Optimality of short-term synaptic plasticity in modelling certain dynamic environments [0.5371337604556311]
Bayes-optimal prediction and inference of randomly but continuously transforming environments relies on short-term spike-timing-dependent plasticity.
Strikingly, this also introduces a biologically modelled AI, the first to overcome multiple limitations of deep learning and outperform artificial neural networks in a visual task.
Results link short-term plasticity to high-level cortical function, suggest the optimality of natural intelligence for natural environments, and elevate neuromorphic AI from mere efficiency to computational supremacy (a textbook pair-based STDP rule is sketched after this list).
arXiv Detail & Related papers (2020-09-15T01:04:28Z) - Learning as Reinforcement: Applying Principles of Neuroscience for More General Reinforcement Learning Agents [1.0742675209112622]
We implement an architecture founded on principles of experimental neuroscience by combining computationally efficient abstractions of biological algorithms.
Our approach is inspired by research on spike-timing-dependent plasticity, the transition between short- and long-term memory, and the role of various neurotransmitters in rewarding curiosity (a toy curiosity-reward loop is sketched after this list).
The Neurons-in-a-Box architecture can learn in a wholly generalizable manner, and demonstrates an efficient way to build and apply representations without explicitly optimizing over a set of criteria or actions.
arXiv Detail & Related papers (2020-04-20T04:06:21Z)
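For the Artificial Kuramoto Oscillatory Neurons entry above, here is a minimal sketch of generic Kuramoto phase dynamics, the synchronization ("binding") mechanism that line of work builds on. This is the textbook model, not the AKOrN layer itself; the population size, coupling strength, and integration step are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic Kuramoto dynamics: dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
# Illustrates phase synchronization only; not the AKOrN layer from the paper.
N, K, DT, STEPS = 64, 1.5, 0.01, 2000
omega = rng.normal(0.0, 1.0, N)           # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)      # initial phases

def order_parameter(theta):
    """|r| near 1 means full synchrony, near 0 means incoherence."""
    return abs(np.mean(np.exp(1j * theta)))

for _ in range(STEPS):
    # pairwise term [i, j] = sin(theta_j - theta_i); mean over j gives the coupling
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += DT * (omega + K * coupling)

print("order parameter after settling:", round(order_parameter(theta), 3))
```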
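For the memristive-synapse entry above, a toy ReRAM-like conductance model: a bounded conductance is nudged up by potentiation pulses and down by depression pulses, then read out directly as a synaptic weight. This is a generic device abstraction, not the paper's calibrated ReRAM model; the bounds and learning rate are assumptions.

```python
# Toy memristive (ReRAM-like) synapse. Conductance G is bounded and the
# update shrinks near the bounds, mimicking saturating real devices.
G_MIN, G_MAX = 0.1, 1.0

def apply_pulse(G, potentiate, lr=0.05):
    """One programming pulse; lr and bounds are illustrative values."""
    if potentiate:
        return G + lr * (G_MAX - G)
    return G - lr * (G - G_MIN)

G = 0.5
for _ in range(10):                 # ten potentiation pulses
    G = apply_pulse(G, potentiate=True)
print("conductance after potentiation:", round(G, 3))

# The conductance serves as the synaptic weight: a presynaptic spike
# contributes current I = G * V_spike to the postsynaptic neuron.
V_SPIKE = 1.0
print("post-synaptic current:", round(G * V_SPIKE, 3))
```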
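For the reservoir-computer attractor entry above, a sketch that generates the chaotic Lorenz trajectories such an RC would be trained on, plus a shifted copy standing in for one of the "isolated examples". The Lorenz parameters are the classic chaotic values; the Euler integrator and the shift amount are simplifying assumptions.

```python
import numpy as np

# Lorenz system: dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y,
# dz/dt = x*y - beta*z, with the classic chaotic parameter values.
SIGMA, RHO, BETA, DT = 10.0, 28.0, 8.0 / 3.0, 0.01

def lorenz_trajectory(x0, steps=5000):
    x, y, z = x0
    out = np.empty((steps, 3))
    for i in range(steps):
        dx = SIGMA * (y - x)
        dy = x * (RHO - z) - y
        dz = x * y - BETA * z
        x, y, z = x + DT * dx, y + DT * dy, z + DT * dz  # Euler step, for brevity
        out[i] = (x, y, z)
    return out

# Two shifted copies of the attractor as isolated training examples.
base = lorenz_trajectory((1.0, 1.0, 1.0))
shifted = base + np.array([20.0, 0.0, 0.0])
print("example shapes:", base.shape, shifted.shape)
```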
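For the rational-activations entry above, a minimal rational activation R(x) = P(x)/Q(x) in the spirit of Padé-style units, with the denominator kept positive by construction so it can never divide by zero. The polynomial degrees and initial coefficients are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

# Rational activation R(x) = P(x) / Q(x) with learnable coefficients.
# Denominator uses 1 + sum |q_k| * |x|^k, a common trick to stay positive.
p = np.array([0.0, 1.0, 0.0, 0.1])  # numerator coeffs, low to high degree
q = np.array([0.2, 0.1])            # denominator coeffs for |x|, |x|^2

def rational(x, p=p, q=q):
    num = sum(c * x ** k for k, c in enumerate(p))
    den = 1.0 + sum(abs(c) * np.abs(x) ** (k + 1) for k, c in enumerate(q))
    return num / den

x = np.linspace(-3, 3, 7)
print(np.round(rational(x), 3))
```

In a network, p and q would be trained by backpropagation alongside the weights, which is what makes the activation shape itself adaptable.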
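For the short-term-plasticity entry above, the textbook pair-based STDP rule: the synapse is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise, with exponentially decaying timing windows. The constants are illustrative, and the paper's specific short-term variant differs from this generic rule.

```python
import numpy as np

# Pair-based spike-timing-dependent plasticity. Amplitudes and time
# constants are illustrative textbook values (times in ms).
A_PLUS, A_MINUS = 0.01, 0.012
TAU_PLUS, TAU_MINUS = 20.0, 20.0

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:                               # pre before post: potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)  # post before pre: depress

w = 0.5
for t_pre, t_post in [(0, 5), (50, 48), (100, 102)]:
    w += stdp_dw(t_pre, t_post)
print("weight after three spike pairs:", round(w, 4))
```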
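For the Neurons-in-a-Box entry above, a toy curiosity signal: intrinsic reward proportional to the prediction error of an online forward model, so that surprising states are sought out. This is a common formulation used here only for illustration; the linear model, learning rate, and toy dynamics are assumptions, not the paper's mechanism.

```python
import numpy as np

rng = np.random.default_rng(2)

# Curiosity as intrinsic reward: reward = squared error of the agent's
# own forward model, which is improved online as experience accumulates.
W_model = rng.normal(0.0, 0.1, (4, 4))   # linear forward model: s_next ~ W @ s
LR = 0.05

def intrinsic_reward(s, s_next):
    global W_model
    err = s_next - W_model @ s
    W_model += LR * np.outer(err, s)     # one online learning step
    return float(np.sum(err ** 2))       # prediction error as curiosity bonus

s = rng.normal(size=4)
for step in range(3):
    s_next = np.tanh(s + rng.normal(scale=0.1, size=4))  # toy environment dynamics
    print(f"step {step}: curiosity reward = {intrinsic_reward(s, s_next):.4f}")
    s = s_next
```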
This list is automatically generated from the titles and abstracts of the papers on this site.