The Unreasonable Effectiveness of Deep Learning in Artificial
Intelligence
- URL: http://arxiv.org/abs/2002.04806v1
- Date: Wed, 12 Feb 2020 05:25:15 GMT
- Title: The Unreasonable Effectiveness of Deep Learning in Artificial
Intelligence
- Authors: Terrence J. Sejnowski
- Abstract summary: Deep learning networks have been trained to recognize speech, caption photographs and translate text between languages at high levels of performance.
Deep learning was inspired by the architecture of the cortex, and insights into autonomy and general intelligence may be found in other brain regions that are essential for planning and survival.
- Score: 1.5229257192293197
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning networks have been trained to recognize speech, caption
photographs and translate text between languages at high levels of performance.
Although applications of deep learning networks to real world problems have
become ubiquitous, our understanding of why they are so effective is lacking.
These empirical results should not be possible according to sample complexity
in statistics and non-convex optimization theory. However, paradoxes in the
training and effectiveness of deep learning networks are being investigated and
insights are being found in the geometry of high-dimensional spaces. A
mathematical theory of deep learning would illuminate how they function, allow
us to assess the strengths and weaknesses of different network architectures
and lead to major improvements. Deep learning has provided natural ways for
humans to communicate with digital devices and is foundational for building
artificial general intelligence. Deep learning was inspired by the architecture
of the cerebral cortex and insights into autonomy and general intelligence may
be found in other brain regions that are essential for planning and survival,
but major breakthroughs will be needed to achieve these goals.
Related papers
- Improving deep learning with prior knowledge and cognitive models: A
survey on enhancing explainability, adversarial robustness and zero-shot
learning [0.0]
We review current and emerging knowledge-informed and brain-inspired cognitive systems for realizing adversarial defenses.
Brain-inspired cognition methods use computational models that mimic the human mind to enhance intelligent behavior in artificial agents and autonomous robots.
arXiv Detail & Related papers (2024-03-11T18:11:00Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
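The core loop of predictive coding can be sketched in a few lines (a minimal, illustrative single-layer version, not the paper's model): a latent code is refined by gradient descent on the prediction error between the generated prediction and the observation.

```python
import numpy as np

def pc_infer(x, W, steps=200, lr=0.1):
    """Infer a latent code mu by iteratively reducing the
    prediction error e = x - W @ mu (gradient descent on the
    squared error, the basic predictive-coding update)."""
    mu = np.zeros(W.shape[1])
    for _ in range(steps):
        e = x - W @ mu       # prediction error
        mu += lr * W.T @ e   # error-driven update of the latent
    return mu

# Toy generative weights and observation (illustrative values).
W = np.array([[1.0, 0.0], [0.0, 2.0]])
x = np.array([3.0, 4.0])
mu = pc_infer(x, W)
print(np.round(mu, 3))  # ≈ [3. 2.]
```

The update converges to the latent state whose prediction `W @ mu` matches the observation; hierarchical predictive coding stacks such layers, with each level predicting the one below.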
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z)
- Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges [50.22269760171131]
The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods.
This text is concerned with exposing pre-defined regularities through unified geometric principles.
It provides a common mathematical framework to study the most successful neural network architectures, such as CNNs, RNNs, GNNs, and Transformers.
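One geometric principle unifying these architectures is equivariance to a symmetry group; for CNNs the group is translations. A small numerical check (circular 1-D convolution, illustrative signals) shows that convolution commutes with shifts:

```python
import numpy as np

def circ_conv(x, k):
    """Circular 1-D convolution computed via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(k, len(x))))

x = np.array([1.0, 2.0, 3.0, 4.0])
k = np.array([1.0, -1.0])

# Translation equivariance: shifting then convolving equals
# convolving then shifting.
a = circ_conv(np.roll(x, 1), k)
b = np.roll(circ_conv(x, k), 1)
print(np.allclose(a, b))  # True
```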
arXiv Detail & Related papers (2021-04-27T21:09:51Z)
- Discussion of Ensemble Learning under the Era of Deep Learning [4.061135251278187]
Ensemble deep learning has shown significant performance in improving the generalization of learning systems.
However, the time and space overheads of training multiple base deep learners and testing with the ensemble are far greater than those of traditional ensemble learning.
An urgent open problem is how to retain the advantages of ensemble deep learning while reducing the required time and space overheads.
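The overhead tradeoff is visible even in the simplest combination rule (a minimal sketch with hypothetical stand-in models; probability averaging is one common choice, not necessarily the paper's):

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the class-probability outputs of several base learners.
    Inference cost grows linearly with the number of models, which is
    the time/space overhead discussed above."""
    probs = np.stack([model(x) for model in models])
    return probs.mean(axis=0)

# Toy stand-ins for trained deep networks (each real base learner
# would be a full forward pass through a deep model).
models = [
    lambda x: np.array([0.7, 0.3]),
    lambda x: np.array([0.6, 0.4]),
    lambda x: np.array([0.2, 0.8]),
]
avg = ensemble_predict(models, x=None)
print(avg)  # [0.5 0.5]
```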
arXiv Detail & Related papers (2021-01-21T01:33:23Z)
- Deep Learning and the Global Workspace Theory [0.0]
Recent advances in deep learning have allowed Artificial Intelligence to reach near human-level performance in many sensory, perceptual, linguistic or cognitive tasks.
There is a growing need, however, for novel, brain-inspired cognitive architectures.
The Global Workspace theory refers to a large-scale system integrating and distributing information among networks of specialized modules to create higher-level forms of cognition and awareness.
arXiv Detail & Related papers (2020-12-04T11:36:01Z)
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
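The geometric intuition, that adversarial examples exploit a short distance to the decision boundary, can be seen already on a linear classifier (a minimal FGSM-style sketch with toy weights, not the paper's method):

```python
import numpy as np

def fgsm_attack(x, w, b, eps):
    """FGSM-style perturbation of a linear classifier s(x) = w @ x + b:
    step each coordinate by eps in the sign direction that most
    decreases the score (attacking the positive class)."""
    return x - eps * np.sign(w)

w, b = np.array([2.0, -1.0]), 0.0
x = np.array([1.0, 1.0])
print(w @ x + b)                    # 1.0  -> classified positive
x_adv = fgsm_attack(x, w, b, eps=0.6)
print(w @ x_adv + b)                # -0.8 -> flipped by a small L-inf step
```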
arXiv Detail & Related papers (2020-10-19T16:03:46Z)
- D2RL: Deep Dense Architectures in Reinforcement Learning [47.67475810050311]
We take inspiration from successful architectural choices in computer vision and generative modelling.
We investigate the use of deeper networks and dense connections for reinforcement learning on a variety of simulated robotic learning benchmark environments.
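The dense-connection idea can be sketched without any RL machinery (one reading of the architectural choice; layer sizes and initialization here are illustrative): after the first layer, each hidden layer receives the previous features concatenated with the raw state.

```python
import numpy as np

rng = np.random.default_rng(0)

def d2rl_forward(state, layers):
    """D2RL-style forward pass: every hidden layer after the first
    takes the previous features concatenated with the raw state,
    giving deeper networks a direct path back to the observation."""
    h = state
    for i, (W, b) in enumerate(layers):
        inp = h if i == 0 else np.concatenate([h, state])
        h = np.maximum(0.0, inp @ W + b)  # ReLU
    return h

state_dim, hidden = 4, 16
layers = [
    (rng.normal(size=(state_dim, hidden)) * 0.1, np.zeros(hidden)),
    (rng.normal(size=(hidden + state_dim, hidden)) * 0.1, np.zeros(hidden)),
]
out = d2rl_forward(rng.normal(size=state_dim), layers)
print(out.shape)  # (16,)
```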
arXiv Detail & Related papers (2020-10-19T01:27:07Z)
- Memristors -- from In-memory computing, Deep Learning Acceleration, Spiking Neural Networks, to the Future of Neuromorphic and Bio-inspired Computing [25.16076541420544]
Machine learning, particularly in the form of deep learning, has driven most of the recent fundamental developments in artificial intelligence.
Deep learning has been successfully applied in areas such as object/pattern recognition, speech and natural language processing, self-driving vehicles, intelligent self-diagnostics tools, autonomous robots, knowledgeable personal assistants, and monitoring.
This paper reviews the case for a novel beyond CMOS hardware technology, memristors, as a potential solution for the implementation of power-efficient in-memory computing, deep learning accelerators, and spiking neural networks.
arXiv Detail & Related papers (2020-04-30T16:49:03Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.