Limitations of Deep Neural Networks: a discussion of G. Marcus' critical
appraisal of deep learning
- URL: http://arxiv.org/abs/2012.15754v1
- Date: Tue, 22 Dec 2020 12:11:19 GMT
- Title: Limitations of Deep Neural Networks: a discussion of G. Marcus' critical
appraisal of deep learning
- Authors: Stefanos Tsimenidis
- Abstract summary: Deep neural networks have been applied with great results in medical imaging, semi-autonomous vehicles, ecommerce, genetics research, speech recognition, particle physics, experimental art, economic forecasting, environmental science, industrial manufacturing, and a wide variety of applications in nearly every field.
This study examines some of the limitations of deep neural networks, with the intention of pointing towards potential paths for future research, and of clearing up some metaphysical misconceptions, held by numerous researchers, that may misdirect them.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks have triggered a revolution in artificial intelligence,
having been applied with great results in medical imaging, semi-autonomous
vehicles, ecommerce, genetics research, speech recognition, particle physics,
experimental art, economic forecasting, environmental science, industrial
manufacturing, and a wide variety of applications in nearly every field. This
sudden success, though, may have intoxicated the research community and blinded
it to the potential pitfalls of assigning deep learning a higher status than
warranted. Also, research directed at alleviating the weaknesses of deep
learning may seem less attractive to scientists and engineers, who focus on the
low-hanging fruit of finding more and more applications for deep learning
models, thus letting short-term benefits hamper long-term scientific progress.
Gary Marcus wrote a paper entitled Deep Learning: A Critical Appraisal, and
here we discuss Marcus' core ideas, as well as attempt a general assessment of
the subject. This study examines some of the limitations of deep neural
networks, with the intention of pointing towards potential paths for future
research, and of clearing up some metaphysical misconceptions, held by numerous
researchers, that may misdirect them.
Related papers
- Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have influenced many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z) - Philosophy of Cognitive Science in the Age of Deep Learning [0.0]
Deep learning has enabled major advances across most areas of artificial intelligence research.
This perspective paper surveys key areas where contributions from the philosophy of cognitive science can be especially fruitful.
arXiv Detail & Related papers (2024-05-07T06:39:47Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise
Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Deep Reinforcement Learning and its Neuroscientific Implications [19.478332877763417]
The emergence of powerful artificial intelligence is defining new research directions in neuroscience.
Deep reinforcement learning (Deep RL) offers a framework for studying the interplay among learning, representation and decision-making.
Deep RL offers a new set of research tools and a wide range of novel hypotheses.
arXiv Detail & Related papers (2020-07-07T19:27:54Z) - Off-the-shelf deep learning is not enough: parsimony, Bayes and
causality [0.8602553195689513]
We discuss opportunities and roadblocks to implementation of deep learning within materials science.
We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known.
arXiv Detail & Related papers (2020-05-04T15:16:30Z) - Adversarial Examples and the Deeper Riddle of Induction: The Need for a
Theory of Artifacts in Deep Learning [0.0]
I argue that adversarial examples will become a flashpoint of debate in philosophy and diverse sciences.
arXiv Detail & Related papers (2020-03-20T16:24:25Z) - On Interpretability of Artificial Neural Networks: A Survey [21.905647127437685]
We systematically review recent studies on understanding the mechanisms of neural networks and describe applications of interpretability, especially in medicine.
We discuss future directions of interpretability research, such as in relation to fuzzy logic and brain science.
arXiv Detail & Related papers (2020-01-08T13:40:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.