Adversarial Examples and the Deeper Riddle of Induction: The Need for a
Theory of Artifacts in Deep Learning
- URL: http://arxiv.org/abs/2003.11917v1
- Date: Fri, 20 Mar 2020 16:24:25 GMT
- Title: Adversarial Examples and the Deeper Riddle of Induction: The Need for a
Theory of Artifacts in Deep Learning
- Authors: Cameron Buckner
- Abstract summary: I argue that adversarial examples will become a flashpoint of debate in philosophy and diverse sciences.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning is currently the most widespread and successful technology in
artificial intelligence. It promises to push the frontier of scientific
discovery beyond current limits. However, skeptics have worried that deep
neural networks are black boxes, and have called into question whether these
advances can really be deemed scientific progress if humans cannot understand
them. Relatedly, these systems also possess bewildering new vulnerabilities:
most notably a susceptibility to "adversarial examples". In this paper, I argue
that adversarial examples will become a flashpoint of debate in philosophy and
diverse sciences. Specifically, new findings concerning adversarial examples
have challenged the consensus view that the networks' verdicts on these cases
are caused by overfitting idiosyncratic noise in the training set, and may
instead be the result of detecting predictively useful "intrinsic features of
the data geometry" that humans cannot perceive (Ilyas et al., 2019). These
results should cause us to re-examine responses to one of the deepest puzzles
at the intersection of philosophy and science: Nelson Goodman's "new riddle" of
induction. Specifically, they raise the possibility that progress in a number
of sciences will depend upon the detection and manipulation of useful features
that humans find inscrutable. Before we can evaluate this possibility, however,
we must decide which (if any) of these inscrutable features are real but
available only to "alien" perception and cognition, and which are distinctive
artifacts of deep learning, for artifacts like lens flares or Gibbs phenomena
can be similarly useful for prediction, but are usually seen as obstacles to
scientific theorizing. Thus, machine learning researchers urgently need to
develop a theory of artifacts for deep neural networks, and I conclude by
sketching some initial directions for this area of research.
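
The abstract does not say how adversarial examples are actually constructed. As background only, the sketch below shows one standard construction, the fast gradient sign method (FGSM) of Goodfellow et al. (2015); the model, inputs, and perturbation budget `epsilon` are hypothetical placeholders, and this is not a method taken from the paper itself.

```python
# Illustrative sketch: fast gradient sign method (FGSM).
# Assumes a differentiable classifier `model`, an input batch `x` with pixel
# values in [0, 1], and integer class labels `label`.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, x, label, epsilon=0.03):
    """Perturb x within an L-infinity ball of radius epsilon so as to
    increase the classification loss for the true label."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, one sign per pixel.
    x_adv = x + epsilon * x.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()
```

Because epsilon is kept small, the perturbation is typically imperceptible to humans, which is what makes the possibility that such perturbations track real, predictively useful features rather than noise philosophically significant.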
Related papers
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Deep Learning Opacity in Scientific Discovery [0.15229257192293197]
I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a failure to examine how AI is actually used in science.
I show that, in order to understand the justification for AI-powered breakthroughs, philosophers must examine the role played by deep learning as part of a wider process of discovery.
arXiv Detail & Related papers (2022-06-01T14:30:49Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Limitations of Deep Neural Networks: a discussion of G. Marcus' critical appraisal of deep learning [0.0]
Deep neural networks have been applied with great results in medical imaging, semi-autonomous vehicles, ecommerce, genetics research, speech recognition, particle physics, experimental art, economic forecasting, environmental science, industrial manufacturing, and a wide variety of applications in nearly every field.
This study examines some of the limitations of deep neural networks, with the intention of pointing towards potential paths for future research, and of clearing up some metaphysical misconceptions, held by numerous researchers, that may misdirect them.
arXiv Detail & Related papers (2020-12-22T12:11:19Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Scientific intuition inspired by machine learning generated hypotheses [2.294014185517203]
We shift the focus to the insights and the knowledge obtained by the machine learning models themselves.
We apply gradient boosting with decision trees to extract human-interpretable insights from big data sets from chemistry and physics.
The ability to go beyond numerics opens the door to use machine learning to accelerate the discovery of conceptual understanding.
arXiv Detail & Related papers (2020-10-27T12:12:12Z)
- Adversarial Examples on Object Recognition: A Comprehensive Survey [1.976652238476722]
Deep neural networks are at the forefront of machine learning research.
Adversarial examples are intentionally designed to test a network's sensitivity to distribution drifts.
We discuss the impact of adversarial examples on security, safety, and robustness of neural networks.
arXiv Detail & Related papers (2020-08-07T08:51:21Z)
- Off-the-shelf deep learning is not enough: parsimony, Bayes and causality [0.8602553195689513]
We discuss opportunities and roadblocks to implementation of deep learning within materials science.
We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known.
arXiv Detail & Related papers (2020-05-04T15:16:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.