Deep Learning Opacity in Scientific Discovery
- URL: http://arxiv.org/abs/2206.00520v2
- Date: Fri, 26 Aug 2022 12:49:29 GMT
- Title: Deep Learning Opacity in Scientific Discovery
- Authors: Eamon Duede
- Abstract summary: I argue that the disconnect between philosophical pessimism and scientific optimism is driven by a failure to examine how AI is actually used in science.
I show that, in order to understand the justification for AI-powered breakthroughs, philosophers must examine the role played by deep learning as part of a wider process of discovery.
- Score: 0.15229257192293197
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Philosophers have recently focused on critical, epistemological challenges
that arise from the opacity of deep neural networks. One might conclude from
this literature that doing good science with opaque models is exceptionally
challenging, if not impossible. Yet, this is hard to square with the recent
boom in optimism for AI in science alongside a flood of recent scientific
breakthroughs driven by AI methods. In this paper, I argue that the disconnect
between philosophical pessimism and scientific optimism is driven by a failure
to examine how AI is actually used in science. I show that, in order to
understand the epistemic justification for AI-powered breakthroughs,
philosophers must examine the role played by deep learning as part of a wider
process of discovery. The philosophical distinction between the 'context of
discovery' and the 'context of justification' is helpful in this regard. I
demonstrate the importance of attending to this distinction with two cases
drawn from the scientific literature, and show that epistemic opacity need not
diminish AI's capacity to lead scientists to significant and justifiable
breakthroughs.
Related papers
- Problems in AI, their roots in philosophy, and implications for science and society [0.0]
More attention should be paid to the philosophical aspects of AI technology and its use.
It is argued that this deficit is generally combined with philosophical misconceptions about the growth of knowledge.
arXiv Detail & Related papers (2024-07-22T14:38:54Z)
- Explain the Black Box for the Sake of Science: the Scientific Method in the Era of Generative Artificial Intelligence [0.9065034043031668]
The scientific method is the cornerstone of human progress across all branches of the natural and applied sciences.
We argue that human complex reasoning for scientific discovery remains of vital importance, at least before the advent of artificial general intelligence.
Knowing what data AI systems deemed important to make decisions can be a point of contact with domain experts and scientists.
arXiv Detail & Related papers (2024-06-15T08:34:42Z)
- Philosophy of Cognitive Science in the Age of Deep Learning [0.0]
Deep learning has enabled major advances across most areas of artificial intelligence research.
This perspective paper surveys key areas where philosophers' contributions can be especially fruitful.
arXiv Detail & Related papers (2024-05-07T06:39:47Z)
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems [268.585904751315]
A new area of research, known as AI for science (AI4Science), has emerged.
Its subareas aim at understanding the physical world from subatomic (wavefunctions and electron density), through atomic (molecules, proteins, materials, and interactions), to macro (fluids, climate, and subsurface) scales.
A key common challenge is how to capture physics first principles, especially symmetries, in natural systems using deep learning methods.
arXiv Detail & Related papers (2023-07-17T12:14:14Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Learning from learning machines: a new generation of AI technology to meet the needs of science [59.261050918992325]
We outline emerging opportunities and challenges to enhance the utility of AI for scientific discovery.
The distinct goals of AI for industry versus the goals of AI for science create tension between identifying patterns in data versus discovering patterns in the world from data.
arXiv Detail & Related papers (2021-11-27T00:55:21Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Off-the-shelf deep learning is not enough: parsimony, Bayes and causality [0.8602553195689513]
We discuss opportunities and roadblocks to implementation of deep learning within materials science.
We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known.
arXiv Detail & Related papers (2020-05-04T15:16:30Z)
- Adversarial Examples and the Deeper Riddle of Induction: The Need for a Theory of Artifacts in Deep Learning [0.0]
I argue that adversarial examples will become a flashpoint of debate in philosophy and diverse sciences.
arXiv Detail & Related papers (2020-03-20T16:24:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.