When we can trust computers (and when we can't)
- URL: http://arxiv.org/abs/2007.03741v1
- Date: Wed, 8 Jul 2020 08:55:53 GMT
- Title: When we can trust computers (and when we can't)
- Authors: Peter V. Coveney and Roger R. Highfield
- Abstract summary: In domains of science and engineering that are relatively simple and firmly grounded in theory, computational methods are indeed powerful.
The rise of big data and machine learning poses new challenges to reproducibility while lacking true explanatory power.
In the long-term, renewed emphasis on analogue methods will be necessary to temper the excessive faith currently placed in digital computation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the relentless rise of computer power, there is a widespread expectation that computers can solve the most pressing problems of science, and even more besides. We explore the limits of computational modelling and conclude that, in the domains of science and engineering that are relatively simple and firmly grounded in theory, these methods are indeed powerful. Even so, the availability of code, data and documentation, along with a range of techniques for validation, verification and uncertainty quantification, are essential for building trust in computer generated findings. When it comes to complex systems in domains of science that are less firmly grounded in theory, notably biology and medicine, to say nothing of the social sciences and humanities, computers can create the illusion of objectivity, not least because the rise of big data and machine learning pose new challenges to reproducibility, while lacking true explanatory power. We also discuss important aspects of the natural world which cannot be solved by digital means. In the long-term, renewed emphasis on analogue methods will be necessary to temper the excessive faith currently placed in digital computation.
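The abstract singles out validation, verification and uncertainty quantification as prerequisites for trusting computer generated findings. As a minimal sketch of the uncertainty-quantification part only, the Python example below propagates assumed input uncertainties through a toy model (an ideal projectile-range formula) by Monte Carlo sampling; the model, the input distributions and the sample count are illustrative assumptions, not taken from the paper.

```python
# Minimal Monte Carlo uncertainty-quantification sketch (illustrative only).
# The model and the input distributions are assumptions chosen for the example.
import math
import random
import statistics

G = 9.81  # gravitational acceleration, m/s^2

def projectile_range(speed: float, angle_deg: float) -> float:
    """Ideal (drag-free) range of a projectile launched over flat ground."""
    return speed ** 2 * math.sin(2.0 * math.radians(angle_deg)) / G

def monte_carlo_uq(n_samples: int = 10_000, seed: int = 0) -> tuple[float, float]:
    """Propagate assumed input uncertainties through the model by Monte Carlo sampling."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        speed = rng.gauss(30.0, 0.5)   # launch speed in m/s (assumed mean and spread)
        angle = rng.gauss(45.0, 1.0)   # launch angle in degrees (assumed mean and spread)
        outputs.append(projectile_range(speed, angle))
    return statistics.mean(outputs), statistics.stdev(outputs)

if __name__ == "__main__":
    mean, spread = monte_carlo_uq()
    print(f"predicted range: {mean:.2f} m +/- {spread:.2f} m (one standard deviation)")
```

A single deterministic run of such a model yields one number; the ensemble yields a mean with an error bar, which is the kind of quantified uncertainty the abstract argues is needed before a computed result can be trusted.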
Related papers
- Artificial intelligence for science: The easy and hard problems [1.8722948221596285]
We study the cognitive science of scientists to understand how humans solve the hard problem.
We use the results to design new computational agents that automatically infer and update their scientific paradigms.
arXiv Detail & Related papers (2024-08-24T18:22:06Z)
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Reliable AI: Does the Next Generation Require Quantum Computing? [71.84486326350338]
We show that digital hardware is inherently constrained in solving certain problems in optimization, deep learning, and differential equations.
In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
arXiv Detail & Related papers (2023-07-03T19:10:45Z)
- A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z)
- When not to use machine learning: a perspective on potential and limitations [0.0]
We highlight the guiding principles of data-driven modeling and how these principles imbue models with almost magical predictive power.
We hope that the discussion to follow provides researchers throughout the sciences with a better understanding of when said techniques are appropriate.
arXiv Detail & Related papers (2022-10-06T04:00:00Z)
- A Computational Inflection for Scientific Discovery [48.176406062568674]
We stand at the foot of a significant inflection in the trajectory of scientific discovery.
As society continues on its fast-paced digital transformation, so does humankind's collective scientific knowledge.
Computer science is poised to ignite a revolution in the scientific process itself.
arXiv Detail & Related papers (2022-05-04T11:36:54Z)
- Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems [120.297940190903]
Recent progress in AI has resulted from the use of limited forms of neurocompositional computing.
New, deeper forms of neurocompositional computing create AI systems that are more robust, accurate, and comprehensible.
arXiv Detail & Related papers (2022-05-02T18:00:10Z)
- Off-the-shelf deep learning is not enough: parsimony, Bayes and causality [0.8602553195689513]
We discuss opportunities and roadblocks to implementation of deep learning within materials science.
We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known.
arXiv Detail & Related papers (2020-05-04T15:16:30Z)