Artificial intelligence for science: The easy and hard problems
- URL: http://arxiv.org/abs/2408.14508v1
- Date: Sat, 24 Aug 2024 18:22:06 GMT
- Title: Artificial intelligence for science: The easy and hard problems
- Authors: Ruairidh M. Battleday, Samuel J. Gershman
- Abstract summary: We study the cognitive science of scientists to understand how humans solve the hard problem.
We use the results to design new computational agents that automatically infer and update their scientific paradigms.
- Score: 1.8722948221596285
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A suite of impressive scientific discoveries have been driven by recent advances in artificial intelligence. These almost all result from training flexible algorithms to solve difficult optimization problems specified in advance by teams of domain scientists and engineers with access to large amounts of data. Although extremely useful, this kind of problem solving only corresponds to one part of science - the "easy problem." The other part of scientific research is coming up with the problem itself - the "hard problem." Solving the hard problem is beyond the capacities of current algorithms for scientific discovery because it requires continual conceptual revision based on poorly defined constraints. We can make progress on understanding how humans solve the hard problem by studying the cognitive science of scientists, and then use the results to design new computational agents that automatically infer and update their scientific paradigms.
Related papers
- SciCode: A Research Coding Benchmark Curated by Scientists [37.900374175754465]
Since language models (LMs) now outperform average humans on many challenging tasks, it has become increasingly difficult to develop challenging, high-quality, and realistic evaluations.
We created a scientist-curated coding benchmark, SciCode, which includes problems in mathematics, physics, chemistry, biology, and materials science.
Claude3.5-Sonnet, the best-performing model among those tested, can solve only 4.6% of the problems in the most realistic setting.
arXiv Detail & Related papers (2024-07-18T05:15:24Z)
- Scalable Artificial Intelligence for Science: Perspectives, Methods and Exemplars [0.15705429611931054]
We propose that scaling up artificial intelligence on high-performance computing platforms is essential to address complex problems.
This perspective focuses on scientific use cases like cognitive simulations, large language models for scientific inquiry, medical image analysis, and physics-informed approaches.
arXiv Detail & Related papers (2024-06-24T20:29:29Z)
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
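One widely studied bio-plausible credit-assignment rule is Hebbian learning ("cells that fire together wire together"). A minimal sketch, assuming a simple linear layer; all names and values here are illustrative and not taken from the paper:

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.01):
    """Hebbian rule: strengthen each weight in proportion to the
    correlation of its pre- and post-synaptic activity
    (dW = lr * outer(post, pre)); no global error signal is needed."""
    return W + lr * np.outer(post, pre)

rng = np.random.default_rng(0)
W = np.zeros((3, 4))        # 4 input neurons -> 3 output neurons
pre = rng.random(4)         # pre-synaptic activity
post = W @ pre + 0.5        # post-synaptic activity (constant drive)
W = hebbian_update(W, pre, post)
```

Unlike backpropagation, the update is purely local to each synapse, which is part of why such rules are considered energy-efficient candidates for neuromorphic hardware.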
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems [268.585904751315]
AI for science (AI4Science) is a new area of research.
Its subareas aim at understanding the physical world from subatomic (wavefunctions and electron density), atomic (molecules, proteins, materials, and interactions), to macroscopic (fluids, climate, and subsurface) scales.
A key common challenge is how to capture physics first principles, especially symmetries, in natural systems with deep learning methods.
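One standard way to capture such a symmetry by construction is a permutation-invariant readout: applying the same per-atom map and then sum-pooling, so reordering the atoms cannot change the output. A minimal sketch, illustrative rather than the paper's actual architecture:

```python
import numpy as np

def invariant_energy(atom_features, w):
    """Permutation-invariant readout: a shared per-atom transform
    followed by a symmetric aggregation (sum), so the prediction
    is identical for any ordering of the atoms."""
    per_atom = np.tanh(atom_features @ w)   # same map applied to every atom
    return per_atom.sum()                   # symmetric pooling

rng = np.random.default_rng(0)
atoms = rng.random((5, 3))                  # 5 atoms, 3 features each
w = rng.random(3)
e1 = invariant_energy(atoms, w)
e2 = invariant_energy(atoms[::-1], w)       # same atoms, reversed order
assert np.isclose(e1, e2)                   # invariance holds exactly
```

Building the symmetry into the architecture, rather than hoping the model learns it from data, is the core design choice behind many equivariant models in this area.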
arXiv Detail & Related papers (2023-07-17T12:14:14Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Reliable AI: Does the Next Generation Require Quantum Computing? [71.84486326350338]
We show that digital hardware is inherently constrained in solving problems in optimization, deep learning, and differential equations.
In contrast, analog computing models, such as the Blum-Shub-Smale machine, exhibit the potential to surmount these limitations.
arXiv Detail & Related papers (2023-07-03T19:10:45Z)
- Mathematics, word problems, common sense, and artificial intelligence [0.0]
We discuss the capacities and limitations of current artificial intelligence (AI) technology to solve word problems that combine elementary knowledge with commonsense reasoning.
We review three approaches that have been developed, using AI natural language technology.
We argue that it is not clear whether these kinds of limitations will be important in developing AI technology for pure mathematical research.
arXiv Detail & Related papers (2023-01-23T21:21:39Z)
- Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks [47.54459795966417]
We show that recurrent networks trained to solve simple problems can indeed solve much more complex problems simply by performing additional recurrences during inference.
In all three domains, networks trained on simple problem instances are able to extend their reasoning abilities at test time simply by "thinking for longer".
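The "thinking for longer" idea amounts to applying the same recurrent update more times at inference than during training. A toy sketch with a contractive recurrent cell (illustrative, not the authors' networks):

```python
import numpy as np

def recur(x, h, W, U, n_steps):
    """Apply one shared recurrent cell n_steps times. Extra test-time
    steps reuse the same trained weights, mimicking 'thinking longer'."""
    for _ in range(n_steps):
        h = np.tanh(W @ h + U @ x)
    return h

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 8))   # small norm keeps the map contractive
U = rng.standard_normal((8, 4))
x = rng.random(4)                       # fixed problem input
h0 = np.zeros(8)

h_train = recur(x, h0, W, U, n_steps=5)    # depth used "during training"
h_test = recur(x, h0, W, U, n_steps=50)    # more recurrences at inference
```

Because the same weights are reused at every step, running more steps costs no extra parameters; with a contractive cell the extra iterations refine the state toward a fixed point rather than diverging.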
arXiv Detail & Related papers (2021-06-08T17:19:48Z)
- Qualities, challenges and future of genetic algorithms: a literature review [0.0]
Genetic algorithms are computer programs that simulate natural evolution.
They have been used to solve various optimisation problems from neural network architecture search to strategic games.
Recent developments such as GPU, parallel and quantum computing, conception of powerful parameter control methods, and novel approaches in representation strategies may be keys to overcome their limitations.
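The evolutionary loop a genetic algorithm simulates (selection, crossover, mutation) can be sketched in a few lines. A minimal illustrative version solving OneMax (maximize the number of 1-bits); all parameters and the test problem are chosen for the sketch, not from the review:

```python
import random

def genetic_algorithm(fitness, n_bits=10, pop_size=20, generations=50,
                      mut_rate=0.05, seed=0):
    """Minimal GA: tournament selection, one-point crossover, and
    bit-flip mutation over a population of fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)          # fitter of two random picks
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < mut_rate) for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# OneMax: fitness is the number of 1-bits; the optimum is all ones.
best = genetic_algorithm(fitness=sum)
```

The parameter-control methods the review mentions would replace the fixed `mut_rate` and selection scheme here with ones adapted during the run.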
arXiv Detail & Related papers (2020-11-05T17:53:33Z)
- When we can trust computers (and when we can't) [0.0]
In the domains of science and engineering that are relatively simple and firmly grounded in theory, these methods are indeed powerful.
The rise of big data and machine learning poses new challenges to computation, as these methods lack true explanatory power.
In the long-term, renewed emphasis on analogue methods will be necessary to temper the excessive faith currently placed in digital computation.
arXiv Detail & Related papers (2020-07-08T08:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.