Accelerating science with human versus alien artificial intelligences
- URL: http://arxiv.org/abs/2104.05188v1
- Date: Mon, 12 Apr 2021 03:50:30 GMT
- Title: Accelerating science with human versus alien artificial intelligences
- Authors: Jamshid Sourati, James Evans
- Abstract summary: We show that incorporating the distribution of human expertise into self-supervised models dramatically improves AI prediction of future human discoveries and inventions.
These models succeed by predicting human predictions and the scientists who will make them.
By tuning AI to avoid the crowd, however, it generates scientifically promising "alien" hypotheses unlikely to be imagined or pursued without intervention.
- Score: 3.6354412526174196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven artificial intelligence models fed with published scientific
findings have been used to create powerful prediction engines for scientific
and technological advance, such as the discovery of novel materials with
desired properties and the targeted invention of new therapies and vaccines.
These AI approaches typically ignore the distribution of human prediction
engines -- scientists and inventors -- who continuously alter the landscape of
discovery and invention. As a result, AI hypotheses are designed to substitute
for human experts, failing to complement them for punctuated collective
advance. Here we show that incorporating the distribution of human expertise
into self-supervised models by training on inferences cognitively available to
experts dramatically improves AI prediction of future human discoveries and
inventions. Including expert-awareness into models that propose (a) valuable
energy-relevant materials increases the precision of materials predictions by
~100%, (b) repurposing thousands of drugs to treat new diseases increases
precision by 43%, and (c) COVID-19 vaccine candidates examined in clinical
trials by 260%. These models succeed by predicting human predictions and the
scientists who will make them. By tuning AI to avoid the crowd, however, it
generates scientifically promising "alien" hypotheses unlikely to be imagined
or pursued without intervention, not only accelerating but punctuating
scientific advance. By identifying and correcting for collective human bias,
these models also suggest opportunities to improve human prediction by
reformulating science education for discovery.
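The scoring idea described in the abstract, rewarding hypotheses that human experts are cognitively positioned to reach versus penalizing crowded regions to surface "alien" candidates, can be illustrated in toy form. This is not the authors' code: the papers, materials, properties, and scoring functions below are invented for illustration, with author overlap standing in as a crude proxy for the "cognitive availability" of a hypothesis to the expert community.

```python
# Toy sketch (not the paper's actual model): contrast human-aware and
# crowd-avoiding ("alien") hypothesis scoring. All data here is invented.

# Hypothetical literature: each paper links a material, a property, and authors.
papers = [
    {"material": "LiFePO4", "property": "cathode", "authors": {"a1", "a2"}},
    {"material": "LiFePO4", "property": "stability", "authors": {"a2"}},
    {"material": "NaCoO2", "property": "cathode", "authors": {"a3"}},
    {"material": "CuSbS2", "property": "photovoltaic", "authors": {"a4"}},
]

def expert_density(material: str, prop: str) -> int:
    """Count scientists whose work touches both the material and the
    property: a stand-in for how cognitively available the hypothesis
    'material has property' is to human experts."""
    mat_authors, prop_authors = set(), set()
    for p in papers:
        if p["material"] == material:
            mat_authors |= p["authors"]
        if p["property"] == prop:
            prop_authors |= p["authors"]
    return len(mat_authors & prop_authors)

def human_aware_score(content_score: float, material: str, prop: str) -> float:
    # Boost hypotheses the crowd of experts is likely to reach and pursue.
    return content_score * (1 + expert_density(material, prop))

def alien_score(content_score: float, material: str, prop: str) -> float:
    # Penalize crowded regions to surface hypotheses unlikely to be
    # imagined without intervention.
    return content_score / (1 + expert_density(material, prop))
```

Under the human-aware score, a hypothesis many experts can reach (e.g. LiFePO4 as a cathode here) ranks highest; under the alien score, the same content plausibility ranks a sparsely attended hypothesis (CuSbS2 as a cathode) above it.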
Related papers
- Large language models surpass human experts in predicting neuroscience results [60.26891446026707]
Large language models (LLMs) forecast novel results better than human experts.
BrainBench is a benchmark for predicting neuroscience results.
Our approach is not neuroscience-specific and is transferable to other knowledge-intensive endeavors.
arXiv Detail & Related papers (2024-03-04T15:27:59Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
Recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- MedDiffusion: Boosting Health Risk Prediction via Diffusion-based Data Augmentation [58.93221876843639]
This paper introduces a novel, end-to-end diffusion-based risk prediction model, named MedDiffusion.
It enhances risk prediction performance by creating synthetic patient data during training to enlarge sample space.
It discerns hidden relationships between patient visits using a step-wise attention mechanism, enabling the model to automatically retain the most vital information for generating high-quality data.
arXiv Detail & Related papers (2023-10-04T01:36:30Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Accelerating science with human-aware artificial intelligence [2.7786142348700658]
We show that incorporating the distribution of human expertise by training unsupervised models dramatically improves AI prediction of future discoveries (by up to 400%).
These models succeed by predicting human predictions and the scientists who will make them.
Accelerating human discovery or probing its blind spots, human-aware AI enables us to move toward and beyond the contemporary scientific frontier.
arXiv Detail & Related papers (2023-06-02T12:43:23Z)
- BO-Muse: A human expert and AI teaming framework for accelerated experimental design [58.61002520273518]
Our algorithm lets the human expert take the lead in the experimental process.
We show that our algorithm converges sub-linearly, at a rate faster than the AI or human alone.
arXiv Detail & Related papers (2023-03-03T02:56:05Z)
- Self-mediated exploration in artificial intelligence inspired by cognitive psychology [1.3351610617039975]
Exploration of the physical environment is an indispensable precursor to data acquisition and enables knowledge generation via analytical or direct trialing.
This work links human behavior and artificial agents to endorse self-development.
A study is subsequently designed to mirror previous human trials, which artificial agents are made to undergo repeatedly towards convergence.
Results demonstrate that the vast majority of agents learn a causal link between their internal states and exploration, matching the behavior reported for human counterparts.
arXiv Detail & Related papers (2023-02-13T18:20:44Z)
- Complementary artificial intelligence designed to augment human discovery [2.7786142348700658]
We reconceptualize and pilot beneficial AI to radically augment human understanding by complementing rather than competing with cognitive capacity.
We use this approach to generate predictions of which materials possess valuable energy-related properties.
We demonstrate that our predictions, if identified by human scientists and inventors at all, are only discovered years further into the future.
arXiv Detail & Related papers (2022-07-02T19:36:34Z)
- Learning from learning machines: a new generation of AI technology to meet the needs of science [59.261050918992325]
We outline emerging opportunities and challenges to enhance the utility of AI for scientific discovery.
The distinct goals of AI for industry versus the goals of AI for science create tension between identifying patterns in data versus discovering patterns in the world from data.
arXiv Detail & Related papers (2021-11-27T00:55:21Z)
- Enhancing Human-Machine Teaming for Medical Prognosis Through Neural Ordinary Differential Equations (NODEs) [0.0]
A key barrier to the full realization of Machine Learning's potential in medical prognoses is technology acceptance.
Recent efforts to produce explainable AI (XAI) have made progress in improving the interpretability of some ML models.
We propose a novel ML architecture to enhance human understanding and encourage acceptability.
arXiv Detail & Related papers (2021-02-08T10:52:23Z)
- Harnessing Explanations to Bridge AI and Humans [14.354362614416285]
Machine learning models are increasingly integrated into societally critical applications such as recidivism prediction and medical diagnosis.
We propose future directions for closing the gap between the efficacy of explanations and improvement in human performance.
arXiv Detail & Related papers (2020-03-16T18:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.