Complementary artificial intelligence designed to augment human discovery
- URL: http://arxiv.org/abs/2207.00902v1
- Date: Sat, 2 Jul 2022 19:36:34 GMT
- Title: Complementary artificial intelligence designed to augment human discovery
- Authors: Jamshid Sourati, James Evans
- Abstract summary: We reconceptualize and pilot beneficial AI to radically augment human understanding by complementing rather than competing with cognitive capacity.
We use this approach to generate predictions of which materials possess valuable energy-related properties.
We demonstrate that our predictions, if identified by human scientists and inventors at all, are only discovered years further into the future.
- Score: 2.7786142348700658
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neither artificial intelligence designed to play Turing's imitation game, nor
augmented intelligence built to maximize the human manipulation of information
is tuned to accelerate innovation and improve humanity's collective advance
against its greatest challenges. We reconceptualize and pilot beneficial AI to
radically augment human understanding by complementing rather than competing
with human cognitive capacity. Our approach to complementary intelligence
builds on insights underlying the wisdom of crowds, which hinges on the
independence and diversity of crowd members' information and approach. By
programmatically incorporating information on the evolving distribution of
scientific expertise from research papers, our approach follows the
distribution of content in the literature while avoiding the scientific crowd
and the hypotheses cognitively available to it. We use this approach to
generate predictions of which materials possess valuable
energy-related properties (e.g., thermoelectricity) and which compounds possess
valuable medical properties (e.g., efficacy against asthma) that complement the human scientific
crowd. We demonstrate that our complementary predictions, if identified by
human scientists and inventors at all, are only discovered years further into
the future. When we evaluate the promise of our predictions with
first-principles equations, we demonstrate that increased complementarity of
our predictions does not decrease and in some cases increases the probability
that the predictions possess the targeted properties. In summary, by tuning AI
to avoid the crowd, we can generate hypotheses that are unlikely to be imagined or
pursued until the distant future and that promise to punctuate scientific advance.
By identifying and correcting for collective human bias, these models also
suggest opportunities to improve human prediction by reformulating science
education for discovery.
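The abstract sketches a concrete recipe: model where scientific expertise is concentrated, then rank candidate hypotheses by apparent relevance while down-weighting those most cognitively available to the crowd. Below is a minimal illustrative sketch of that crowd-avoidance re-ranking; the embedding inputs, the `alpha` trade-off, and the `crowd_density` proxy (the share of a material's authors who also publish on the target property) are assumptions for illustration, not the authors' published pipeline.

```python
# Hypothetical sketch: rank candidate materials for a target property while
# penalizing hypotheses "cognitively available" to the scientific crowd.
# Assumed inputs (for illustration only):
#   material_vecs    - dict: material -> literature-derived embedding (np.ndarray)
#   property_vec     - embedding of the target property (e.g., thermoelectricity)
#   authorship       - dict: material -> set of author IDs who publish on it
#   property_authors - set of author IDs who publish on the target property
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def crowd_density(material, authorship, property_authors):
    """Fraction of a material's authors who also work on the target property.
    A high value means the hypothesis is already within the crowd's reach."""
    authors = authorship.get(material, set())
    if not authors:
        return 0.0
    return len(authors & property_authors) / len(authors)

def complementary_rank(material_vecs, property_vec, authorship,
                       property_authors, alpha=0.5):
    """Score each candidate as relevance minus alpha * crowd proximity,
    then return candidates sorted from most to least complementary-promising."""
    scores = {}
    for material, vec in material_vecs.items():
        relevance = cosine(vec, property_vec)
        proximity = crowd_density(material, authorship, property_authors)
        scores[material] = relevance - alpha * proximity
    return sorted(scores, key=scores.get, reverse=True)
```

Under this sketch's assumptions, sweeping `alpha` upward from zero traces the trade-off the abstract reports: increasingly complementary, crowd-avoiding predictions that human scientists would reach only years later, without necessarily reducing their first-principles plausibility.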
Related papers
- Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI [6.8894258727040665]
We explore the interplay between human and machine intelligence, focusing on the crucial role humans play in developing ethical, responsible, and robust intelligent systems.
We propose future perspectives, capitalizing on the advantages of symbiotic designs to suggest a human-centered direction for next-generation AI development.
arXiv Detail & Related papers (2024-09-24T12:02:20Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in neuro-mimetic terms.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Exploration with Principles for Diverse AI Supervision [88.61687950039662]
Training large transformers using next-token prediction has given rise to groundbreaking advancements in AI.
While this generative AI approach has produced impressive results, it heavily leans on human supervision.
This strong reliance on human oversight poses a significant hurdle to the advancement of AI innovation.
We propose a novel paradigm termed Exploratory AI (EAI) aimed at autonomously generating high-quality training data.
arXiv Detail & Related papers (2023-10-13T07:03:39Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Accelerating science with human-aware artificial intelligence [2.7786142348700658]
We show that incorporating the distribution of human expertise into unsupervised models dramatically improves (by up to 400%) AI prediction of future discoveries.
These models succeed by predicting human predictions and the scientists who will make them.
Accelerating human discovery or probing its blind spots, human-aware AI enables us to move toward and beyond the contemporary scientific frontier.
arXiv Detail & Related papers (2023-06-02T12:43:23Z)
- Learning from learning machines: a new generation of AI technology to meet the needs of science [59.261050918992325]
We outline emerging opportunities and challenges to enhance the utility of AI for scientific discovery.
The distinct goals of AI for industry versus the goals of AI for science create tension between identifying patterns in data versus discovering patterns in the world from data.
arXiv Detail & Related papers (2021-11-27T00:55:21Z)
- Accelerating science with human versus alien artificial intelligences [3.6354412526174196]
We show that incorporating the distribution of human expertise into self-supervised models dramatically improves AI prediction of future human discoveries and inventions.
These models succeed by predicting human predictions and the scientists who will make them.
By tuning AI to avoid the crowd, however, it generates scientifically promising "alien" hypotheses unlikely to be imagined or pursued without intervention.
arXiv Detail & Related papers (2021-04-12T03:50:30Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Drug discovery with explainable artificial intelligence [0.0]
There is a demand for 'explainable' deep learning methods to address the need for a new narrative of the machine language of the molecular sciences.
This review summarizes the most prominent algorithmic concepts of explainable artificial intelligence, and dares a forecast of the future opportunities, potential applications, and remaining challenges.
arXiv Detail & Related papers (2020-07-01T14:36:23Z)