Learning One Abstract Bit at a Time Through Self-Invented Experiments Encoded as Neural Networks
- URL: http://arxiv.org/abs/2212.14374v1
- Date: Thu, 29 Dec 2022 17:11:49 GMT
- Title: Learning One Abstract Bit at a Time Through Self-Invented Experiments Encoded as Neural Networks
- Authors: Vincent Herrmann, Louis Kirsch, Jürgen Schmidhuber
- Abstract summary: We present an empirical analysis of the automatic generation of interesting experiments.
In the first setting, we investigate self-invented experiments in a reinforcement-providing environment.
In the second setting, pure thought experiments are implemented as the weights of recurrent neural networks.
- Score: 8.594140167290098
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There are two important things in science: (A) Finding answers to given
questions, and (B) Coming up with good questions. Our artificial scientists not
only learn to answer given questions, but also continually invent new
questions, by proposing hypotheses to be verified or falsified through
potentially complex and time-consuming experiments, including thought
experiments akin to those of mathematicians. While an artificial scientist
expands its knowledge, it remains biased towards the simplest, least costly
experiments that still have surprising outcomes, until they become boring. We
present an empirical analysis of the automatic generation of interesting
experiments. In the first setting, we investigate self-invented experiments in
a reinforcement-providing environment and show that they lead to effective
exploration. In the second setting, pure thought experiments are implemented as
the weights of recurrent neural networks generated by a neural experiment
generator. Initially interesting thought experiments may become boring over
time.
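The abstract describes experiments encoded as the weights of recurrent neural networks, proposed by a neural experiment generator and considered interesting only while their outcomes still surprise the learner. The following minimal sketch (not the authors' implementation; the network sizes, the fixed RNN input, and the cost term are illustrative assumptions) shows such an intrinsic-reward loop in PyTorch: a generator maps a latent code to the weights of a tiny experiment RNN, the experiment yields one abstract bit, and a predictor's surprise about that bit, minus a cost penalty, serves as the intrinsic reward.
```python
# Minimal illustrative sketch (NOT the authors' implementation): an
# "experiment" is a tiny RNN whose weights come from a generator network.
# Its single-bit outcome stays interesting while a learned predictor is
# still surprised by it, and becomes boring once the predictor catches up.
# All sizes, the fixed RNN input, and the cost term are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

HIDDEN = 8
EXP_PARAMS = 2 * HIDDEN * HIDDEN + HIDDEN  # flat weight count of the experiment RNN

class ExperimentGenerator(nn.Module):
    """Maps a latent code to the flat weight vector of an experiment RNN."""
    def __init__(self, latent=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent, 64), nn.Tanh(),
                                 nn.Linear(64, EXP_PARAMS))

    def forward(self, z):
        return self.net(z)

def run_experiment(weights, steps=10):
    """Unroll the generated RNN from a fixed start state; the sign of one
    hidden unit plays the role of the experiment's yes/no outcome."""
    W_hh = weights[:HIDDEN * HIDDEN].view(HIDDEN, HIDDEN)
    W_xh = weights[HIDDEN * HIDDEN:2 * HIDDEN * HIDDEN].view(HIDDEN, HIDDEN)
    b = weights[2 * HIDDEN * HIDDEN:]
    h = torch.zeros(HIDDEN)
    x = torch.ones(HIDDEN)                 # fixed input, purely illustrative
    for _ in range(steps):
        h = torch.tanh(h @ W_hh + x @ W_xh + b)
    return (h[0] > 0).float()              # one abstract bit

generator = ExperimentGenerator()
predictor = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(predictor.parameters(), lr=1e-2)

for step in range(201):
    z = torch.randn(16)                            # proposal for an experiment
    outcome = run_experiment(generator(z).detach())
    pred = torch.sigmoid(predictor(z))             # predicted outcome bit
    surprise = F.binary_cross_entropy(pred, outcome.unsqueeze(0))
    cost = 0.01 * 10                               # e.g. proportional to experiment length
    intrinsic_reward = surprise.item() - cost      # high while surprising, then boring
    opt.zero_grad()
    surprise.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step:3d}  intrinsic reward {intrinsic_reward:+.3f}")
```
In the full setting, the generator itself would also be trained, for example by reinforcement learning, to propose experiments that maximise this intrinsic reward; in this sketch only the predictor learns, which is enough to show the reward shrinking as an initially interesting experiment becomes boring.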
Related papers
- LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery [141.39722070734737]
We propose to enhance the knowledge-driven, abstract reasoning abilities of Large Language Models with the computational strength of simulations.
We introduce Scientific Generative Agent (SGA), a bilevel optimization framework.
We conduct experiments to demonstrate our framework's efficacy in law discovery and molecular design.
arXiv Detail & Related papers (2024-05-16T03:04:10Z)
- Content and structure of laboratory packages for software engineering experiments [1.3584003182788122]
This paper investigates the experiment replication process to find out what information is needed to successfully replicate an experiment.
Our objective is to propose the content and structure of laboratory packages for software engineering experiments.
arXiv Detail & Related papers (2024-02-11T14:29:15Z)
- Large Language Models for Automated Open-domain Scientific Hypotheses Discovery [50.40483334131271]
This work proposes the first dataset for social science academic hypotheses discovery.
Unlike previous settings, the new dataset requires (1) using open-domain data (raw web corpus) as observations; and (2) proposing hypotheses even new to humanity.
A multi-module framework is developed for the task, including three different feedback mechanisms to boost performance.
arXiv Detail & Related papers (2023-09-06T05:19:41Z)
- GFlowNets for AI-Driven Scientific Discovery [74.27219800878304]
We present a new probabilistic machine learning framework called GFlowNets.
GFlowNets can be applied in the modeling, hypothesis generation, and experimental design stages of the experimental science loop.
We argue that GFlowNets can become a valuable tool for AI-driven scientific discovery.
arXiv Detail & Related papers (2023-02-01T17:29:43Z)
- PyExperimenter: Easily distribute experiments and track results [63.871474825689134]
PyExperimenter is a tool to facilitate the setup, documentation, execution, and subsequent evaluation of results from an empirical study of algorithms.
It is intended for researchers in the field of artificial intelligence, but its use is not limited to that field.
arXiv Detail & Related papers (2023-01-16T10:43:02Z)
- Observing Interventions: A logic for thinking about experiments [62.997667081978825]
This paper makes a first step towards a logic of learning from experiments.
Crucial for our approach is the idea that the notion of an intervention can be used as a formal expression of a (real or hypothetical) experiment.
For all the proposed logical systems, we provide a sound and complete axiomatization.
arXiv Detail & Related papers (2021-11-25T09:26:45Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
Clarifying these particular principles could help us build AI systems that benefit from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Scientific intuition inspired by machine learning generated hypotheses [2.294014185517203]
We shift the focus to the insights and the knowledge obtained by the machine learning models themselves.
We apply gradient boosting in decision trees to extract human interpretable insights from big data sets from chemistry and physics.
The ability to go beyond numerics opens the door to use machine learning to accelerate the discovery of conceptual understanding.
arXiv Detail & Related papers (2020-10-27T12:12:12Z)
- Autonomous Materials Discovery Driven by Gaussian Process Regression with Inhomogeneous Measurement Noise and Anisotropic Kernels [1.976226676686868]
A majority of experimental disciplines face the challenge of exploring large and high-dimensional parameter spaces in search of new scientific discoveries.
Recent advances have led to an increase in efficiency of materials discovery by increasingly automating the exploration processes.
Gaussian process regression (GPR) techniques have emerged as the method of choice for steering many classes of experiments (a minimal usage sketch follows this list).
arXiv Detail & Related papers (2020-06-03T19:18:47Z)
- Optimal Learning for Sequential Decisions in Laboratory Experimentation [0.0]
This tutorial is aimed to provide experimental scientists with a foundation in the science of making decisions.
We introduce the concept of a learning policy, and review the major categories of policies.
We then introduce a policy, known as the knowledge gradient, that maximizes the value of information from each experiment.
arXiv Detail & Related papers (2020-04-11T14:53:29Z)
- Computer-inspired Quantum Experiments [1.2891210250935146]
In many disciplines, computer-inspired design processes, also known as inverse-design, have augmented the capability of scientists.
We will meet vastly diverse computational approaches based on topological optimization, evolutionary strategies, deep learning, reinforcement learning or automated reasoning.
arXiv Detail & Related papers (2020-02-23T18:59:00Z)
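For the materials-discovery entry above, here is a minimal usage sketch of Gaussian process regression with an anisotropic kernel and per-measurement (inhomogeneous) noise, written with scikit-learn; the toy objective, length scales, and pick-the-most-uncertain-candidate acquisition rule are assumptions for illustration, not the referenced paper's setup.
```python
# Minimal usage sketch (illustrative assumptions, not the referenced paper's
# setup): Gaussian process regression with an anisotropic RBF kernel and
# per-measurement (inhomogeneous) noise, used to suggest the next experiment.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Toy 2-D parameter space whose two dimensions vary on different length
# scales, hence the anisotropic kernel.
X = rng.uniform(0.0, 1.0, size=(40, 2))
noise_std = rng.uniform(0.01, 0.2, size=40)        # per-point measurement noise
y = np.sin(6 * X[:, 0]) + 0.3 * X[:, 1] + noise_std * rng.standard_normal(40)

kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.5],
                                   length_scale_bounds=(1e-2, 1e1))
# Per-sample noise variances go on the kernel diagonal via `alpha`.
gpr = GaussianProcessRegressor(kernel=kernel, alpha=noise_std**2,
                               normalize_y=True)
gpr.fit(X, y)

# Steer the next measurement toward the most uncertain candidate point.
candidates = rng.uniform(0.0, 1.0, size=(500, 2))
_, std = gpr.predict(candidates, return_std=True)
print("suggested next measurement:", candidates[np.argmax(std)])
```
Passing per-sample noise variances through `alpha` places the inhomogeneous measurement noise on the kernel diagonal, while per-dimension length scales make the RBF kernel anisotropic.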
This list is automatically generated from the titles and abstracts of the papers on this site.