Predicting Real-time Scientific Experiments Using Transformer models and Reinforcement Learning
- URL: http://arxiv.org/abs/2204.11718v1
- Date: Mon, 25 Apr 2022 15:19:25 GMT
- Title: Predicting Real-time Scientific Experiments Using Transformer models and Reinforcement Learning
- Authors: Juan Manuel Parrilla-Gutierrez
- Abstract summary: We present an encoder-decoder architecture based on the Transformer model to simulate real-time scientific experimentation.
As a proof of concept, this architecture was trained to map a set of mechanical inputs to the oscillations generated by a chemical reaction.
Our results demonstrate how generative learning can model real-time scientific experimentation to track how it changes through time as the user manipulates it.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Life and physical sciences have always been quick to adopt the latest
advances in machine learning to accelerate scientific discovery. Examples
include cell segmentation and cancer detection. Nevertheless, these exceptional
results are based on mining previously created datasets to discover patterns or
trends. Recent advances in AI have been demonstrated in real-time scenarios
like self-driving cars or playing video games. However, these new techniques
have not seen widespread adoption in life or physical sciences because
experimentation can be slow. To tackle this limitation, this work aims to adapt
generative learning algorithms to model scientific experiments and accelerate
their discovery using in-silico simulations. We particularly focused on
real-time experiments, aiming to model how they react to user inputs. To
achieve this, here we present an encoder-decoder architecture based on the
Transformer model to simulate real-time scientific experimentation, predict its
future behaviour and manipulate it on a step-by-step basis. As a proof of
concept, this architecture was trained to map a set of mechanical inputs to the
oscillations generated by a chemical reaction. The model was paired with a
Reinforcement Learning controller to show how the simulated chemistry can be
manipulated in real-time towards user-defined behaviours. Our results
demonstrate how generative learning can model real-time scientific
experimentation to track how it changes through time as the user manipulates
it, and how the trained models can be paired with optimisation algorithms to
discover new phenomena beyond the physical limitations of lab experimentation.
This work paves the way towards building surrogate systems where physical
experimentation interacts with machine learning on a step-by-step basis.
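The closed loop described above (a learned surrogate predicts the experiment's next state from mechanical inputs, while a controller picks the next input to steer the oscillations toward a user-defined behaviour) can be sketched with a toy model. Everything below is a hypothetical stand-in: `surrogate_step` uses simple damped-input dynamics in place of the paper's encoder-decoder Transformer, and `controller_step` is a greedy one-step search in place of the trained Reinforcement Learning agent. It only illustrates the step-by-step interaction pattern, not the actual method.

```python
import math

def surrogate_step(input_history):
    # Toy stand-in for the Transformer surrogate: predict the next
    # oscillation value from the history of mechanical inputs, with
    # exponentially decaying influence of older inputs.
    weights = [0.5 ** k for k in range(len(input_history))]
    num = sum(w * u for w, u in zip(weights, reversed(input_history)))
    return math.tanh(num / sum(weights))

def controller_step(input_history, target, candidates=(-1.0, 0.0, 1.0)):
    # Greedy stand-in for the RL controller: choose the candidate input
    # whose predicted next state is closest to the target behaviour.
    return min(candidates,
               key=lambda u: abs(surrogate_step(input_history + [u]) - target))

def run_closed_loop(target, steps=20):
    # Step-by-step interaction: controller proposes an input, the
    # surrogate advances the simulated experiment by one step.
    history, trajectory = [0.0], []
    for _ in range(steps):
        u = controller_step(history, target)
        history.append(u)
        trajectory.append(surrogate_step(history))
    return trajectory

trajectory = run_closed_loop(target=0.5)
```

With this toy dynamics, the controller settles into alternating inputs that keep the predicted oscillation hovering around the requested target, mirroring the paper's idea of manipulating simulated chemistry toward user-defined behaviours.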
Related papers
- AutoSciLab: A Self-Driving Laboratory For Interpretable Scientific Discovery [1.1740681158785793]
AutoSciLab is a machine learning framework for driving autonomous scientific experiments.
It forms a surrogate researcher purposed for scientific discovery in high-dimensional spaces.
Applying our framework to an open-ended nanophotonics challenge, AutoSciLab uncovers a fundamentally novel method for directing incoherent light emission.
arXiv Detail & Related papers (2024-12-16T20:41:46Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators provide a principled framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of physics simulations with high resolution.
This high resolution will impact the training of machine learning models, since storing such large amounts of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently to a running simulation without data on a disk.
arXiv Detail & Related papers (2022-11-09T09:55:14Z)
- Simulation-Based Parallel Training [55.41644538483948]
We present our ongoing work to design a training framework that alleviates those bottlenecks.
It generates data in parallel with the training process.
We present a strategy to mitigate this bias with a memory buffer.
arXiv Detail & Related papers (2022-11-08T09:31:25Z)
- Towards Learned Simulators for Cell Migration [2.5331228143087565]
A neural simulator for cellular dynamics can augment lab experiments and traditional methods to enhance our understanding of a cell's interaction with its physical environment.
We propose an autoregressive probabilistic model that can reproduce dynamics of single cell migration.
We observe that standard single-step training methods not only lead to inconsistent stability, but also fail to accurately capture some aspects of the dynamics.
arXiv Detail & Related papers (2022-10-02T14:01:09Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics model and/or simulator and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Robot Learning from Randomized Simulations: A Review [59.992761565399185]
Deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
State-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive.
We focus on a technique named 'domain randomization' which is a method for learning from randomized simulations.
arXiv Detail & Related papers (2021-11-01T13:55:41Z)
- New Trends in Quantum Machine Learning [0.0]
We will explore the ways in which machine learning could benefit from new quantum technologies and algorithms.
Data visualization techniques and other schemes borrowed from machine learning can be of great use to theoreticians.
arXiv Detail & Related papers (2021-08-22T08:23:30Z)
- PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z)
- Scientific intuition inspired by machine learning generated hypotheses [2.294014185517203]
We shift the focus on the insights and the knowledge obtained by the machine learning models themselves.
We apply gradient boosting in decision trees to extract human interpretable insights from big data sets from chemistry and physics.
The ability to go beyond numerics opens the door to use machine learning to accelerate the discovery of conceptual understanding.
arXiv Detail & Related papers (2020-10-27T12:12:12Z)
- Building high accuracy emulators for scientific simulations with deep neural architecture search [0.0]
Building fast machine-learned emulators is a promising route to accelerating simulations, but it typically requires large training datasets.
Here we present a method based on neural architecture search to build accurate emulators even with a limited number of training data.
The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology.
arXiv Detail & Related papers (2020-01-17T22:14:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.