Computational model discovery with reinforcement learning
- URL: http://arxiv.org/abs/2001.00008v1
- Date: Sun, 29 Dec 2019 22:56:40 GMT
- Title: Computational model discovery with reinforcement learning
- Authors: Maxime Bassenne and Adrián Lozano-Durán
- Abstract summary: The motivation of this study is to leverage recent breakthroughs in artificial intelligence research to unlock novel solutions to scientific problems encountered in computational science.
To address the human intelligence limitations in discovering reduced-order models, we propose to supplement human thinking with artificial intelligence.
- Score: 3.005240085945858
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The motivation of this study is to leverage recent breakthroughs in
artificial intelligence research to unlock novel solutions to important
scientific problems encountered in computational science. To address the human
intelligence limitations in discovering reduced-order models, we propose to
supplement human thinking with artificial intelligence. Our three-pronged
strategy consists of learning (i) models expressed in analytical form, (ii)
which are evaluated a posteriori, and (iii) using exclusively integral
quantities from the reference solution as prior knowledge. In point (i), we
pursue interpretable models expressed symbolically as opposed to black-box
neural networks, the latter only being used during learning to efficiently
parameterize the large search space of possible models. In point (ii), learned
models are dynamically evaluated a posteriori in the computational solver
instead of based on a priori information from preprocessed high-fidelity data,
thereby accounting for the specificity of the solver at hand such as its
numerics. Finally in point (iii), the exploration of new models is solely
guided by predefined integral quantities, e.g., averaged quantities of
engineering interest in Reynolds-averaged or large-eddy simulations (LES). We
use a coupled deep reinforcement learning framework and computational solver to
concurrently achieve these objectives. The combination of reinforcement
learning with objectives (i), (ii) and (iii) differentiates our work from
previous modeling attempts based on machine learning. In this report, we
provide a high-level description of the model discovery framework with
reinforcement learning. The method is detailed for the application of
discovering missing terms in differential equations. An elementary
instantiation of the method is described that discovers missing terms in the
Burgers' equation.
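The discovery loop described above can be illustrated with a minimal sketch, not the authors' implementation: a periodic finite-difference Burgers solver is run a posteriori with each candidate closure term, and candidates are scored only against an integral quantity (here, time-averaged kinetic energy) from the reference solution, never against pointwise high-fidelity data. The candidate dictionary and reward below are hypothetical stand-ins for the neural-network-parameterized search space of the paper.

```python
import numpy as np

def solve_burgers(closure, nx=128, nt=300, dt=2e-3):
    """Periodic 1-D Burgers solver u_t + u u_x = closure(u); forward Euler,
    central differences. Returns the time-averaged kinetic energy, an
    integral quantity standing in for the 'quantity of engineering interest'."""
    x = np.linspace(0.0, 2.0 * np.pi, nx, endpoint=False)
    dx = x[1] - x[0]
    u = np.sin(x) + 0.5
    energies = []
    for _ in range(nt):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        u = u + dt * (-u * ux + closure(u, dx))
        energies.append(0.5 * np.mean(u ** 2))
    return float(np.mean(energies))

nu = 0.05
def uxx(u, dx):
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2

# Reference solution: the true viscous term nu * u_xx plays the role of the
# "missing" physics that the agent must rediscover.
ref_energy = solve_burgers(lambda u, dx: nu * uxx(u, dx))

# Hypothetical symbolic candidates the agent could propose.
candidates = {
    "zero":     lambda u, dx: 0.0 * u,
    "nu*u_x":   lambda u, dx: nu * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx),
    "nu*u_xx":  lambda u, dx: nu * uxx(u, dx),
    "nu*u*u_x": lambda u, dx: nu * u * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx),
}

def reward(closure):
    # A-posteriori reward: run the candidate through the solver and compare
    # only the integral quantity to the reference; penalize blow-ups.
    e = solve_burgers(closure)
    return -abs(e - ref_energy) if np.isfinite(e) else -np.inf

rewards = {name: reward(c) for name, c in candidates.items()}
best = max(rewards, key=rewards.get)
print(best)
```

Because each candidate is evaluated inside the solver itself, the score automatically reflects the solver's numerics, which is the point of objective (ii); the viscous term matches the reference integral quantity exactly and is selected.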
Related papers
- Discovering Physics-Informed Neural Networks Model for Solving Partial Differential Equations through Evolutionary Computation [5.8407437499182935]
This article proposes an evolutionary computation method aimed at discovering the PINNs model with higher approximation accuracy and faster convergence rate.
In experiments, the performance of different models that are searched through Bayesian optimization, random search and evolution is compared in solving Klein-Gordon, Burgers, and Lamé equations.
arXiv Detail & Related papers (2024-05-18T07:32:02Z) - GLUECons: A Generic Benchmark for Learning Under Constraints [102.78051169725455]
In this work, we create a benchmark that is a collection of nine tasks in the domains of natural language processing and computer vision.
We model external knowledge as constraints, specify the sources of the constraints for each task, and implement various models that use these constraints.
arXiv Detail & Related papers (2023-02-16T16:45:36Z) - On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z) - Stretched and measured neural predictions of complex network dynamics [2.1024950052120417]
Data-driven approximations of differential equations present a promising alternative to traditional methods for uncovering a model of dynamical systems.
A recently employed machine learning tool for studying dynamics is neural networks, which can be used for data-driven solution finding or discovery of differential equations.
We show that extending the model's generalizability beyond traditional statistical learning theory limits is feasible.
arXiv Detail & Related papers (2023-01-12T09:44:59Z) - Deep learning applied to computational mechanics: A comprehensive
review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Rethinking Bayesian Learning for Data Analysis: The Art of Prior and
Inference in Sparsity-Aware Modeling [20.296566563098057]
Sparse modeling for signal processing and machine learning has been at the focus of scientific research for over two decades.
This article reviews some recent advances in incorporating sparsity-promoting priors into three popular data modeling tools.
arXiv Detail & Related papers (2022-05-28T00:43:52Z) - Algebraic Learning: Towards Interpretable Information Modeling [0.0]
This thesis addresses the issue of interpretability in general information modeling and endeavors to ease the problem from two scopes.
Firstly, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where interesting mathematical properties emerge naturally.
Secondly, given a trained model, various methods could be applied to extract further insights about the underlying system.
arXiv Detail & Related papers (2022-03-13T15:53:39Z) - Mixed Effects Neural ODE: A Variational Approximation for Analyzing the
Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z) - Ex-Model: Continual Learning from a Stream of Trained Models [12.27992745065497]
We argue that continual learning systems should exploit the availability of compressed information in the form of trained models.
We introduce and formalize a new paradigm named "Ex-Model Continual Learning" (ExML), where an agent learns from a sequence of previously trained models instead of raw data.
arXiv Detail & Related papers (2021-12-13T09:46:16Z) - Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models [55.41644538483948]
In modern data science, it is often more valuable to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.