Data efficient surrogate modeling for engineering design: Ensemble-free
batch mode deep active learning for regression
- URL: http://arxiv.org/abs/2211.10360v1
- Date: Wed, 16 Nov 2022 02:31:57 GMT
- Title: Data efficient surrogate modeling for engineering design: Ensemble-free
batch mode deep active learning for regression
- Authors: Harsh Vardhan, Umesh Timalsina, Peter Volgyesi, Janos Sztipanovits
- Abstract summary: We propose a simple and scalable approach for active learning that works in a student-teacher manner to train a surrogate model.
By using this proposed approach, we are able to achieve the same level of surrogate accuracy as the other baselines like DBAL and Monte Carlo sampling.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In a computer-aided engineering design optimization problem that involves
a notoriously complex and time-consuming simulator, the prevalent approach is to
replace these simulations with a data-driven surrogate that approximates the
simulator's behavior at a much cheaper cost. The main challenge in creating an
inexpensive data-driven surrogate is the generation of a large number of samples
from these computationally expensive numerical simulations. In such cases,
Active Learning (AL) methods have been used to learn the input--output behavior
while labeling as few samples as possible. The current trend in AL for
regression problems is dominated by the Bayesian framework, which requires
training an ensemble of learning models and makes surrogate training
computationally tedious when the underlying learning model is a Deep
Neural Network (DNN). However, DNNs have an excellent capability to learn
highly nonlinear and complex relationships even for very high-dimensional
problems. To leverage the excellent learning capability of deep networks while
avoiding the computational complexity of the Bayesian paradigm, in this
work we propose a simple and scalable approach for active learning that works
in a student-teacher manner to train a surrogate model. Using this proposed
approach, we are able to achieve the same level of surrogate accuracy as
baselines such as DBAL and Monte Carlo sampling with up to 40% fewer
samples. We empirically evaluated this method on multiple use cases spanning
three engineering design domains: finite element analysis,
computational fluid dynamics, and propeller design.
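The student-teacher loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a student network is fit on the labeled samples, a teacher network is fit to predict the student's error, and the batch of candidates with the highest predicted error is labeled next. The `simulator` function, network sizes, and batch size are all hypothetical stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def simulator(x):
    # Hypothetical stand-in for an expensive simulator (1-D toy function).
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 0] ** 2

# Pool of unlabeled candidate designs and a small seed set of labeled ones.
pool = rng.uniform(-2, 2, size=(500, 1))
labeled_idx = list(rng.choice(len(pool), size=20, replace=False))

student = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
teacher = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1)

for _ in range(5):  # active learning rounds
    X_lab = pool[labeled_idx]
    y_lab = simulator(X_lab)          # "labeling" = running the simulator
    student.fit(X_lab, y_lab)
    # Teacher learns to predict the student's absolute error on labeled data.
    residual = np.abs(y_lab - student.predict(X_lab))
    teacher.fit(X_lab, residual)
    # Query the batch of pool points where the teacher expects the largest error.
    unlabeled = [i for i in range(len(pool)) if i not in labeled_idx]
    predicted_err = teacher.predict(pool[unlabeled])
    batch = [unlabeled[j] for j in np.argsort(predicted_err)[-10:]]
    labeled_idx.extend(batch)
```

Because the teacher is a single network rather than an ensemble, each round adds only one extra model fit, which is the ensemble-free property the paper emphasizes.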
Related papers
- Meta-Learning for Airflow Simulations with Graph Neural Networks [3.52359746858894]
We present a meta-learning approach to enhance the performance of learned models on out-of-distribution (OoD) samples.
Specifically, we set the airflow simulation in CFD over various airfoils as a meta-learning problem, where each set of examples defined on a single airfoil shape is treated as a separate task.
We experimentally demonstrate the efficiency of the proposed approach for improving the OoD generalization performance of learned models.
arXiv Detail & Related papers (2023-06-18T19:25:13Z) - Hindsight States: Blending Sim and Real Task Elements for Efficient
Reinforcement Learning [61.3506230781327]
In robotics, one approach to generate training data builds on simulations based on dynamics models derived from first principles.
Here, we leverage the imbalance in complexity of the dynamics to learn more sample-efficiently.
We validate our method on several challenging simulated tasks and demonstrate that it improves learning both alone and when combined with an existing hindsight algorithm.
arXiv Detail & Related papers (2023-03-03T21:55:04Z) - Towards Robust Dataset Learning [90.2590325441068]
We propose a principled, tri-level optimization to formulate the robust dataset learning problem.
Under an abstraction model that characterizes robust vs. non-robust features, the proposed method provably learns a robust dataset.
arXiv Detail & Related papers (2022-11-19T17:06:10Z) - Simulation-Based Parallel Training [55.41644538483948]
We present our ongoing work to design a training framework that alleviates those bottlenecks.
It generates data in parallel with the training process.
Generating data online in this way biases the training distribution; we present a strategy to mitigate this bias with a memory buffer.
arXiv Detail & Related papers (2022-11-08T09:31:25Z) - DeepAL for Regression Using $\epsilon$-weighted Hybrid Query Strategy [0.799536002595393]
We propose a novel sampling technique by combining the active learning (AL) method with Deep Learning (DL).
We call this method the $\epsilon$-weighted hybrid query strategy ($\epsilon$-HQS).
During the empirical evaluation, better accuracy of the surrogate was observed in comparison to other methods of sample selection.
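The $\epsilon$-weighted hybrid idea above can be sketched as a batch selector that mixes exploitation (highest-scoring candidates) with exploration (random candidates) at rate $\epsilon$. This is a hedged illustration only; the function name and parameters are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def epsilon_hybrid_query(scores, batch_size, epsilon):
    # `scores` could be, e.g., teacher-predicted errors per candidate.
    candidates = list(np.argsort(scores)[::-1])  # best-scoring first
    chosen = []
    for _ in range(batch_size):
        if rng.random() < epsilon:
            # Exploration: pick a uniformly random remaining candidate.
            pick = candidates.pop(rng.integers(len(candidates)))
        else:
            # Exploitation: pick the best remaining candidate.
            pick = candidates.pop(0)
        chosen.append(int(pick))
    return chosen

batch = epsilon_hybrid_query(rng.random(100), batch_size=10, epsilon=0.2)
```

Sampling per slot (rather than per batch) keeps the expected exploration fraction at $\epsilon$ while guaranteeing no candidate is selected twice.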
arXiv Detail & Related papers (2022-06-24T14:38:05Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Deep Learning-based FEA surrogate for sub-sea pressure vessel [0.799536002595393]
A pressure vessel contains electronics, power sources, and other sensors that cannot be flooded.
A traditional design approach for a pressure vessel involves running multiple Finite Element Analysis (FEA) based simulations.
Running these FEAs is computationally very costly for any optimization process.
A better approach is the surrogate design with the goal of replacing FEA-based prediction with some learning-based regressor.
arXiv Detail & Related papers (2022-06-06T00:47:10Z) - Use of Multifidelity Training Data and Transfer Learning for Efficient
Construction of Subsurface Flow Surrogate Models [0.0]
To construct data-driven surrogate models, several thousand high-fidelity simulation runs may be required to provide training samples.
We present a framework where most of the training simulations are performed on coarsened geomodels.
The network provides results that are significantly more accurate than the low-fidelity simulations used for most of the training.
arXiv Detail & Related papers (2022-04-23T20:09:49Z) - A Step Towards Efficient Evaluation of Complex Perception Tasks in
Simulation [5.4954641673299145]
We propose an approach that enables efficient large-scale testing using simplified low-fidelity simulators.
Our approach relies on designing an efficient surrogate model corresponding to the compute intensive components of the task under test.
We demonstrate the efficacy of our methodology by evaluating the performance of an autonomous driving task in the Carla simulator with reduced computational expense.
arXiv Detail & Related papers (2021-09-28T13:50:21Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), computed in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.