Active-learning-based non-intrusive Model Order Reduction
- URL: http://arxiv.org/abs/2204.08523v1
- Date: Fri, 8 Apr 2022 22:33:51 GMT
- Title: Active-learning-based non-intrusive Model Order Reduction
- Authors: Qinyu Zhuang, Dirk Hartmann, Hans Joachim Bungartz, Juan Manuel
Lorenzi
- Abstract summary: In this work, we propose a new active learning approach with two novelties.
One novelty of our approach is the use of single-time-step snapshots of system states drawn from an estimate of the reduced-state space.
We also introduce a use case-independent validation strategy based on Probably Approximately Correct (PAC) learning.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Model Order Reduction (MOR) technique can provide compact numerical
models for fast simulation. Unlike intrusive MOR methods, non-intrusive MOR
does not require access to the internals of the Full Order Models (FOMs), in
particular their system matrices. Since non-intrusive MOR methods rely strongly
on snapshots of the FOMs, constructing good snapshot sets becomes crucial.
In this work, we propose a new active learning approach with two novelties. The
first is the use of single-time-step snapshots of system states drawn from an
estimate of the reduced-state space. These states are selected using a greedy
strategy guided by an error estimator based on Gaussian Process Regression
(GPR). Additionally, we introduce a use
case-independent validation strategy based on Probably Approximately Correct
(PAC) learning. In this work, we use Artificial Neural Networks (ANNs) to
identify the Reduced Order Model (ROM); however, the method could equally be
applied to other ROM identification methods. The performance of the whole
workflow is tested on a 2-D thermal conduction model and a 3-D vacuum furnace
model. With little required user interaction and a training strategy
independent of any specific use case, the proposed method offers great
potential for industrial use in creating so-called executable Digital Twins
(DTs).
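The greedy, estimator-driven snapshot selection and the PAC-based validation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `error_estimate` is a hypothetical stand-in for the GPR-based error estimator, and `pac_sample_size` applies the standard PAC bound N ≥ ln(δ)/ln(1−ε) under the assumption that validation runs are i.i.d.

```python
import math
import numpy as np

def pac_sample_size(eps, delta):
    """Smallest N such that a ROM passing N i.i.d. random validation tests
    has failure probability <= eps with confidence 1 - delta.
    Standard PAC bound: N >= ln(delta) / ln(1 - eps)."""
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

def greedy_select(candidates, error_estimate, n_snapshots):
    """Greedily pick the candidate reduced states with the largest estimated
    ROM error; each pick would trigger one single-time-step FOM solve."""
    selected, pool = [], list(range(len(candidates)))
    for _ in range(n_snapshots):
        best = max(pool, key=lambda i: error_estimate(candidates[i]))
        selected.append(best)
        pool.remove(best)
        # in the full workflow, the GPR error estimator would be refitted
        # here using the error observed at the newly computed snapshot
    return selected

# toy demonstration with a synthetic error estimator
rng = np.random.default_rng(0)
cands = rng.uniform(-1.0, 1.0, size=(20, 3))   # candidate reduced states
err = lambda x: float(np.linalg.norm(x))       # stand-in for the GPR estimate
picked = greedy_select(cands, err, 5)

print(picked)                                  # indices of the 5 chosen states
print(pac_sample_size(0.05, 0.01))             # -> 90 validation runs
```

The design rationale sketched here is that maximizing a cheap surrogate error keeps the number of expensive FOM evaluations small, while the PAC bound replaces a use-case-specific validation set with a fixed number of random tests.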
Related papers
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
arXiv Detail & Related papers (2024-08-15T16:47:59Z) - Adv-KD: Adversarial Knowledge Distillation for Faster Diffusion Sampling [2.91204440475204]
Diffusion Probabilistic Models (DPMs) have emerged as a powerful class of deep generative models.
They rely on sequential denoising steps during sample generation.
We propose a novel method that integrates denoising phases directly into the model's architecture.
arXiv Detail & Related papers (2024-05-31T08:19:44Z) - Diffusion-Model-Assisted Supervised Learning of Generative Models for
Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in the supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z) - Improving Probabilistic Bisimulation for MDPs Using Machine Learning [0.0]
We propose a new technique to partition the state space of a given model to its probabilistic bisimulation classes.
The approach can significantly decrease the running time compared to state-of-the-art tools.
arXiv Detail & Related papers (2023-07-30T12:58:12Z) - Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z) - Model Predictive Control with Self-supervised Representation Learning [13.225264876433528]
We propose the use of a reconstruction function within the TD-MPC framework, so that the agent can reconstruct the original observation.
Our proposed addition of another loss term leads to improved performance on both state- and image-based tasks.
arXiv Detail & Related papers (2023-04-14T16:02:04Z) - Incremental Online Learning Algorithms Comparison for Gesture and Visual
Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require plenty of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - FOSTER: Feature Boosting and Compression for Class-Incremental Learning [52.603520403933985]
Deep neural networks suffer from catastrophic forgetting when learning new categories.
We propose a novel two-stage learning paradigm FOSTER, empowering the model to learn new categories adaptively.
arXiv Detail & Related papers (2022-04-10T11:38:33Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation and can therefore be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Model Order Reduction based on Runge-Kutta Neural Network [0.0]
In this work, we apply modifications to each of the two steps and investigate their impact using three simulation models.
For the model reconstruction step, two types of neural network architectures are compared: Multilayer Perceptron (MLP) and Runge-Kutta Neural Network (RKNN).
arXiv Detail & Related papers (2021-03-25T13:02:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.