Progressive reduced order modeling: empowering data-driven modeling with
selective knowledge transfer
- URL: http://arxiv.org/abs/2310.03770v1
- Date: Wed, 4 Oct 2023 23:50:14 GMT
- Title: Progressive reduced order modeling: empowering data-driven modeling with
selective knowledge transfer
- Authors: Teeratorn Kadeethum, Daniel O'Malley, Youngsoo Choi, Hari S.
Viswanathan, Hongkyu Yoon
- Abstract summary: We propose a progressive reduced order modeling framework that minimizes data requirements and enhances the practicality of data-driven modeling.
Our approach selectively transfers knowledge from previously trained models through gates, much as humans selectively reuse valuable knowledge while ignoring irrelevant information.
We have tested our framework on several cases, including transport in porous media, gravity-driven flow, and finite deformation in hyperelastic materials.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven modeling can suffer from a constant demand for data,
which reduces accuracy and, given the high cost and scarcity of information,
makes it impractical for engineering applications. To address this challenge,
we propose a progressive reduced order modeling framework that minimizes data
requirements and enhances the practicality of data-driven modeling. Our
approach selectively transfers knowledge from previously trained models
through gates, much as humans selectively reuse valuable knowledge while
ignoring irrelevant information. By filtering relevant information from
previous models, we can create a surrogate model with minimal turnaround time
and a smaller training set that still achieves high accuracy. We have tested
our framework on several cases, including transport in porous media,
gravity-driven flow, and finite deformation in hyperelastic materials. Our
results illustrate that retaining information from previous models and
utilizing the valuable portion of that knowledge can significantly improve
the accuracy of the current model. We have demonstrated the importance of
progressive knowledge transfer and its impact on model accuracy with reduced
training samples. For instance, our framework with four parent models
outperforms its no-parent counterpart trained on a dataset nine times larger.
Our research unlocks the potential of data-driven modeling for practical
engineering applications by mitigating the data scarcity issue. The proposed
framework is a significant step toward more efficient and cost-effective
data-driven modeling, fostering advancements across various fields.
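The gating mechanism can be pictured as a set of learnable valves on lateral connections from frozen, previously trained parent models. The sketch below is a hypothetical PyTorch rendering of that idea; the layer shapes, the sigmoid gate form, and all names are our assumptions, not the authors' published architecture.

    # Illustrative sketch of selective knowledge transfer through gates.
    # Shapes, the sigmoid gate, and all names are expository assumptions.
    import torch
    import torch.nn as nn

    class GatedTransferLayer(nn.Module):
        """One hidden layer that blends its own features with gated
        features from the corresponding layer of frozen parent models."""

        def __init__(self, dim_in, dim_out, num_parents):
            super().__init__()
            self.own = nn.Linear(dim_in, dim_out)
            # One lateral projection and one learnable gate per parent.
            self.laterals = nn.ModuleList(
                [nn.Linear(dim_out, dim_out) for _ in range(num_parents)])
            self.gates = nn.Parameter(torch.zeros(num_parents))

        def forward(self, x, parent_feats):
            # parent_feats: hidden activations of the frozen parents at
            # this depth, computed upstream under torch.no_grad().
            h = self.own(x)
            for gate, lateral, feat in zip(self.gates, self.laterals,
                                           parent_feats):
                # A gate near 0 ignores a parent; near 1 reuses it fully.
                h = h + torch.sigmoid(gate) * lateral(feat)
            return torch.relu(h)

Because the parents stay frozen and only the gates, the lateral projections, and the child's own weights are trained, the child model can reach high accuracy from a much smaller training set.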
Related papers
- Learning-based Models for Vulnerability Detection: An Extensive Study [3.1317409221921144]
We comprehensively investigate two types of state-of-the-art learning-based approaches.
We experimentally demonstrate the superiority of sequence-based models and the limited capabilities of graph-based models.
arXiv Detail & Related papers (2024-08-14T13:01:30Z)
- Encapsulating Knowledge in One Prompt [56.31088116526825]
KiOP encapsulates knowledge from various models into a solitary prompt without altering the original models or requiring access to the training data.
From a practicality standpoint, this paradigm demonstrates the effectiveness of Visual Prompt in data-inaccessible contexts.
Experiments across various datasets and models demonstrate the efficacy of the proposed KiOP knowledge transfer paradigm.
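Visual prompting of the kind KiOP builds on usually optimizes a small input-space pattern while every backbone model stays frozen. A hypothetical minimal sketch follows; the additive-prompt form and all names are assumptions rather than the paper's actual design.

    # Hypothetical minimal visual prompt: one learnable pixel pattern
    # added to every input image while the backbone stays frozen.
    import torch
    import torch.nn as nn

    class VisualPrompt(nn.Module):
        def __init__(self, channels=3, size=224):
            super().__init__()
            self.delta = nn.Parameter(torch.zeros(channels, size, size))

        def forward(self, images):
            # One shared prompt; the encapsulated knowledge lives in delta.
            return images + self.delta

    prompt = VisualPrompt()
    # Only the prompt's parameters are optimized; the frozen backbone
    # (not shown) never receives gradient updates.
    optimizer = torch.optim.Adam(prompt.parameters(), lr=1e-2)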
arXiv Detail & Related papers (2024-07-16T16:35:23Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Data Quality Aware Approaches for Addressing Model Drift of Semantic Segmentation Models [1.6385815610837167]
This study investigates two prominent quality-aware strategies to combat model drift.
The first leverages image quality assessment metrics to select high-quality training data, improving model robustness.
The second uses learned feature vectors from existing models to guide the selection of future data, aligning it with the model's prior knowledge.
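The first strategy amounts to ranking candidate images with a no-reference quality metric and training only on the best-scoring ones. A minimal sketch follows, where quality_score stands in for any image quality assessment function and is an assumption, not a specific library call.

    # Sketch of quality-aware training-data selection: rank candidates by
    # an image-quality score and keep the top fraction. `quality_score`
    # is a stand-in for any no-reference IQA metric.
    def select_high_quality(images, quality_score, keep_fraction=0.5):
        scored = sorted(images, key=quality_score, reverse=True)
        cutoff = max(1, int(len(scored) * keep_fraction))
        return scored[:cutoff]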
arXiv Detail & Related papers (2024-02-11T18:01:52Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method can produce models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
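The core operation in such gradient-projection methods is removing the component of the unlearning update that would interfere with retained knowledge. A minimal NumPy sketch follows; treating the orthonormal basis U of the retained-gradient subspace as a given input is our simplifying assumption.

    # Sketch of gradient projection for unlearning: strip from the
    # forget-set gradient the part lying in the subspace spanned by the
    # retained data's gradients, so the update (approximately) preserves
    # knowledge about the remaining dataset.
    import numpy as np

    def project_out(grad, U):
        """Project `grad` onto the orthogonal complement of span(U)."""
        return grad - U @ (U.T @ grad)

    # Example: a random gradient and a 2-D retained subspace in R^10.
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((10, 2)))  # orthonormal basis
    g = rng.standard_normal(10)
    g_safe = project_out(g, U)
    assert abs(U.T @ g_safe).max() < 1e-10  # nothing left in span(U)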
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Fantastic Gains and Where to Find Them: On the Existence and Prospect of General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z)
- Towards Efficient Task-Driven Model Reprogramming with Foundation Models [52.411508216448716]
Vision foundation models exhibit impressive power, benefiting from the extremely large model capacity and broad training data.
However, in practice, downstream scenarios may only support a small model due to the limited computational resources or efficiency considerations.
This brings a critical challenge for the real-world application of foundation models: one has to transfer the knowledge of a foundation model to the downstream task.
arXiv Detail & Related papers (2023-04-05T07:28:33Z)
- A Physics-informed Diffusion Model for High-fidelity Flow Field Reconstruction [0.0]
We propose a diffusion model that uses only high-fidelity data during training.
With different configurations, our model can reconstruct high-fidelity data from either a regular low-fidelity sample or a sparsely measured sample.
Our model produces accurate reconstructions of 2D turbulent flows from different input sources without retraining.
arXiv Detail & Related papers (2022-11-26T23:14:18Z)
- CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We use labels of causal and non-causal agents to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE compared to the original data.
arXiv Detail & Related papers (2022-07-07T21:28:23Z)
- Knowledge-Guided Dynamic Systems Modeling: A Case Study on Modeling River Water Quality [8.110949636804774]
Modeling real-world phenomena is a focus of many science and engineering efforts, such as ecological modeling and financial forecasting.
Building an accurate model for complex and dynamic systems improves understanding of underlying processes and leads to resource efficiency.
Data-driven modeling, at the opposite extreme from purely knowledge-based modeling, learns a model directly from data, but it requires extensive data and is prone to overfitting.
We focus on an intermediate approach, model revision, in which prior knowledge and data are combined to achieve the best of both worlds.
arXiv Detail & Related papers (2021-03-01T06:31:38Z)