An Efficient Hierarchical Kriging Modeling Method for High-dimension
Multi-fidelity Problems
- URL: http://arxiv.org/abs/2301.00216v1
- Date: Sat, 31 Dec 2022 15:17:07 GMT
- Title: An Efficient Hierarchical Kriging Modeling Method for High-dimension
Multi-fidelity Problems
- Authors: Youwei He, Jinliang Luo
- Abstract summary: The multi-fidelity Kriging model is a promising technique in surrogate-based design.
The cost of building a multi-fidelity Kriging model increases significantly with the problem dimension.
An efficient hierarchical Kriging modeling method is proposed to address this issue.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The multi-fidelity Kriging model is a promising technique in surrogate-based
design, as it can balance model accuracy against the cost of sample preparation by
fusing low- and high-fidelity data. However, the cost of building a
multi-fidelity Kriging model increases significantly with the problem
dimension. To address this issue, an efficient hierarchical Kriging
modeling method is proposed. In building the low-fidelity model, the maximal
information coefficient is used to calculate the relative values of the
hyperparameters. With this, the maximum likelihood estimation problem for
determining the hyperparameters is transformed into a one-dimensional optimization
problem, which can be solved efficiently and thus improves the
modeling efficiency significantly. A local search is further employed to
exploit the hyperparameter search space and improve model accuracy. The
high-fidelity model is built in a similar manner, with the hyperparameters of the
low-fidelity model serving as the relative values of the hyperparameters for the
high-fidelity model. The performance of the proposed method is compared with
the conventional tuning strategy on ten analytic problems
and an engineering problem of modeling the isentropic efficiency of a
compressor rotor. The empirical results demonstrate that the modeling time of
the proposed method is reduced significantly without sacrificing model
accuracy. For the modeling of the isentropic efficiency of the compressor
rotor, the proposed method saves about 90% of the cost compared with the
conventional strategy while achieving higher accuracy.
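The core idea of the abstract — fix the *relative* sizes of the per-dimension correlation hyperparameters, then tune only a single scale factor by maximum likelihood — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it uses an ordinary centered Kriging likelihood, and it substitutes the squared Pearson correlation for the maximal information coefficient (computing the true MIC would need an extra library such as minepy); all function names here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gauss_corr(X, theta):
    """Gaussian correlation matrix R_ij = exp(-sum_k theta_k (x_ik - x_jk)^2)."""
    d2 = (X[:, None, :] - X[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(axis=-1))

def neg_concentrated_loglik(t, X, y, rel):
    """Concentrated -2*log-likelihood with theta fixed to t * rel (scalar t)."""
    theta = t * rel
    n = len(y)
    R = gauss_corr(X, theta) + 1e-8 * np.eye(n)   # small nugget for stability
    L = np.linalg.cholesky(R)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    sigma2 = (y @ alpha) / n                      # MLE of the process variance
    return n * np.log(sigma2) + 2.0 * np.log(np.diag(L)).sum()

def fit_theta_1d(X, y):
    yc = y - y.mean()
    # Relative hyperparameter values per dimension. The paper derives these
    # from the maximal information coefficient; squared Pearson correlation
    # serves as a cheap stand-in for illustration.
    rel = np.array([np.corrcoef(X[:, k], yc)[0, 1] ** 2
                    for k in range(X.shape[1])])
    rel = np.maximum(rel, 1e-3)
    rel /= rel.max()
    # One-dimensional MLE over the common scale factor t.
    res = minimize_scalar(neg_concentrated_loglik, bounds=(1e-3, 1e3),
                          args=(X, yc, rel), method="bounded")
    return res.x * rel                            # full hyperparameter vector

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 5))
y = np.sin(1.5 * X[:, 0]) + 0.1 * X[:, 1]  # dimension 0 dominates the response
theta = fit_theta_1d(X, y)
```

Fixing the ratios `rel` collapses the d-dimensional hyperparameter search into a scalar search over `t`, which is the source of the reported speedup; per the abstract, the method additionally refines the result with a local search and reuses the low-fidelity hyperparameters as the relative values when fitting the high-fidelity model.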
Related papers
- Streamlining Ocean Dynamics Modeling with Fourier Neural Operators: A Multiobjective Hyperparameter and Architecture Optimization Approach [5.232806761554172]
We use DeepHyper's advanced multiobjective optimization search algorithms to streamline the development of neural networks tailored for ocean modeling.
We demonstrate an approach to enhance the use of FNOs in ocean dynamics forecasting, offering a scalable solution with improved precision.
arXiv Detail & Related papers (2024-04-07T14:29:23Z) - Sine Activated Low-Rank Matrices for Parameter Efficient Learning [25.12262017296922]
We propose a novel theoretical framework that integrates a sinusoidal function within the low-rank decomposition process.
Our method proves to be an enhancement for existing low-rank models, as evidenced by its successful application in Vision Transformers (ViT), Large Language Models (LLMs), and Neural Radiance Fields (NeRF).
arXiv Detail & Related papers (2024-03-28T08:58:20Z) - Multi-fidelity reduced-order surrogate modeling [5.346062841242067]
We present a new data-driven strategy that combines dimensionality reduction with multi-fidelity neural network surrogates.
We show that the onset of instabilities and transients are well captured by this surrogate technique.
arXiv Detail & Related papers (2023-09-01T08:16:53Z) - E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z) - Understanding Parameter Sharing in Transformers [53.75988363281843]
Previous work on Transformers has focused on sharing parameters in different layers, which can improve the performance of models with limited parameters by increasing model depth.
We show that the success of this approach can be largely attributed to better convergence, with only a small part due to the increased model complexity.
Experiments on 8 machine translation tasks show that our model achieves competitive performance with only half the model complexity of parameter sharing models.
arXiv Detail & Related papers (2023-06-15T10:48:59Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image
Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Many recent works propose fine-tuning only a small portion of the parameters while keeping most parameters shared across different tasks.
We show that all of these methods are in fact sparsely fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
arXiv Detail & Related papers (2022-11-28T17:41:48Z) - Sliced gradient-enhanced Kriging for high-dimensional function
approximation [2.8228516010000617]
Gradient-enhanced Kriging (GE-Kriging) is a well-established surrogate modelling technique for approximating expensive computational models.
It tends to become impractical for high-dimensional problems due to the size of the inherent correlation matrix.
A new method, called sliced GE-Kriging (SGE-Kriging), is developed in this paper for reducing the size of the correlation matrix.
The results show that the SGE-Kriging model achieves accuracy and robustness comparable to the standard model at a much lower training cost.
arXiv Detail & Related papers (2022-04-05T07:27:14Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization
Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a predictive model to estimate the values of the unknown parameters and then solving the problem using these values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions that lead to better decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z) - Hybrid modeling: Applications in real-time diagnosis [64.5040763067757]
We outline a novel hybrid modeling approach that combines machine-learning-inspired models and physics-based models.
We use such models in real-time diagnosis applications.
arXiv Detail & Related papers (2020-03-04T00:44:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.