GAR: Generalized Autoregression for Multi-Fidelity Fusion
- URL: http://arxiv.org/abs/2301.05729v1
- Date: Fri, 13 Jan 2023 19:10:25 GMT
- Title: GAR: Generalized Autoregression for Multi-Fidelity Fusion
- Authors: Yuxin Wang, Zheng Xing, Wei W. Xing
- Abstract summary: Generalized autoregression (GAR) is proposed to combine the results of low-fidelity (fast but inaccurate) and high-fidelity (slow but accurate) simulations.
GAR can deal with arbitrary dimensional outputs and arbitrary multi-fidelity data structures to satisfy the demands of multi-fidelity fusion.
GAR consistently outperforms the SOTA methods by a large margin (up to 6x improvement in RMSE) with only a couple of high-fidelity training samples.
- Score: 16.464126364802283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many scientific research and engineering applications where repeated
simulations of complex systems are conducted, a surrogate is commonly adopted
to quickly estimate the whole system. To reduce the expensive cost of
generating training examples, it has become a promising approach to combine the
results of low-fidelity (fast but inaccurate) and high-fidelity (slow but
accurate) simulations. Despite the fast developments of multi-fidelity fusion
techniques, most existing methods require particular data structures and do not
scale well to high-dimensional output. To resolve these issues, we generalize
the classic autoregression (AR), which is widely used due to its simplicity,
robustness, accuracy, and tractability, and propose generalized autoregression
(GAR) using tensor formulation and latent features. GAR can deal with arbitrary
dimensional outputs and arbitrary multi-fidelity data structures to satisfy the
demands of multi-fidelity fusion for complex problems; it admits a fully
tractable likelihood and posterior requiring no approximate inference and
scales well to high-dimensional problems. Furthermore, we prove the
autokrigeability theorem based on GAR in the multi-fidelity case and develop
CIGAR, a simplified GAR that retains the exact predictive mean accuracy while
reducing computation by a factor of d^3, where d is the dimensionality of the
output. The empirical assessment includes many canonical PDEs and real
scientific examples and demonstrates that the proposed method consistently
outperforms the SOTA methods by a large margin (up to 6x improvement in RMSE)
with only a couple of high-fidelity training samples.
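The classic autoregression scheme that GAR generalizes models the high-fidelity output as a scaled low-fidelity output plus a learned residual, f_high(x) ≈ rho * f_low(x) + delta(x). The following is a minimal illustrative sketch of that two-fidelity scheme for scalar outputs, not the paper's implementation; all function names, the kernel-ridge residual model, and the toy simulators are assumptions made for illustration.

```python
# Sketch of classic two-fidelity autoregression: f_high(x) = rho * f_low(x) + delta(x).
# rho is fit by least squares on co-located data; delta is modeled by kernel ridge
# regression with an RBF kernel. Illustrative only; GAR extends this to tensor
# outputs and arbitrary (e.g. non-subset) multi-fidelity data structures.
import numpy as np

def fit_ar_fusion(x_hf, y_lf_at_hf, y_hf, lengthscale=0.2, noise=1e-6):
    """Fit the scale rho and a kernel-ridge model of the residual delta."""
    # 1) scale factor rho via least squares on the co-located observations
    rho = float(np.dot(y_lf_at_hf, y_hf) / np.dot(y_lf_at_hf, y_lf_at_hf))
    # 2) residual delta(x) = y_hf - rho * y_lf, interpolated with an RBF kernel
    resid = y_hf - rho * y_lf_at_hf
    K = np.exp(-0.5 * (x_hf[:, None] - x_hf[None, :]) ** 2 / lengthscale**2)
    alpha = np.linalg.solve(K + noise * np.eye(len(x_hf)), resid)

    def predict(x_new, y_lf_new):
        # fuse a cheap low-fidelity evaluation with the learned correction
        k = np.exp(-0.5 * (x_new[:, None] - x_hf[None, :]) ** 2 / lengthscale**2)
        return rho * y_lf_new + k @ alpha

    return predict

# Toy example: the low-fidelity simulator is a biased version of the true one.
x_hf = np.linspace(0.0, 1.0, 8)
y_lf = np.sin(2 * np.pi * x_hf)                        # cheap, biased simulator
y_hf = 1.5 * np.sin(2 * np.pi * x_hf) + 0.2 * x_hf     # expensive simulator
predict = fit_ar_fusion(x_hf, y_lf, y_hf)

x_test = np.array([0.33])
print(predict(x_test, np.sin(2 * np.pi * x_test)))
```

With only a handful of high-fidelity samples, the residual delta is typically much smoother than f_high itself, which is why this decomposition works well in the data-scarce regime the abstract targets.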
Related papers
- Adaptive Sampled Softmax with Inverted Multi-Index: Methods, Theory and Applications [79.53938312089308]
The MIDX-Sampler is a novel adaptive sampling strategy based on an inverted multi-index approach.
Our method is backed by rigorous theoretical analysis, addressing key concerns such as sampling bias, gradient bias, convergence rates, and generalization error bounds.
arXiv Detail & Related papers (2025-01-15T04:09:21Z) - Federated Smoothing Proximal Gradient for Quantile Regression with Non-Convex Penalties [3.269165283595478]
Distributed sensors in the internet-of-things (IoT) generate vast amounts of sparse data.
We propose a federated smoothing proximal gradient algorithm that integrates a smoothing mechanism, improving both precision and computational speed.
arXiv Detail & Related papers (2024-08-10T21:50:19Z) - Robust Capped lp-Norm Support Vector Ordinal Regression [85.84718111830752]
Ordinal regression is a specialized supervised problem where the labels show an inherent order.
Support Vector Ordinal Regression, as an outstanding ordinal regression model, is widely used in many ordinal regression tasks.
We introduce a new model, Capped $\ell_p$-Norm Support Vector Ordinal Regression (CSVOR), that is robust to outliers.
arXiv Detail & Related papers (2024-04-25T13:56:05Z) - Multifidelity Surrogate Models: A New Data Fusion Perspective [0.0]
Multifidelity surrogate modelling combines data of varying accuracy and cost from different sources.
It strategically uses low-fidelity models for rapid evaluations, saving computational resources.
It improves decision-making by addressing uncertainties and surpassing the limits of single-fidelity models.
arXiv Detail & Related papers (2024-04-21T11:21:47Z) - Federated Latent Class Regression for Hierarchical Data [5.110894308882439]
Federated Learning (FL) allows a number of agents to participate in training a global machine learning model without disclosing locally stored data.
We propose a novel probabilistic model, Hierarchical Latent Class Regression (HLCR), and its extension to Federated Learning, FEDHLCR.
Our inference algorithm, being derived from Bayesian theory, provides strong convergence guarantees and good robustness to overfitting. Experimental results show that FEDHLCR offers fast convergence even in non-IID datasets.
arXiv Detail & Related papers (2022-06-22T00:33:04Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z) - RMFGP: Rotated Multi-fidelity Gaussian process with Dimension Reduction
for High-dimensional Uncertainty Quantification [12.826754199680474]
Multi-fidelity modelling enables accurate inference even when only a small set of accurate data is available.
By combining the realizations of the high-fidelity model with one or more low-fidelity models, the multi-fidelity method can make accurate predictions of quantities of interest.
This paper proposes a new dimension reduction framework based on rotated multi-fidelity Gaussian process regression and a Bayesian active learning scheme.
arXiv Detail & Related papers (2022-04-11T01:20:35Z) - Generalizable Mixed-Precision Quantization via Attribution Rank
Preservation [90.26603048354575]
We propose a generalizable mixed-precision quantization (GMPQ) method for efficient inference.
Our method obtains competitive accuracy-complexity trade-off compared with the state-of-the-art mixed-precision networks.
arXiv Detail & Related papers (2021-08-05T16:41:57Z) - A general sample complexity analysis of vanilla policy gradient [101.16957584135767]
Policy gradient (PG) is one of the most popular methods for solving reinforcement learning (RL) problems.
We provide a general sample complexity analysis of the "vanilla" PG method.
arXiv Detail & Related papers (2021-07-23T19:38:17Z) - Multi-fidelity regression using artificial neural networks: efficient
approximation of parameter-dependent output quantities [0.17499351967216337]
We present the use of artificial neural networks applied to multi-fidelity regression problems.
The introduced models are compared against a traditional multi-fidelity scheme, co-kriging.
We also show an application of multi-fidelity regression to an engineering problem.
arXiv Detail & Related papers (2021-02-26T11:29:00Z) - SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier
Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to address it.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.