MFNets: Data efficient all-at-once learning of multifidelity surrogates
as directed networks of information sources
- URL: http://arxiv.org/abs/2008.02672v2
- Date: Mon, 23 Aug 2021 19:01:00 GMT
- Title: MFNets: Data efficient all-at-once learning of multifidelity surrogates
as directed networks of information sources
- Authors: Alex Gorodetsky and John D. Jakeman and Gianluca Geraci
- Abstract summary: We present an approach for constructing a surrogate from ensembles of information sources of varying cost and accuracy.
The multifidelity surrogate encodes connections between information sources as a directed acyclic graph, and is trained via gradient-based minimization of a nonlinear least squares objective.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an approach for constructing a surrogate from ensembles of
information sources of varying cost and accuracy. The multifidelity surrogate
encodes connections between information sources as a directed acyclic graph,
and is trained via gradient-based minimization of a nonlinear least squares
objective. While the vast majority of state-of-the-art approaches assume
hierarchical connections between information sources, our approach works with
flexibly structured information sources that need not admit a strict hierarchy. The
formulation has two advantages: (1) increased data efficiency due to
parsimonious multifidelity networks that can be tailored to the application;
and (2) no constraints on the training data -- we can combine noisy, non-nested
evaluations of the information sources. Numerical examples ranging from
synthetic to physics-based computational mechanics simulations indicate the
error in our approach can be orders-of-magnitude smaller, particularly in the
low-data regime, than single-fidelity and hierarchical multifidelity
approaches.
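The abstract's core idea can be illustrated with a minimal sketch. This is not the authors' implementation: the two surrogates, their features, and the scaling-plus-discrepancy edge below are hypothetical stand-ins, and SciPy's generic nonlinear least-squares solver replaces the paper's own gradient-based training. It shows a two-node directed graph (a cheap low-fidelity source feeding a high-fidelity source) whose parameters are all fit at once from non-nested samples of each source:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical truth and a cheap, biased approximation of it.
f_hi = lambda x: np.sin(2 * x) + 0.3 * x
f_lo = lambda x: 0.8 * np.sin(2 * x)

# Non-nested training sets: many cheap samples, few expensive ones.
x_lo = rng.uniform(0.0, 3.0, 40); y_lo = f_lo(x_lo)
x_hi = rng.uniform(0.0, 3.0, 5);  y_hi = f_hi(x_hi)

def feat(x):  # features of the low-fidelity surrogate (an assumption)
    return np.stack([np.ones_like(x), x, np.sin(2 * x)], axis=-1)

def residuals(theta):
    a, rho, b = theta[:3], theta[3], theta[4:]
    r_lo = feat(x_lo) @ a - y_lo                       # low-fidelity node
    # high-fidelity node = rho * (low-fidelity surrogate) + linear discrepancy:
    # the single edge of this toy directed graph
    r_hi = rho * (feat(x_hi) @ a) + b[0] + b[1] * x_hi - y_hi
    return np.concatenate([r_lo, r_hi])                # one joint objective

# "All-at-once": a, rho, and b are trained together, not sequentially.
sol = least_squares(residuals, x0=np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0]))
a, rho, b = sol.x[:3], sol.x[3], sol.x[4:]

x_test = np.linspace(0.0, 3.0, 50)
pred = rho * (feat(x_test) @ a) + b[0] + b[1] * x_test
print("max abs error:", np.max(np.abs(pred - f_hi(x_test))))
```

Because each information source contributes its own residual block to the stacked objective, nothing requires the sample sets to be nested, and noisy evaluations simply enter their block unchanged, which is the "no constraints on the training data" point above.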
Related papers
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Source-Aware Embedding Training on Heterogeneous Information Networks [11.006488894262748]
We propose a scalable unsupervised framework to align the embedding distributions among multiple sources of a heterogeneous information network embedding.
Experimental results on real-world datasets in a variety of downstream tasks validate the performance of our method over the state-of-the-art heterogeneous information network embedding algorithms.
arXiv Detail & Related papers (2023-07-10T04:22:49Z)
- Disentangled Multi-Fidelity Deep Bayesian Active Learning [19.031567953748453]
Multi-fidelity active learning aims to learn a direct mapping from input parameters to simulation outputs at the highest fidelity.
Deep learning-based methods often impose a hierarchical structure in hidden representations, which only supports passing information from low-fidelity to high-fidelity.
We propose a novel framework called Disentangled Multi-fidelity Deep Bayesian Active Learning (D-MFDAL), which learns the surrogate models conditioned on the distribution of functions at multiple fidelities.
arXiv Detail & Related papers (2023-05-07T23:14:58Z)
- Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z)
- Probabilistic Neural Data Fusion for Learning from an Arbitrary Number of Multi-fidelity Data Sets [0.0]
In this paper, we employ neural networks (NNs) for data fusion in scenarios where data is very scarce.
We introduce a unique NN architecture that converts MF modeling into a nonlinear manifold learning problem.
Our approach provides high predictive power while quantifying various sources of uncertainty.
arXiv Detail & Related papers (2023-01-30T20:27:55Z)
- Robust Direct Learning for Causal Data Fusion [14.462235940634969]
We provide a framework for integrating multi-source data that separates the treatment effect from other nuisance functions.
We also propose a causal information-aware weighting function motivated by theoretical insights from the semiparametric efficiency theory.
arXiv Detail & Related papers (2022-11-01T03:33:22Z)
- Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Deep Transfer Learning for Multi-source Entity Linkage via Domain Adaptation [63.24594955429465]
Multi-source entity linkage is critical in high-impact applications such as data cleaning and user stitching.
AdaMEL is a deep transfer learning framework that learns generic high-level knowledge to perform multi-source entity linkage.
Our framework achieves state-of-the-art results with 8.21% improvement on average over methods based on supervised learning.
arXiv Detail & Related papers (2021-10-27T15:20:41Z)
- Resource-constrained Federated Edge Learning with Heterogeneous Data: Formulation and Analysis [8.863089484787835]
We propose a distributed approximate Newton-type training scheme, namely FedOVA, to address the statistical challenge posed by heterogeneous data.
FedOVA decomposes a multi-class classification problem into more straightforward binary classification problems and then combines their respective outputs using ensemble learning.
arXiv Detail & Related papers (2021-10-14T17:35:24Z)
- Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.