GAN-MDF: A Method for Multi-fidelity Data Fusion in Digital Twins
- URL: http://arxiv.org/abs/2106.14655v1
- Date: Thu, 24 Jun 2021 06:40:35 GMT
- Title: GAN-MDF: A Method for Multi-fidelity Data Fusion in Digital Twins
- Authors: Lixue Liu, Chao Zhang, Dacheng Tao
- Abstract summary: The Internet of Things (IoT) collects real-time data of physical systems, such as smart factories, intelligent robots, and healthcare systems.
High-fidelity (HF) responses describe the system of interest accurately but are costly to compute.
Low-fidelity (LF) responses have a low computational cost but cannot meet the required accuracy.
We propose a novel generative adversarial network for MDF in digital twins (GAN-MDF).
- Score: 82.71367011801242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Internet of Things (IoT) collects real-time data from physical systems,
such as smart factories, intelligent robots, and healthcare systems, and provides
the necessary support for digital twins. Depending on their quality and accuracy,
these multi-source data are divided into different fidelity levels.
High-fidelity (HF) responses describe the system of interest accurately but are
costly to compute. In contrast, low-fidelity (LF) responses have a low
computational cost but cannot meet the required accuracy. Multi-fidelity
data fusion (MDF) methods aim to use massive LF samples and a small number of
HF samples to develop an accurate and efficient model of the system at a
reasonable computational cost. In this paper, we propose a novel
generative adversarial network for MDF in digital twins (GAN-MDF). The
generator of GAN-MDF is composed of two sub-networks: one extracts the LF
features from an input, and the other integrates the input and the extracted LF
features to form the input of the subsequent discriminator. The discriminator
of GAN-MDF identifies whether the generator output is a real sample generated
by the HF model. To enhance the stability of GAN-MDF's training, we also
introduce a supervised-loss trick that refines the generator weights during each
iteration of the adversarial training. Compared with state-of-the-art
methods, the proposed GAN-MDF has the following advantages: 1) it performs well
with either nested or unnested sample structures; 2) it makes no specific
assumption about the data distribution; and 3) it remains highly robust
even when very few HF samples are provided. The experimental results also
support the validity of GAN-MDF.
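To make the two-sub-network generator and the supervised-loss trick concrete, below is a minimal PyTorch-style sketch of the training scheme described in the abstract. It assumes scalar responses and toy layer sizes; the module names, the use of a mean-squared-error term as the supervised loss, and all hyperparameters are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a GAN-MDF-style training step, assuming scalar responses.
# Layer sizes, loss weights, and the MSE supervised term are illustrative choices.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Two sub-networks: one extracts LF features from the input x,
    the other fuses x with those features to predict an HF-like response."""
    def __init__(self, x_dim: int, lf_dim: int = 16):
        super().__init__()
        self.lf_extractor = nn.Sequential(
            nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, lf_dim))
        self.integrator = nn.Sequential(
            nn.Linear(x_dim + lf_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        lf_feat = self.lf_extractor(x)
        return self.integrator(torch.cat([x, lf_feat], dim=-1))

class Discriminator(nn.Module):
    """Decides whether an (x, response) pair comes from the HF model."""
    def __init__(self, x_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + 1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def train_step(gen, disc, opt_g, opt_d, x_hf, y_hf,
               bce=nn.BCELoss(), mse=nn.MSELoss(), supervised_weight: float = 1.0):
    """One adversarial iteration on a batch of (scarce) HF pairs; y_hf has shape (batch, 1)."""
    # Discriminator update: real HF pairs vs. generated pairs.
    opt_d.zero_grad()
    y_fake = gen(x_hf).detach()
    d_loss = bce(disc(x_hf, y_hf), torch.ones_like(y_hf)) + \
             bce(disc(x_hf, y_fake), torch.zeros_like(y_hf))
    d_loss.backward()
    opt_d.step()

    # Generator update: adversarial term plus the supervised-loss trick,
    # which refines the generator weights at every iteration.
    opt_g.zero_grad()
    y_gen = gen(x_hf)
    g_loss = bce(disc(x_hf, y_gen), torch.ones_like(y_hf)) + \
             supervised_weight * mse(y_gen, y_hf)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In a full pipeline the LF sub-network would typically be fit first on the abundant LF samples, while `train_step` consumes only the scarce HF pairs; `supervised_weight` controls the stabilizing supervised term mentioned above.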
Related papers
- Local Flow Matching Generative Models [19.859984725284896]
Flow Matching (FM) is a simulation-free method for learning a continuous and invertible flow to interpolate between two distributions.
We introduce Local Flow Matching (LFM), which learns a sequence of FM sub-models and each matches a diffusion process up to the time of the step size in the data-to-noise direction.
In experiments, we demonstrate the improved training efficiency and competitive generative performance of LFM compared to FM.
arXiv Detail & Related papers (2024-10-03T14:53:10Z)
- Multi-scale Quaternion CNN and BiGRU with Cross Self-attention Feature Fusion for Fault Diagnosis of Bearing [5.3598912592106345]
Deep learning has led to significant advances in bearing fault diagnosis (FD)
We propose a novel FD model by integrating a multiscale quaternion convolutional neural network (MQCNN), a bidirectional gated recurrent unit (BiGRU), and cross self-attention feature fusion (CSAFF).
arXiv Detail & Related papers (2024-05-25T07:55:02Z)
- Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
Iterated Denoising Energy Matching (iDEM)
iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2-5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z)
- On the Effects of Heterogeneous Errors on Multi-fidelity Bayesian Optimization [0.0]
We propose an MF emulation method that learns a noise model for each data source.
We illustrate the performance of our method through analytical examples and engineering problems on materials design.
arXiv Detail & Related papers (2023-09-06T06:26:21Z)
- Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation [135.80439360370556]
We propose a diversity-enhancing generative network (DEG-Net) for the FHA problem.
It can generate diverse unlabeled data with the help of a kernel independence measure: the Hilbert-Schmidt independence criterion (HSIC)
arXiv Detail & Related papers (2023-07-12T06:29:02Z)
- FeDXL: Provable Federated Learning for Deep X-Risk Optimization [105.17383135458897]
We tackle a novel federated learning (FL) problem for optimizing a family of X-risks, to which no existing algorithms are applicable.
The challenges of designing an FL algorithm for X-risks lie in the non-decomposability of the objective over multiple machines and the interdependency between different machines.
arXiv Detail & Related papers (2022-10-26T00:23:36Z)
- Multi-fidelity Hierarchical Neural Processes [79.0284780825048]
Multi-fidelity surrogate modeling reduces the computational cost by fusing different simulation outputs.
We propose Multi-fidelity Hierarchical Neural Processes (MF-HNP), a unified neural latent variable model for multi-fidelity surrogate modeling.
We evaluate MF-HNP on epidemiology and climate modeling tasks, achieving competitive performance in terms of accuracy and uncertainty estimation.
arXiv Detail & Related papers (2022-06-10T04:54:13Z)
- Fault Detection and Diagnosis with Imbalanced and Noisy Data: A Hybrid Framework for Rotating Machinery [2.580765958706854]
Fault diagnosis plays an essential role in reducing the maintenance costs of rotating machinery manufacturing systems.
Traditional Fault Detection and Diagnosis (FDD) frameworks perform poorly when dealing with real-world circumstances.
This paper proposes a hybrid framework which uses the three aforementioned components to achieve an effective signal-based FDD system.
arXiv Detail & Related papers (2022-02-09T01:09:59Z)
- Device Sampling for Heterogeneous Federated Learning: Theory, Algorithms, and Implementation [24.084053136210027]
We develop a sampling methodology based on graph sequential convolutional networks (GCNs)
We find that our methodology, while sampling less than 5% of all devices, substantially outperforms conventional federated learning (FedL) in terms of both trained model accuracy and required resource utilization.
arXiv Detail & Related papers (2021-01-04T05:59:50Z)
- Feature Quantization Improves GAN Training [126.02828112121874]
Feature Quantization (FQ) for the discriminator embeds both true and fake data samples into a shared discrete space.
Our method can be easily plugged into existing GAN models, with little computational overhead in training (a minimal sketch of such a quantization layer follows this list).
arXiv Detail & Related papers (2020-04-05T04:06:50Z)
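The feature-quantization idea in the last entry above can be illustrated with a small module that snaps intermediate discriminator features of both real and fake samples onto a shared, learned codebook. The codebook size, the straight-through gradient, and the exact placement inside the discriminator are assumptions made for this sketch, not details taken from that paper.

```python
# Illustrative feature-quantization layer for a GAN discriminator: features of
# real and fake samples are mapped to their nearest entry in a shared codebook.
import torch
import torch.nn as nn

class FeatureQuantizer(nn.Module):
    def __init__(self, num_codes: int = 64, feat_dim: int = 128):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, feat_dim))

    def forward(self, feats):                      # feats: (batch, feat_dim)
        dists = torch.cdist(feats, self.codebook)  # distances to every code
        idx = dists.argmin(dim=1)
        quantized = self.codebook[idx]
        # Straight-through estimator: output equals the quantized code, but
        # gradients pass to the original features unchanged. In practice the
        # codebook needs its own update rule (e.g. a codebook/commitment loss
        # or moving-average updates), which is omitted here.
        return feats + (quantized - feats).detach()
```

Such a layer would be dropped between two discriminator layers, which is one way the "plugged into existing GAN models" claim in the entry above could be realized.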