Hierarchical Bayesian Modelling for Knowledge Transfer Across
Engineering Fleets via Multitask Learning
- URL: http://arxiv.org/abs/2204.12404v4
- Date: Fri, 12 May 2023 16:47:11 GMT
- Title: Hierarchical Bayesian Modelling for Knowledge Transfer Across
Engineering Fleets via Multitask Learning
- Authors: L.A. Bull, D. Di Francesco, M. Dhada, O. Steinert, T. Lindgren, A.K.
Parlikad, A.B. Duncan, M. Girolami
- Abstract summary: A population-level analysis is proposed to address data sparsity when building predictive models for engineering infrastructure.
Utilising an interpretable hierarchical Bayesian approach and operational fleet data, domain expertise is naturally encoded (and appropriately shared) between different sub-groups.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A population-level analysis is proposed to address data sparsity when
building predictive models for engineering infrastructure. Utilising an
interpretable hierarchical Bayesian approach and operational fleet data, domain
expertise is naturally encoded (and appropriately shared) between different
sub-groups, representing (i) use-type, (ii) component, or (iii) operating
condition. Specifically, domain expertise is exploited to constrain the model
via assumptions (and prior distributions) allowing the methodology to
automatically share information between similar assets, improving the survival
analysis of a truck fleet and power prediction in a wind farm. In each asset
management example, a set of correlated functions is learnt over the fleet in
a combined inference, yielding a population model. Parameter estimation is
improved when sub-fleets share correlated information at different levels of
the hierarchy. In turn, groups with incomplete data automatically borrow
statistical strength from those that are data-rich. The statistical
correlations enable knowledge transfer via Bayesian transfer learning, and the
correlations can be inspected to inform which assets share information for
which effect (i.e. parameter). Both case studies demonstrate wide
applicability to practical infrastructure monitoring, since the approach
adapts naturally between interpretable fleet models of different in situ
examples.
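The partial pooling described above can be made concrete with a short hierarchical regression sketch. This is a minimal illustration, assuming a simple per-group linear effect on synthetic data; the grouping, priors, and Gaussian likelihood are placeholders rather than the paper's survival or power-curve models.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_groups, n_obs = 5, 20                    # e.g. 5 sub-fleets of assets
group = np.repeat(np.arange(n_groups), n_obs)
x = rng.normal(size=n_groups * n_obs)      # synthetic operating condition
y = 1.5 * x + rng.normal(0.0, 0.5, size=x.size)  # synthetic response

with pm.Model() as fleet_model:
    # Population level: hyperpriors shared by every sub-fleet
    mu_slope = pm.Normal("mu_slope", 0.0, 5.0)
    sigma_slope = pm.HalfNormal("sigma_slope", 2.0)

    # Group level: per-fleet slopes drawn from the shared hyperprior,
    # so data-poor groups shrink towards the population estimate
    slope = pm.Normal("slope", mu_slope, sigma_slope, shape=n_groups)

    sigma = pm.HalfNormal("sigma", 1.0)
    pm.Normal("y_obs", slope[group] * x, sigma, observed=y)

    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)
```

Inspecting the posterior of `sigma_slope` shows how strongly information is shared: a small value pools the sub-fleets tightly towards the population mean, while a large value lets each sub-fleet behave independently.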
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Encoding Domain Expertise into Multilevel Models for Source Location [0.5872014229110215]
This work captures the statistical correlations and interdependencies between models of a group of systems.
Most interestingly, domain expertise and knowledge of the underlying physics can be encoded in the model at the system, subgroup, or population level.
arXiv Detail & Related papers (2023-05-15T14:02:35Z)
- Feature Correlation-guided Knowledge Transfer for Federated Self-supervised Learning [19.505644178449046]
We propose a novel and general method named Federated Self-supervised Learning with Feature-correlation based Aggregation (FedFoA).
Our insight is to utilize feature correlation to align the feature mappings and calibrate the local model updates across clients during their local training process.
We prove that FedFoA is a model-agnostic training framework and can be easily compatible with state-of-the-art unsupervised FL methods.
arXiv Detail & Related papers (2022-11-14T13:59:50Z)
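One plausible reading of the feature-correlation idea in the FedFoA summary above is a least-squares alignment between each client's feature space and a shared basis computed on a common batch. The sketch below illustrates that generic idea only; the function name, the shared basis, and the alignment rule are assumptions, not FedFoA's actual aggregation.

```python
import numpy as np

def align_features(local_feats: np.ndarray, shared_feats: np.ndarray) -> np.ndarray:
    """Return a linear map Q that aligns local features to a shared basis.

    Both inputs are (n_samples, n_dims) matrices computed on the same
    batch, so their rows correspond sample-by-sample.
    """
    # Centre the features so the fit captures correlation, not offsets
    local_c = local_feats - local_feats.mean(axis=0)
    shared_c = shared_feats - shared_feats.mean(axis=0)
    # Least-squares map: local_c @ Q ~= shared_c
    Q, *_ = np.linalg.lstsq(local_c, shared_c, rcond=None)
    return Q

# Toy usage: a "client" whose features are a linear mix of a common signal
rng = np.random.default_rng(1)
z = rng.normal(size=(256, 8))            # latent common factors
client = z @ rng.normal(size=(8, 8))     # client's private feature mapping
Q = align_features(client, z)
aligned = (client - client.mean(axis=0)) @ Q
print(np.abs(aligned - (z - z.mean(axis=0))).max())  # ~0 when alignment is exact
```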
- Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning: simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z)
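Co-finetuning, as summarised above, optimises one model on batches from several tasks simultaneously rather than sequentially. A minimal PyTorch sketch follows, assuming a hypothetical shared backbone with one head per task and random stand-in data; the architecture and sampling scheme are illustrative, not the paper's video models.

```python
import torch
import torch.nn as nn

class MultiHeadModel(nn.Module):
    """One shared backbone with a separate head per task (hypothetical)."""
    def __init__(self, n_classes_per_task):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(64, c) for c in n_classes_per_task)

    def forward(self, x, task_id):
        return self.heads[task_id](self.backbone(x))

tasks = [10, 5]                            # e.g. one upstream, one downstream task
model = MultiHeadModel(tasks)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def batch(n_classes):
    # Random stand-in for a real data loader
    return torch.randn(16, 32), torch.randint(n_classes, (16,))

for step in range(100):
    # Co-finetuning: every step mixes gradients from all tasks,
    # instead of pretraining on one task and then finetuning on the next
    opt.zero_grad()
    total = sum(loss_fn(model(x, t), y)
                for t, (x, y) in enumerate(batch(c) for c in tasks))
    total.backward()
    opt.step()
```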
- CDKT-FL: Cross-Device Knowledge Transfer using Proxy Dataset in Federated Learning [27.84845136697669]
We develop a novel knowledge distillation-based approach to study the extent of knowledge transfer between the global model and local models.
We show the proposed method achieves significant speedups and high personalized performance of local models.
arXiv Detail & Related papers (2022-04-04T14:49:19Z)
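The cross-device transfer in the CDKT-FL summary above rests on knowledge distillation over a shared proxy dataset. Below is a generic distillation step under that reading; the temperature, KL loss, and linear toy models are illustrative assumptions, not the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def distill_on_proxy(local_model, global_model, proxy_x, temperature=2.0):
    """One distillation step: the local model mimics the global model's
    soft predictions on proxy data that no client had to share."""
    with torch.no_grad():
        teacher = F.softmax(global_model(proxy_x) / temperature, dim=-1)
    student = F.log_softmax(local_model(proxy_x) / temperature, dim=-1)
    # KL divergence between softened distributions transfers soft labels
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2

# Toy usage with hypothetical linear models and a random proxy batch
local_model = torch.nn.Linear(16, 4)
global_model = torch.nn.Linear(16, 4)
loss = distill_on_proxy(local_model, global_model, torch.randn(32, 16))
loss.backward()   # gradients flow only into the local (student) model
```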
- Federated Learning Framework Coping with Hierarchical Heterogeneity in Cooperative ITS [10.087704332539161]
We introduce a federated learning framework coping with Hierarchical Heterogeneity (H2-Fed).
The framework exploits data from connected public traffic agents in vehicular networks without affecting user data privacy.
arXiv Detail & Related papers (2022-04-01T05:33:54Z)
- Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There always exists severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z)
- Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization [89.73665256847858]
We show that out-of-distribution performance is strongly correlated with in-distribution performance for a wide range of models and distribution shifts.
Specifically, we demonstrate strong correlations between in-distribution and out-of-distribution performance on variants of CIFAR-10 & ImageNet.
We also investigate cases where the correlation is weaker, for instance some synthetic distribution shifts from CIFAR-10-C and the tissue classification dataset Camelyon17-WILDS.
arXiv Detail & Related papers (2021-07-09T19:48:23Z)
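The in-distribution/out-of-distribution relationship summarised above can be probed by correlating paired accuracies across a pool of models, optionally after a probit-style rescaling of accuracy. The sketch below uses synthetic accuracies purely for illustration; no real benchmark numbers are reproduced.

```python
import numpy as np
from scipy.stats import norm, pearsonr

# Synthetic accuracies for a pool of models (placeholders, not real results)
rng = np.random.default_rng(2)
acc_id = rng.uniform(0.6, 0.95, size=50)                       # in-distribution
acc_ood = np.clip(acc_id * 0.8 + rng.normal(0, 0.02, 50), 0.01, 0.99)

# Correlate raw accuracies, then probit-transformed accuracies,
# which tends to linearise the "accuracy on the line" relationship
r_raw, _ = pearsonr(acc_id, acc_ood)
r_probit, _ = pearsonr(norm.ppf(acc_id), norm.ppf(acc_ood))
print(f"raw r = {r_raw:.3f}, probit r = {r_probit:.3f}")
```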
- Probing transfer learning with a model of synthetic correlated datasets [11.53207294639557]
Transfer learning can significantly improve the sample efficiency of neural networks.
We rethink a solvable model of synthetic data as a framework for modelling correlation between datasets.
We show that our model can capture a range of salient features of transfer learning with real data.
arXiv Detail & Related papers (2021-06-09T22:15:41Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We present the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism for building a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
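The setting described above, where a random subset of agents performs local updates each iteration while the true minimizer drifts, corresponds to a partial-participation averaging loop. A minimal sketch with synthetic quadratic local objectives and a slowly drifting optimum follows; it illustrates the setting, not the paper's performance analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
n_agents, dim, lr = 20, 5, 0.1
targets = rng.normal(size=(n_agents, dim))   # each agent's local minimizer
w = np.zeros(dim)                            # shared model

for it in range(200):
    targets += 0.01 * rng.normal(size=targets.shape)   # non-stationary drift
    active = rng.choice(n_agents, size=5, replace=False)  # random agent subset
    # Each active agent takes one gradient step on 0.5 * ||w - target||^2
    local_models = [w - lr * (w - targets[a]) for a in active]
    w = np.mean(local_models, axis=0)        # server averages the participants

# Distance to the (drifting) aggregate minimizer the architecture tracks
print(np.linalg.norm(w - targets.mean(axis=0)))
```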
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.