Encoding Domain Expertise into Multilevel Models for Source Location
- URL: http://arxiv.org/abs/2305.08657v1
- Date: Mon, 15 May 2023 14:02:35 GMT
- Title: Encoding Domain Expertise into Multilevel Models for Source Location
- Authors: Lawrence A. Bull, Matthew R. Jones, Elizabeth J. Cross, Andrew Duncan,
and Mark Girolami
- Abstract summary: This work captures the statistical correlations and interdependencies between models of a group of systems.
Most interestingly, domain expertise and knowledge of the underlying physics can be encoded in the model at the system, subgroup, or population level.
- Score: 0.5872014229110215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data from populations of systems are prevalent in many industrial
applications. Machines and infrastructure are increasingly instrumented with
sensing systems, emitting streams of telemetry data with complex
interdependencies. In practice, data-centric monitoring procedures tend to
consider these assets (and respective models) as distinct -- operating in
isolation and associated with independent data. In contrast, this work captures
the statistical correlations and interdependencies between models of a group of
systems. Utilising a Bayesian multilevel approach, the value of data can be
extended, since the population can be considered as a whole, rather than
constituent parts. Most interestingly, domain expertise and knowledge of the
underlying physics can be encoded in the model at the system, subgroup, or
population level. We present an example of acoustic emission (time-of-arrival)
mapping for source location, to illustrate how multilevel models naturally lend
themselves to representing aggregate systems in engineering. In particular, we
focus on constraining the combined models with domain knowledge to enhance
transfer learning and enable further insights at the population level.
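To make the time-of-arrival (TOA) mapping concrete, the sketch below localises an acoustic emission source on a unit plate from sensor arrival times. Everything here is illustrative: the sensor layout, wave speed, and grid-search inversion are assumptions for a minimal self-contained example, not the paper's multilevel Bayesian model.

```python
import numpy as np

# Hypothetical setup: four sensors at the corners of a 1 m x 1 m plate.
# Forward model for the arrival time at sensor i, with unknown onset t0:
#   t_i = t0 + ||x - s_i|| / c
c = 3000.0  # assumed wave speed (m/s)
sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])

def arrival_times(source, t0=0.0):
    """Time of arrival at each sensor for a source at `source`."""
    return t0 + np.linalg.norm(sensors - source, axis=1) / c

def locate(times, grid_n=101):
    """Grid search over candidate locations; differencing against the
    first sensor eliminates the unknown onset time t0."""
    obs = times - times[0]
    xs = np.linspace(0.0, 1.0, grid_n)
    best, best_err = None, np.inf
    for x in xs:
        for y in xs:
            pred = arrival_times(np.array([x, y]))
            err = np.sum((pred - pred[0] - obs) ** 2)
            if err < best_err:
                best, best_err = np.array([x, y]), err
    return best

true_source = np.array([0.3, 0.7])
est = locate(arrival_times(true_source, t0=1.2e-3))
print(est)  # recovers [0.3, 0.7]
```

In practice the mapping from arrival-time differences to location is learned or regularised rather than brute-forced, which is where population-level models can share information across systems.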
Related papers
- Learning Latent Dynamics via Invariant Decomposition and
(Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- Beyond Just Vision: A Review on Self-Supervised Representation Learning on Multimodal and Temporal Data [10.006890915441987]

The popularity of self-supervised learning is driven by the fact that traditional models typically require large amounts of well-annotated data for training.
Self-supervised methods have been introduced to improve the efficiency of training data through discriminative pre-training of models.
We aim to provide the first comprehensive review of multimodal self-supervised learning methods for temporal data.
arXiv Detail & Related papers (2022-06-06T04:59:44Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic domain-independent approach yields state-of-the-art results in vision, natural language processing and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Hierarchical Bayesian Modelling for Knowledge Transfer Across Engineering Fleets via Multitask Learning [0.0]
A population-level analysis is proposed to address data sparsity when building predictive models for engineering infrastructure.
Utilising an interpretable hierarchical Bayesian approach and operational fleet data, domain expertise is naturally encoded (and appropriately shared) between different sub-groups.
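The "appropriate sharing" between sub-groups that this summary describes is the partial pooling at the heart of hierarchical Bayesian models. The sketch below shows the standard normal-normal shrinkage estimator on simulated fleet data; the group sizes, variances, and the crude population estimate are all illustrative assumptions, not the cited paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fleet: sub-group means mu_k ~ N(mu_pop, tau^2) and
# observations y ~ N(mu_k, sigma^2).  The posterior mean shrinks each
# raw group mean towards the population estimate, more strongly for
# small groups, so sparse sub-groups borrow strength from the fleet.
tau, sigma = 1.0, 2.0

def shrink(y, mu_pop):
    """Posterior mean of a group's mu_k given its observations y."""
    n = len(y)
    w = (n / sigma**2) / (n / sigma**2 + 1 / tau**2)  # precision weight
    return w * y.mean() + (1.0 - w) * mu_pop

groups = [rng.normal(rng.normal(5.0, tau), sigma, size=n) for n in (3, 5, 50)]
mu_pop = np.mean([g.mean() for g in groups])  # crude population estimate
for g in groups:
    print(f"n={len(g):2d}  raw={g.mean():.2f}  shrunk={shrink(g, mu_pop):.2f}")
```

The shrunk estimate always lies between the raw group mean and the population mean; with n=50 it stays close to the raw mean, while the n=3 group is pulled noticeably towards the fleet.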
arXiv Detail & Related papers (2022-04-26T16:02:25Z)
- Learning Sequential Latent Variable Models from Multimodal Time Series Data [6.107812768939553]
We present a self-supervised generative modelling framework to jointly learn a probabilistic latent state representation of multimodal data.
We demonstrate that our approach leads to significant improvements in prediction and representation quality.
arXiv Detail & Related papers (2022-04-21T21:59:24Z)
- Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copulas, powerful statistical tools for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
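The marginal/copula split described above can be seen in a minimal Gaussian-copula sketch (not the cited paper's model): dependence comes from a correlated bivariate normal, while the marginals, here exponential(1) for both "agents", are chosen entirely separately. All parameters are illustrative.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

# Standard normal CDF, vectorised over arrays.
norm_cdf = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / np.sqrt(2.0))))

rho = 0.8  # dependence lives entirely in the copula, not the marginals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=10_000)
u = norm_cdf(z)       # uniform marginals, copula dependence preserved
x = -np.log1p(-u)     # exponential(1) marginals via the inverse CDF
print(np.corrcoef(x.T)[0, 1])  # strongly positive, inherited from rho
```

Swapping the inverse-CDF line changes the marginals without touching the dependence structure, which is exactly the separation the summary claims for local behaviour versus coordination.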
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
- Relating by Contrasting: A Data-efficient Framework for Multimodal Generative Models [86.9292779620645]
We develop a contrastive framework for generative model learning, allowing us to train the model not just by the commonality between modalities, but by the distinction between "related" and "unrelated" multimodal data.
Under our proposed framework, the generative model can accurately identify related samples from unrelated ones, making it possible to make use of the plentiful unlabeled, unpaired multimodal data.
arXiv Detail & Related papers (2020-07-02T15:08:11Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
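The "shared independent sources plus noise" structure can be illustrated with a deliberately simplified toy: below, every subject observes the same sources through an identity mixing (the actual model learns per-subject unmixing matrices, which this sketch omits), so pooling across subjects averages away subject-specific noise. All sizes and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared sources S observed by each subject with independent noise.
n_sources, n_samples, noise_sd = 3, 1000, 1.0
S = rng.laplace(size=(n_sources, n_samples))  # shared (non-Gaussian) sources
subjects = [S + noise_sd * rng.normal(size=S.shape) for _ in range(20)]

single = subjects[0]
group = np.mean(subjects, axis=0)  # pool across the cohort

def err(est):
    """Mean squared error against the true shared sources."""
    return np.mean((est - S) ** 2)

print(err(single), err(group))  # pooling cuts the error by roughly 1/20
```

This is only the noise-averaging intuition behind group models; the MultiView ICA paper additionally recovers the sources blindly via independence, which identity mixing sidesteps here.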
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linear issues are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
- Interpretable Deep Representation Learning from Temporal Multi-view Data [4.2179426073904995]
We propose a generative model based on variational autoencoder and a recurrent neural network to infer the latent dynamics for multi-view temporal data.
We invoke our proposed model for analyzing three datasets on which we demonstrate the effectiveness and the interpretability of the model.
arXiv Detail & Related papers (2020-05-11T15:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.