Improving decision-making via risk-based active learning: Probabilistic
discriminative classifiers
- URL: http://arxiv.org/abs/2206.11616v1
- Date: Thu, 23 Jun 2022 10:51:42 GMT
- Title: Improving decision-making via risk-based active learning: Probabilistic
discriminative classifiers
- Authors: Aidan J. Hughes, Paul Gardner, Lawrence A. Bull, Nikolaos Dervilis,
Keith Worden
- Abstract summary: Descriptive labels for measured data corresponding to health-states of monitored systems are often unavailable.
One approach to dealing with this problem is risk-based active learning.
The current paper demonstrates several advantages of using an alternative type of classifier -- discriminative models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gaining the ability to make informed decisions on operation and maintenance
of structures provides motivation for the implementation of structural health
monitoring (SHM) systems. However, descriptive labels for measured data
corresponding to health-states of the monitored system are often unavailable.
This issue limits the applicability of fully-supervised machine learning
paradigms for the development of statistical classifiers to be used in
decision-support in SHM systems. One approach to dealing with this problem is
risk-based active learning. In such an approach, data-label querying is guided
according to the expected value of perfect information for incipient data
points. For risk-based active learning in SHM, the value of information is
evaluated with respect to a maintenance decision process, and the data-label
querying corresponds to the inspection of a structure to determine its health
state.
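As a minimal sketch of the querying criterion described above (illustrative only, not the paper's implementation), the expected value of perfect information (EVPI) for an incipient data point can be computed from the classifier's posterior over health states and a loss matrix for the maintenance decision; the actions, costs, and dimensions below are assumptions made for illustration.

```python
import numpy as np

def expected_value_of_perfect_information(class_probs, loss_matrix):
    """EVPI for one maintenance decision, given the classifier posterior
    p(health state | data) and a loss matrix L[action, health state]."""
    expected_losses = loss_matrix @ class_probs        # expected loss per action
    preposterior_loss = expected_losses.min()          # best action under current uncertainty
    # expected loss if the true health state were revealed before acting
    perfect_info_loss = np.sum(class_probs * loss_matrix.min(axis=0))
    return preposterior_loss - perfect_info_loss

# Assumed, illustrative numbers: 2 actions (do nothing, repair), 3 health states
loss_matrix = np.array([[0.0, 50.0, 500.0],            # do nothing
                        [20.0, 20.0, 20.0]])           # repair
class_probs = np.array([0.6, 0.3, 0.1])                # posterior for a new data point
inspection_cost = 5.0                                  # cost of querying a label

if expected_value_of_perfect_information(class_probs, loss_matrix) > inspection_cost:
    print("Query the label, i.e. inspect the structure")
```

Under this criterion, a label is queried only when the information is expected to reduce decision loss by more than the cost of the inspection.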
In the context of SHM, risk-based active learning has only been considered
for generative classifiers. The current paper demonstrates several advantages
of using an alternative type of classifier -- discriminative models. Using the
Z24 Bridge dataset as a case study, it is shown that discriminative classifiers
have benefits, in the context of SHM decision-support, including improved
robustness to sampling bias, and reduced expenditure on structural inspections.
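The sketch below (assumed, not the authors' code) shows the kind of probabilistic discriminative classifier contrasted with generative models in the paper: a multinomial logistic regression that models p(health state | features) directly and whose posterior can feed the EVPI computation above. The features, labels, and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labelled = rng.normal(size=(30, 4))        # placeholder features (e.g. natural frequencies)
y_labelled = rng.integers(0, 3, size=30)     # placeholder health-state labels from inspections

# Discriminative model: p(y | x) is modelled directly, with no density model p(x | y),
# which is one reason such classifiers can be less sensitive to sampling bias
# in an actively-queried training set.
clf = LogisticRegression(max_iter=1000).fit(X_labelled, y_labelled)

x_new = rng.normal(size=(1, 4))              # incipient data point
class_probs = clf.predict_proba(x_new)[0]    # posterior over health states for the EVPI check
```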
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Bring Your Own Data! Self-Supervised Evaluation for Large Language
Models [52.15056231665816]
We propose a framework for self-supervised evaluation of Large Language Models (LLMs).
We demonstrate self-supervised evaluation strategies for measuring closed-book knowledge, toxicity, and long-range context dependence.
We find strong correlations between self-supervised and human-supervised evaluations.
arXiv Detail & Related papers (2023-06-23T17:59:09Z)
- Mitigating Catastrophic Forgetting in Task-Incremental Continual
Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and has a stronger ability to overcome catastrophic forgetting than the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z)
- Mitigating sampling bias in risk-based active learning via an EM
algorithm [0.0]
Risk-based active learning is an approach to developing statistical classifiers for online decision-support.
Data-label querying is guided according to the expected value of perfect information for incipient data points.
A semi-supervised approach counteracts sampling bias by incorporating pseudo-labels for unlabelled data via an EM algorithm.
arXiv Detail & Related papers (2022-06-25T08:48:25Z)
- Modeling Disagreement in Automatic Data Labelling for Semi-Supervised
Learning in Clinical Natural Language Processing [2.016042047576802]
We investigate the quality of uncertainty estimates from a range of current state-of-the-art predictive models applied to the problem of observation detection in radiology reports.
arXiv Detail & Related papers (2022-05-29T20:20:49Z)
- Self-Certifying Classification by Linearized Deep Assignment [65.0100925582087]
We propose a novel class of deep predictors for classifying metric data on graphs within the PAC-Bayes risk certification paradigm.
Building on the recent PAC-Bayes literature and data-dependent priors, this approach enables learning posterior distributions on the hypothesis space.
arXiv Detail & Related papers (2022-01-26T19:59:14Z)
- On robust risk-based active-learning algorithms for enhanced decision
support [0.0]
Classification models are a fundamental component of physical-asset management technologies such as structural health monitoring (SHM) systems and digital twins.
The paper proposes two novel approaches to counteract the effects of sampling bias: semi-supervised learning and discriminative classification models.
arXiv Detail & Related papers (2022-01-07T17:25:41Z)
- On risk-based active learning for structural health monitoring [0.0]
This paper presents a risk-based formulation of active learning for structural health monitoring systems.
The querying of class labels can be mapped onto the inspection of a structure of interest in order to determine its health state.
arXiv Detail & Related papers (2021-05-12T12:34:03Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly-available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Implicit supervision for fault detection and segmentation of emerging
fault types with Deep Variational Autoencoders [1.160208922584163]
We propose training a variational autoencoder (VAE) with labeled and unlabeled samples while inducing implicit supervision on the latent representation of the healthy conditions.
This creates a compact and informative latent representation that allows good detection and segmentation of unseen fault types.
In an extensive comparison, we demonstrate that the proposed method outperforms other learning strategies.
arXiv Detail & Related papers (2019-12-28T18:40:33Z)