Cost-informed dimensionality reduction for structural digital twin technologies
- URL: http://arxiv.org/abs/2409.11236v1
- Date: Tue, 17 Sep 2024 14:37:00 GMT
- Title: Cost-informed dimensionality reduction for structural digital twin technologies
- Authors: Aidan J. Hughes, Keith Worden, Nikolaos Dervilis, Timothy J. Rogers
- Abstract summary: This paper formulates a decision-theoretic approach to dimensionality reduction for structural asset management.
The aim is to keep incurred misclassification costs to a minimum, as the dimensionality is reduced and discriminatory information may be lost.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification models are a key component of structural digital twin technologies used for supporting asset management decision-making. An important consideration when developing classification models is the dimensionality of the input, or feature space, used. If the dimensionality is too high, then the 'curse of dimensionality' may rear its ugly head, manifesting as reduced predictive performance. To mitigate such effects, practitioners can employ dimensionality reduction techniques. The current paper formulates a decision-theoretic approach to dimensionality reduction for structural asset management. In this approach, the aim is to keep incurred misclassification costs to a minimum, as the dimensionality is reduced and discriminatory information may be lost. This formulation is constructed as an eigenvalue problem, with separabilities between classes weighted according to the cost of misclassifying them when considered in the context of a decision process. The approach is demonstrated using a synthetic case study.
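The abstract's key computational idea, an eigenvalue problem whose between-class separabilities are weighted by misclassification costs, can be sketched as a cost-weighted variant of Fisher's linear discriminant analysis. The sketch below is one plausible reading only: the function name `cost_weighted_projection`, the pairwise cost weighting, and the regularisation constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh

def cost_weighted_projection(X, y, costs, n_components):
    """Sketch of a cost-weighted discriminant projection (assumed form).

    X: (n_samples, n_features) feature matrix.
    y: integer class labels in {0, ..., n_classes - 1}.
    costs: (n_classes, n_classes) symmetric misclassification-cost matrix.
    """
    classes = np.unique(y)
    d = X.shape[1]
    # Pooled within-class scatter, with a small ridge for invertibility.
    S_w = sum((y == c).sum() * np.cov(X[y == c].T, bias=True) for c in classes)
    S_w += 1e-6 * np.eye(d)
    # Between-class scatter: pairwise class separabilities, each weighted
    # by the cost of confusing that pair of classes.
    means = {c: X[y == c].mean(axis=0) for c in classes}
    S_b = np.zeros((d, d))
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = (means[ci] - means[cj])[:, None]
            S_b += costs[ci, cj] * (diff @ diff.T)
    # Generalised eigenvalue problem S_b v = lambda S_w v; keep the
    # directions with the largest cost-weighted separability.
    eigvals, eigvecs = eigh(S_b, S_w)
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]]

# Usage sketch: W = cost_weighted_projection(X, y, costs, 2); Z = X @ W
```

Under this reading, projecting onto the returned directions preserves, by construction, the class separations that matter most under the given decision costs.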
Related papers
- PCA-RAG: Principal Component Analysis for Efficient Retrieval-Augmented Generation [0.0]
High-dimensional language model embeddings can present scalability challenges in terms of storage and latency.
This paper investigates the use of Principal Component Analysis (PCA) to reduce embedding dimensionality.
We show that PCA-based compression offers a viable balance between retrieval fidelity and resource efficiency (a minimal PCA sketch appears after this list).
arXiv Detail & Related papers (2025-04-11T09:38:12Z)
- Golden Ratio-Based Sufficient Dimension Reduction [6.184279198087624]
We propose a neural network based sufficient dimension reduction method.
It identifies the structural dimension effectively and estimates the central space well.
It takes advantage of the approximation capabilities of neural networks for functions in Barron classes, leading to reduced computation cost.
arXiv Detail & Related papers (2024-10-25T04:15:15Z)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
- Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z)
- Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers for reducing the model size.
arXiv Detail & Related papers (2023-03-27T02:34:09Z)
- An evaluation framework for dimensionality reduction through sectional curvature [59.40521061783166]
In this work, we aim to introduce the first highly non-supervised dimensionality reduction performance metric.
To test its feasibility, this metric has been used to evaluate the performance of the most commonly used dimension reduction algorithms.
A new parameterized problem instance generator has been constructed in the form of a function generator.
arXiv Detail & Related papers (2023-03-17T11:59:33Z)
- DimenFix: A novel meta-dimensionality reduction method for feature preservation [64.0476282000118]
We propose a novel meta-method, DimenFix, which can operate on any base dimensionality reduction method that involves a gradient-descent-like process.
By allowing users to define the importance of different features, which is then taken into account during dimensionality reduction, DimenFix creates new possibilities to visualize and understand a given dataset.
arXiv Detail & Related papers (2022-11-30T05:35:22Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- Rethinking Cost-sensitive Classification in Deep Learning via Adversarial Data Augmentation [4.479834103607382]
Cost-sensitive classification is critical in applications where misclassification errors widely vary in cost.
This paper proposes a cost-sensitive adversarial data augmentation framework to make over-parameterized models cost-sensitive.
Our method can effectively minimize the overall cost and reduce critical errors, while achieving comparable performance in terms of overall accuracy (a generic expected-cost decision rule is sketched after this list).
arXiv Detail & Related papers (2022-08-24T19:00:30Z)
- Cost-effective Variational Active Entity Resolution [4.238343046459798]
We devise an entity resolution method that builds on the robustness conferred by deep autoencoders to reduce human-involvement costs.
Specifically, we reduce the cost of training deep entity resolution models by performing unsupervised representation learning.
Finally, we reduce the cost of labelling training data through an active learning approach that builds on the properties conferred by the use of deep autoencoders.
arXiv Detail & Related papers (2020-11-20T13:47:11Z)
- The Dilemma Between Data Transformations and Adversarial Robustness for Time Series Application Systems [1.2056495277232115]
Adversarial examples, or nearly indistinguishable inputs created by an attacker, significantly reduce machine learning accuracy.
This work explores how data transformations may impact an adversary's ability to create effective adversarial samples on a recurrent neural network.
A data transformation technique reduces the vulnerability to adversarial examples only if it approximates the dataset's intrinsic dimension.
arXiv Detail & Related papers (2020-06-18T22:43:37Z)
- Dimensionality Reduction for Sentiment Classification: Evolving for the Most Prominent and Separable Features [4.156782836736784]
In sentiment classification, the enormous amount of textual data, its immense dimensionality, and inherent noise make it extremely difficult for machine learning classifiers to extract high-level and complex abstractions.
In existing dimensionality reduction techniques, the number of components must be set manually, which can result in the loss of the most prominent features.
We propose a new framework consisting of two dimensionality reduction techniques, i.e., Sentiment Term Presence Count (SentiTPC) and Sentiment Term Presence Ratio (SentiTPR).
arXiv Detail & Related papers (2020-06-01T09:46:52Z)
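For the PCA-RAG entry above, the referenced compression step can be illustrated with a generic PCA fit via the SVD of a centred embedding matrix. This is a textbook sketch under assumed shapes, not the PCA-RAG authors' code; the function name `fit_pca_compressor` is illustrative.

```python
import numpy as np

def fit_pca_compressor(embeddings, n_components):
    """Fit PCA on an (n_vectors, dim) embedding matrix; return a projector."""
    mean = embeddings.mean(axis=0)
    # Rows of Vt are the principal directions of the centred matrix.
    _, _, Vt = np.linalg.svd(embeddings - mean, full_matrices=False)
    components = Vt[:n_components]

    def compress(x):
        # Project onto the leading principal directions.
        return (x - mean) @ components.T

    return compress
```

Compressed query and document vectors can then be scored with the usual inner-product or cosine retrieval measure, trading some fidelity for storage and latency.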
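For the cost-sensitive classification entry above, the notion of minimising overall cost can be grounded in the standard Bayes-risk decision rule below. This is general background, not the paper's adversarial-augmentation method, and the names are illustrative.

```python
import numpy as np

def min_expected_cost_predictions(probs, cost_matrix):
    """Choose, per sample, the class label with the lowest expected cost.

    probs: (n_samples, n_classes) predicted class probabilities.
    cost_matrix[i, j]: cost of predicting class j when the true class is i.
    """
    expected_costs = probs @ cost_matrix  # (n_samples, n_classes)
    return expected_costs.argmin(axis=1)
```

With a zero-one cost matrix this reduces to the usual argmax rule; asymmetric costs shift decisions away from expensive errors, the same objective the main paper pursues at the dimensionality-reduction stage.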