Canonical-Correlation-Based Fast Feature Selection for Structural Health Monitoring
- URL: http://arxiv.org/abs/2106.08247v2
- Date: Mon, 9 Sep 2024 07:11:39 GMT
- Title: Canonical-Correlation-Based Fast Feature Selection for Structural Health Monitoring
- Authors: Sikai Zhang, Tingna Wang, Keith Worden, Limin Sun, Elizabeth J. Cross
- Abstract summary: This paper proposes a fast feature selection algorithm that efficiently computes the sum of squared canonical correlation coefficients between monitored features and target variables of interest within a greedy search.
The proposed algorithm is applied to both synthetic and real datasets to illustrate its advantages in computational speed, on general classification and regression tasks, and on damage-sensitive feature selection tasks.
- Score: 4.533223834527272
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Feature selection refers to the process of selecting useful features for machine learning tasks, and it is also a key step in structural health monitoring (SHM). This paper proposes a fast feature selection algorithm that efficiently computes the sum of squared canonical correlation coefficients between monitored features and target variables of interest within a greedy search. The proposed algorithm is applied to both synthetic and real datasets to illustrate its advantages in computational speed, on general classification and regression tasks, and on damage-sensitive feature selection tasks. Furthermore, the performance of the proposed algorithm is evaluated under varying environmental conditions and on an edge computing device to investigate its applicability in real-world SHM scenarios. The results show that the proposed algorithm can successfully select useful features at extraordinarily fast computational speed, which implies that it has great potential where features need to be selected and updated online frequently, or where devices have limited computing capability.
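To make the selection criterion concrete, below is a minimal NumPy sketch of the idea: a candidate feature subset X is scored against the targets Y by the sum of squared canonical correlation coefficients, which for centred data equals trace(Sxx^{-1} Sxy Syy^{-1} Syx), inside a greedy forward search. The function names are illustrative, and the naive recomputation at every step omits the efficient incremental updates that give the paper's algorithm its speed; treat this as a reference for the criterion, not the proposed implementation.

```python
import numpy as np

def sum_squared_cca(X, Y, ridge=1e-10):
    """Sum of squared canonical correlation coefficients between X and Y.

    For centred data this equals trace(Sxx^{-1} Sxy Syy^{-1} Syx); a small
    ridge term keeps the covariance matrices invertible.
    """
    if Y.ndim == 1:
        Y = Y[:, None]  # allow a single target column
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxx = Xc.T @ Xc + ridge * np.eye(Xc.shape[1])
    Syy = Yc.T @ Yc + ridge * np.eye(Yc.shape[1])
    Sxy = Xc.T @ Yc
    # trace(Sxx^{-1} Sxy Syy^{-1} Syx), evaluated via two linear solves
    return float(np.trace(np.linalg.solve(Sxx, Sxy) @ np.linalg.solve(Syy, Sxy.T)))

def greedy_cca_selection(X, Y, k):
    """Greedy forward search: at each step, add the candidate feature that
    most increases the sum of squared canonical correlations with Y.

    Naive O(k * p) rescoring per step; the paper's contribution is to make
    this evaluation fast, which this sketch does not reproduce.
    """
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        scores = [sum_squared_cca(X[:, selected + [j]], Y) for j in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected
```

For a classification task, Y would typically be a one-hot encoding of the class labels; calling greedy_cca_selection(X, Y, k) then returns the indices of the k selected features.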
Related papers
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision mechanism that accelerates inference by dynamically assigning resources to each data instance.
Our method reduces inference cost while keeping the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Multi-objective Binary Coordinate Search for Feature Selection [0.24578723416255746]
We propose the binary multi-objective coordinate search (MOCS) algorithm to solve large-scale feature selection problems.
Results on five real-world large-scale datasets indicate the significant superiority of our method over NSGA-II.
arXiv Detail & Related papers (2024-02-20T00:50:26Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Forward-Forward Algorithm for Hyperspectral Image Classification: A Preliminary Study [0.0]
The forward-forward algorithm (FFA) computes local goodness functions to optimize network parameters.
This study investigates the application of FFA for hyperspectral image classification.
arXiv Detail & Related papers (2023-07-01T05:39:28Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Fair Feature Subset Selection using Multiobjective Genetic Algorithm [0.0]
We present a feature subset selection approach that improves both fairness and accuracy objectives.
We use statistical disparity as a fairness metric and F1-Score as a metric for model performance.
Our experiments on the most commonly used fairness benchmark datasets show that using the evolutionary algorithm we can effectively explore the trade-off between fairness and accuracy.
arXiv Detail & Related papers (2022-04-30T22:51:19Z)
- Compactness Score: A Fast Filter Method for Unsupervised Feature Selection [66.84571085643928]
We propose a fast unsupervised feature selection method, named Compactness Score (CSUFS), to select desired features.
Our proposed algorithm is shown to be more accurate and efficient than existing algorithms.
arXiv Detail & Related papers (2022-01-31T13:01:37Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Algorithmic Stability and Generalization of an Unsupervised Feature Selection Algorithm [20.564573628659918]
Algorithmic stability is a key characteristic of an algorithm regarding its sensitivity to perturbations of input samples.
In this paper, we propose an innovative unsupervised feature selection algorithm attaining this stability with provable guarantees.
arXiv Detail & Related papers (2020-10-19T12:25:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.