Better Together: Using Multi-task Learning to Improve Feature Selection
within Structural Datasets
- URL: http://arxiv.org/abs/2303.04486v1
- Date: Wed, 8 Mar 2023 10:19:55 GMT
- Authors: S.C. Bee, E. Papatheou, M. Haywood-Alexander, R.S. Mills, L.A. Bull, K.
Worden and N. Dervilis
- Abstract summary: This paper presents the use of multi-task learning (MTL) to provide automatic feature selection for a structural dataset.
The classification task is to differentiate between the port and starboard side of a tailplane, for samples from two aircraft of the same model.
The MTL results were interpretable, highlighting structural differences as opposed to differences in experimental set-up.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There have been recent efforts to move to population-based structural health
monitoring (PBSHM) systems. One area of PBSHM which has been recognised for
potential development is the use of multi-task learning (MTL); algorithms which
differ from traditional independent learning algorithms. Presented here is the
use of the MTL method "Joint Feature Selection with LASSO" to provide automatic
feature selection for a structural dataset. The classification task is to
differentiate between the port and starboard side of a tailplane, for samples
from two aircraft of the same model. The independent learner produced perfect
F1 scores but offered poor engineering insight, whereas the MTL results were
interpretable, highlighting structural differences rather than differences in
experimental set-up.
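The joint feature selection idea couples the tasks through a shared sparsity pattern: the penalty acts on whole rows of the weight matrix, so a feature is kept or discarded for all tasks at once. A minimal NumPy sketch, assuming an l2,1-regularised least-squares objective solved by proximal gradient descent (the paper's exact formulation may differ; function and variable names here are illustrative):

```python
import numpy as np

def joint_feature_selection_lasso(Xs, ys, lam=0.1, lr=0.1, n_iter=1000):
    """Multi-task joint feature selection via an l2,1-regularised
    least-squares objective. Xs/ys are per-task design matrices and
    targets; W has one weight column per task, and the row-wise
    penalty forces tasks to share one sparsity pattern."""
    d = Xs[0].shape[1]
    T = len(Xs)
    W = np.zeros((d, T))
    for _ in range(n_iter):
        # gradient step on the smooth least-squares part, per task
        for t in range(T):
            grad = Xs[t].T @ (Xs[t] @ W[:, t] - ys[t]) / len(ys[t])
            W[:, t] -= lr * grad
        # proximal step: row-wise soft-thresholding (l2,1 penalty)
        norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.maximum(0.0, 1.0 - lr * lam / np.maximum(norms, 1e-12))
    return W
```

Rows of the returned `W` whose norm is driven to zero correspond to features deselected jointly across tasks, which is what makes the selection interpretable at the population level.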
Related papers
- UniDiff: Advancing Vision-Language Models with Generative and
Discriminative Learning [86.91893533388628]
This paper presents UniDiff, a unified multi-modal model that integrates image-text contrastive learning (ITC), text-conditioned image synthesis learning (IS), and reciprocal semantic consistency modeling (RSC).
UniDiff demonstrates versatility in both multi-modal understanding and generative tasks.
arXiv Detail & Related papers (2023-06-01T15:39:38Z)
- A-SFS: Semi-supervised Feature Selection based on Multi-task Self-supervision [1.3190581566723918]
We introduce a deep learning-based self-supervised mechanism into feature selection problems.
A batch-attention mechanism is designed to generate feature weights according to batch-based feature selection patterns.
Experimental results show that A-SFS achieves the highest accuracy in most datasets.
arXiv Detail & Related papers (2022-07-19T04:22:27Z)
- Joint Multi-view Unsupervised Feature Selection and Graph Learning [18.303477722460247]
This paper presents a joint multi-view unsupervised feature selection and graph learning (JMVFG) approach.
We formulate the multi-view feature selection with decomposition, where each target matrix is decomposed into a view-specific basis matrix.
Experiments on a variety of real-world multi-view datasets demonstrate the superiority of our approach for both the multi-view feature selection and graph learning tasks.
arXiv Detail & Related papers (2022-04-18T10:50:03Z)
- MGA-VQA: Multi-Granularity Alignment for Visual Question Answering [75.55108621064726]
Learning to answer visual questions is a challenging task since the multi-modal inputs are within two feature spaces.
We propose a Multi-Granularity Alignment architecture for the Visual Question Answering task (MGA-VQA).
Our model splits alignment into different levels to learn better correlations without needing additional data and annotations.
arXiv Detail & Related papers (2022-01-25T22:30:54Z)
- Multi-level Second-order Few-shot Learning [111.0648869396828]
We propose a Multi-level Second-order (MlSo) few-shot learning network for supervised or unsupervised few-shot image classification and few-shot action recognition.
We leverage so-called power-normalized second-order base learner streams combined with features that express multiple levels of visual abstraction.
We demonstrate respectable results on standard datasets such as Omniglot, mini-ImageNet, tiered-ImageNet, Open MIC, fine-grained datasets such as CUB Birds, Stanford Dogs and Cars, and action recognition datasets such as HMDB51, UCF101, and mini-MIT.
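The "power-normalized second-order" representation above pools local features into a covariance-style statistic and tempers its burstiness element-wise. A minimal NumPy sketch of that pooling step, assuming signed square-root power normalization (the MlSo network's actual streams are learned end-to-end; this only illustrates the representation):

```python
import numpy as np

def power_normalized_second_order(F, p=0.5):
    """Second-order (covariance-style) pooling of local features with
    element-wise power normalization. F: (n_locations, d) array of
    local descriptors; returns an l2-normalised flattened d*d vector."""
    G = F.T @ F / F.shape[0]            # second-order statistic (d x d)
    G = np.sign(G) * np.abs(G) ** p     # power normalization tempers burstiness
    v = G.ravel()
    return v / (np.linalg.norm(v) + 1e-12)
```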
arXiv Detail & Related papers (2022-01-15T19:49:00Z)
- Feature Selection for Efficient Local-to-Global Bayesian Network Structure Learning [18.736822756439437]
We propose an efficient F2SL (feature selection-based structure learning) approach to local-to-global BN structure learning.
F2SL first uses MRMR to learn a DAG skeleton, then orients the edges in the skeleton.
Experiments validated that, compared to state-of-the-art local-to-global BN learning algorithms, the proposed algorithms are more efficient and offer competitive structure-learning quality.
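MRMR here refers to max-relevance min-redundancy: greedily pick the feature most associated with the target while penalising similarity to features already chosen. A correlation-based sketch (real MRMR implementations typically use mutual information; this is an illustrative simplification):

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy max-relevance min-redundancy (mRMR) feature selection.
    Relevance and redundancy are both measured with absolute Pearson
    correlation; returns the indices of k selected columns of X."""
    n_features = X.shape[1]
    relevance = np.abs(np.array(
        [np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)]))
    selected = [int(np.argmax(relevance))]   # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # average similarity to already-selected features
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

The redundancy term is what distinguishes MRMR from a plain relevance ranking: a near-duplicate of an already-selected feature scores poorly even if it is individually very predictive.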
arXiv Detail & Related papers (2021-12-20T07:44:38Z)
- Event Classification with Multi-step Machine Learning [0.0]
Multi-step Machine Learning (ML) is organized into connected sub-tasks with known intermediate inference goals.
Differentiable Architecture Search (DARTS) and Single Path One-Shot NAS (SPOS-NAS) are tested, with loss functions constructed to keep all ML models learning smoothly.
Using DARTS and SPOS-NAS for the optimization and selection of models, as well as the connections between them, in multi-step machine learning systems, we find that (1) such a system can quickly and successfully select highly performant model combinations, and (2) the selected models are consistent with baseline algorithms such as grid search.
arXiv Detail & Related papers (2021-06-04T07:22:05Z)
- Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis [11.368438990334397]
We develop a self-supervised learning strategy to acquire independent unimodal supervisions.
We conduct extensive experiments on three public multimodal baseline datasets.
Our method achieves performance comparable to human-annotated unimodal labels.
arXiv Detail & Related papers (2021-02-09T14:05:02Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We create new state-of-the-art results on both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning [83.48587570246231]
Visual similarity plays an important role in many computer vision applications.
Deep metric learning (DML) is a powerful framework for learning such similarities.
We propose and study multiple complementary learning tasks, targeting conceptually different data relationships.
We learn a single model to aggregate their training signals, resulting in strong generalization and state-of-the-art performance.
arXiv Detail & Related papers (2020-04-28T12:26:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.