An Evolutionary Multitasking Algorithm with Multiple Filtering for
High-Dimensional Feature Selection
- URL: http://arxiv.org/abs/2212.08854v1
- Date: Sat, 17 Dec 2022 12:06:46 GMT
- Authors: Lingjie Li, Manlin Xuan, Qiuzhen Lin, Min Jiang, Zhong Ming, Kay Chen
Tan
- Abstract summary: Evolutionary multitasking (EMT) has been successfully used in the field of high-dimensional classification.
This paper devises a new EMT for FS in high-dimensional classification, which first adopts different filtering methods to produce multiple tasks.
A competitive swarm is modified to simultaneously solve these relevant FS tasks by transferring useful knowledge among them.
- Score: 17.63977212537738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, evolutionary multitasking (EMT) has been successfully used in the
field of high-dimensional classification. However, the generation of multiple
tasks in the existing EMT-based feature selection (FS) methods is relatively
simple, using only the Relief-F method to collect related features with similar
importance into one task, which cannot provide more diversified tasks for
knowledge transfer. Thus, this paper devises a new EMT algorithm for FS in
high-dimensional classification, which first adopts different filtering methods
to produce multiple tasks and then modifies a competitive swarm optimizer to
efficiently solve these related tasks via knowledge transfer. First, a
diversified multiple task generation method is designed based on multiple
filtering methods, which generates several relevant low-dimensional FS tasks by
eliminating irrelevant features. In this way, useful knowledge for solving
simple and relevant tasks can be transferred to simplify and speed up the
solution of the original high-dimensional FS task. Then, a competitive swarm
optimizer is modified to simultaneously solve these relevant FS tasks by
transferring useful knowledge among them. Numerous empirical results
demonstrate that the proposed EMT-based FS method can obtain a better feature
subset than several state-of-the-art FS methods on eighteen high-dimensional
datasets.
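The abstract's two-stage idea, several filter methods each spawning a smaller FS task whose solutions can inform the original high-dimensional task, can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the three filter scores here (a Fisher-style score, absolute label correlation, and raw variance) are stand-ins for whatever filters the authors actually combine.

```python
import numpy as np

def fisher_score(X, y):
    """Between-class vs. within-class variance per feature (a classic filter)."""
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2 for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)

def generate_filter_tasks(X, y, k=10):
    """Rank features with several filters; each top-k set is one low-dim FS task."""
    filters = {
        "fisher": fisher_score(X, y),
        "abs_corr": np.abs(np.corrcoef(X.T, y)[-1, :-1]),  # |corr(feature, label)|
        "variance": X.var(axis=0),  # crude stand-in for a third filter
    }
    return {name: np.sort(np.argsort(s)[::-1][:k]) for name, s in filters.items()}

# toy data: 50 features, only the first two actually determine the label
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
tasks = generate_filter_tasks(X, y, k=10)
```

Each resulting index set defines a relevant low-dimensional task; in the paper these tasks are then solved jointly by a modified competitive swarm optimizer with knowledge transfer among them.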
Related papers
- MaZO: Masked Zeroth-Order Optimization for Multi-Task Fine-Tuning of Large Language Models [26.980104922985326]
We present MaZO, the first framework specifically designed for multi-task LLM fine-tuning under ZO optimization.
MaZO tackles these challenges at the parameter level through two key innovations: a weight importance metric to identify critical parameters and a multi-task weight update mask to selectively update these parameters.
Experiments demonstrate that MaZO achieves state-of-the-art performance, surpassing even multi-task learning methods designed for first-order optimization.
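As a rough illustration of the masked zeroth-order idea (not MaZO's actual algorithm), the sketch below estimates a gradient from two loss evaluations along a random direction and applies the update only where an importance mask is nonzero; the quadratic loss and the mask are invented for the example.

```python
import numpy as np

def masked_zo_step(w, loss_fn, mask, rng, lr=0.01, eps=1e-3):
    """One masked zeroth-order step: two loss queries, no backpropagation."""
    u = rng.standard_normal(w.shape)
    directional = (loss_fn(w + eps * u) - loss_fn(w - eps * u)) / (2 * eps)
    return w - lr * directional * u * mask  # update only the masked-in entries

w = np.ones(4)
mask = np.array([1.0, 1.0, 1.0, 0.0])   # freeze the last parameter entirely
loss = lambda v: float(np.sum(v ** 2))  # toy stand-in for a task loss
rng = np.random.default_rng(0)
for _ in range(600):
    w = masked_zo_step(w, loss, mask, rng)
```

The masked-out coordinate never moves, while the others drift toward the minimizer despite using only function values, which is the appeal of ZO methods when backpropagation through a large model is too expensive.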
arXiv Detail & Related papers (2025-02-17T07:28:52Z)
- Coreset-Based Task Selection for Sample-Efficient Meta-Reinforcement Learning [1.2952597101899859]
We study task selection to enhance sample efficiency in model-agnostic meta-reinforcement learning (MAML-RL).
We propose a coreset-based task selection approach that selects a weighted subset of tasks based on how diverse they are in gradient space.
We numerically validate this trend across multiple RL benchmark problems, illustrating the benefits of task selection beyond the LQR baseline.
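A minimal sketch of the gradient-space diversity idea, assuming a simple greedy farthest-point heuristic: the synthetic per-task gradients and the selection rule below are illustrative, not the paper's coreset construction.

```python
import numpy as np

def select_diverse_tasks(grads, k):
    """Greedily pick k tasks whose gradient vectors are maximally spread out."""
    chosen = [0]  # seed the subset with an arbitrary first task
    while len(chosen) < k:
        # distance from every task to its nearest already-chosen task
        dists = np.min(
            [np.linalg.norm(grads - grads[c], axis=1) for c in chosen], axis=0
        )
        dists[chosen] = -np.inf  # never re-pick a selected task
        chosen.append(int(np.argmax(dists)))
    return chosen

rng = np.random.default_rng(0)
task_grads = rng.standard_normal((10, 5))  # one (synthetic) gradient per task
subset = select_diverse_tasks(task_grads, k=3)
```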
arXiv Detail & Related papers (2025-02-04T14:09:00Z)
- Robust Analysis of Multi-Task Learning Efficiency: New Benchmarks on Light-Weighed Backbones and Effective Measurement of Multi-Task Learning Challenges by Feature Disentanglement [69.51496713076253]
In this paper, we focus on the aforementioned efficiency aspects of existing MTL methods.
We first carry out large-scale experiments of the methods with smaller backbones and on the MetaGraspNet dataset as a new test ground.
We also propose Feature Disentanglement measure as a novel and efficient identifier of the challenges in MTL.
arXiv Detail & Related papers (2024-02-05T22:15:55Z)
- Towards Multi-Objective High-Dimensional Feature Selection via Evolutionary Multitasking [63.91518180604101]
This paper develops a novel EMT framework for high-dimensional feature selection problems, namely MO-FSEMT.
A task-specific knowledge transfer mechanism is designed to leverage the advantage information of each task, enabling the discovery and effective transmission of high-quality solutions.
arXiv Detail & Related papers (2024-01-03T06:34:39Z)
- Multitasking Evolutionary Algorithm Based on Adaptive Seed Transfer for Combinatorial Problem [2.869730777051168]
Evolutionary multitasking optimization (EMTO) has become an emerging topic in the EC community.
MTEA-AST can adaptively transfer knowledge in both same-domain and cross-domain many-task environments.
The proposed method shows competitive performance compared to other state-of-the-art EMTOs in experiments consisting of four COPs.
arXiv Detail & Related papers (2023-08-24T08:43:32Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- Transfer Learning for Sequence Generation: from Single-source to Multi-source [50.34044254589968]
We propose a two-stage finetuning method to alleviate the pretrain-finetune discrepancy and introduce a novel MSG model with a fine encoder to learn better representations in MSG tasks.
Our approach achieves new state-of-the-art results on the WMT17 APE task and multi-source translation task using the WMT14 test set.
arXiv Detail & Related papers (2021-05-31T09:12:38Z)
- Efficient Feature Transformations for Discriminative and Generative Continual Learning [98.10425163678082]
We propose a simple task-specific feature map transformation strategy for continual learning.
These transformations provide powerful flexibility for learning new tasks, achieved with minimal parameters added to the base architecture.
We demonstrate the efficacy and efficiency of our method with an extensive set of experiments in discriminative (CIFAR-100 and ImageNet-1K) and generative sequences of tasks.
arXiv Detail & Related papers (2021-03-25T01:48:14Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization in which data collected across different domains help improving the learning performance at each other task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
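The cross-learning entry's parameter-coupling idea can be illustrated with a toy proximal penalty that pulls each task's linear-regression weights toward their mean; this is a hedged sketch on invented data, not the paper's formulation.

```python
import numpy as np

def coupled_fit(Xs, ys, lam, lr=0.05, steps=500):
    """Gradient descent per task, with a proximal pull toward the mean weights."""
    W = np.zeros((len(Xs), Xs[0].shape[1]))
    for _ in range(steps):
        mean_w = W.mean(axis=0)
        for t, (X, y) in enumerate(zip(Xs, ys)):
            # task-specific least-squares gradient + coupling toward the mean
            grad = X.T @ (X @ W[t] - y) / len(y) + lam * (W[t] - mean_w)
            W[t] = W[t] - lr * grad
    return W

rng = np.random.default_rng(0)
def make_task(true_w):
    X = rng.standard_normal((100, 2))
    return X, X @ true_w + 0.1 * rng.standard_normal(100)

# two related tasks with similar but distinct ground-truth weights
Xs, ys = zip(*[make_task(np.array([1.0, 2.0])), make_task(np.array([1.5, 1.5]))])
W_uncoupled = coupled_fit(Xs, ys, lam=0.0)
W_coupled = coupled_fit(Xs, ys, lam=5.0)
```

With the coupling on, the two weight vectors end up closer together than independent fits, which is the cross-fertilization effect the summary describes.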
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.