AutoFS: Automated Feature Selection via Diversity-aware Interactive
Reinforcement Learning
- URL: http://arxiv.org/abs/2008.12001v3
- Date: Wed, 16 Sep 2020 08:12:04 GMT
- Title: AutoFS: Automated Feature Selection via Diversity-aware Interactive
Reinforcement Learning
- Authors: Wei Fan, Kunpeng Liu, Hao Liu, Pengyang Wang, Yong Ge and Yanjie Fu
- Abstract summary: We study the problem of balancing effectiveness and efficiency in automated feature selection.
Motivated by this computational dilemma, the study develops a novel feature-space navigation method.
- Score: 34.33231470225591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study the problem of balancing effectiveness and efficiency
in automated feature selection. Feature selection is fundamental to machine
learning and predictive analysis. After exploring many feature
selection methods, we observe a computational dilemma: 1) traditional feature
selection methods (e.g., mRMR) are mostly efficient, but often fail to identify
the best subset; 2) the emerging reinforced feature selection methods
automatically navigate feature space to explore the best subset, but are
usually inefficient. Are automation and efficiency inherently at odds? Can we
bridge the gap between effectiveness and efficiency under automation?
Motivated by this computational dilemma, this study develops a novel
feature-space navigation method. To that end, we propose an Interactive
Reinforced Feature Selection (IRFS) framework that guides agents by not just
self-exploration experience, but also diverse external skilled trainers to
accelerate learning for feature exploration. Specifically, we formulate the
feature selection problem into an interactive reinforcement learning framework.
In this framework, we first model two trainers skilled at different searching
strategies: (1) KBest based trainer; (2) Decision Tree based trainer. We then
develop two strategies: (1) identifying assertive and hesitant agents to
diversify agent training, and (2) letting the two trainers take the teaching
role at different stages, fusing their experiences and diversifying the
teaching process. Such a hybrid teaching strategy helps agents learn broader
knowledge and thereby become more effective. Finally, we
present extensive experiments on real-world datasets to demonstrate the
improved performance of our method: more efficient than existing reinforced
selection and more effective than classic selection.
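The teaching machinery described in the abstract (two skilled trainers, assertive vs. hesitant agents, and stage-wise handover of the teaching role) can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the function names, the variance-style scores, the selection-frequency criterion for hesitancy, and the single switch-over episode are all simplifying assumptions made for illustration.

```python
def kbest_advice(scores, k):
    """Trainer 1 (KBest-style): recommend the top-k features
    ranked by a per-feature relevance score."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(ranked[:k])

def tree_advice(importances, threshold):
    """Trainer 2 (decision-tree-style): recommend features whose
    importance exceeds a threshold."""
    return {i for i, imp in enumerate(importances) if imp > threshold}

def split_agents(selection_freqs, lo=0.3, hi=0.7):
    """Label agents: an agent that selects its feature with a frequency
    near 0 or 1 is 'assertive'; one that flip-flops is 'hesitant'.
    The frequency bands are hypothetical."""
    assertive, hesitant = [], []
    for agent, freq in selection_freqs.items():
        (hesitant if lo < freq < hi else assertive).append(agent)
    return assertive, hesitant

def teaching_schedule(episode, switch_at):
    """Hybrid teaching: trainer 1 advises early episodes,
    trainer 2 takes over in the later stage."""
    return "kbest" if episode < switch_at else "tree"
```

In a full IRFS loop, only the hesitant agents would be nudged toward the active trainer's advice at each step, while assertive agents continue self-exploration, e.g. `advice = kbest_advice(scores, k) if teaching_schedule(ep, 50) == "kbest" else tree_advice(importances, 0.1)`.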
Related papers
- Automation and Feature Selection Enhancement with Reinforcement Learning (RL) [0.0]
Reinforcement learning integrated with a decision tree improves feature knowledge, state representation, and selection efficiency.
Monte Carlo-based reinforced feature selection (MCRFS), a single-agent feature selection method, reduces the computational burden.
A dual-agent RL framework is also introduced that collectively selects features and instances, capturing the interactions between them.
arXiv Detail & Related papers (2025-03-15T04:30:55Z) - Multi-Type Preference Learning: Empowering Preference-Based Reinforcement Learning with Equal Preferences [12.775486996512434]
Preference-based reinforcement learning (PBRL) learns directly from the preferences of human teachers regarding agent behaviors.
Existing PBRL methods often learn from explicit preferences, neglecting the possibility that teachers may choose equal preferences.
We propose a novel PBRL method, Multi-Type Preference Learning (MTPL), which allows simultaneous learning from equal preferences while leveraging existing methods for learning from explicit preferences.
arXiv Detail & Related papers (2024-09-11T13:43:49Z) - Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank? [15.757181795925336]
Neural ranking models have become increasingly popular for real-world search and recommendation systems.
Unlike their tree-based counterparts, neural models are much less interpretable.
This is particularly disadvantageous since interpretability is highly important for real-world systems.
arXiv Detail & Related papers (2024-05-13T14:26:29Z) - RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z) - The Paradox of Choice: Using Attention in Hierarchical Reinforcement
Learning [59.777127897688594]
We present an online, model-free algorithm to learn affordances that can be used to further learn subgoal options.
We investigate the role of hard versus soft attention in training data collection, abstract value learning in long-horizon tasks, and handling a growing number of choices.
arXiv Detail & Related papers (2022-01-24T13:18:02Z) - Efficient Reinforced Feature Selection via Early Stopping Traverse
Strategy [36.890295071860166]
We propose a single-agent Monte Carlo based reinforced feature selection (MCRFS) method.
We also propose two efficiency improvement strategies, i.e., early stopping (ES) strategy and reward-level interactive (RI) strategy.
arXiv Detail & Related papers (2021-09-29T03:51:13Z) - PEBBLE: Feedback-Efficient Interactive Reinforcement Learning via
Relabeling Experience and Unsupervised Pre-training [94.87393610927812]
We present an off-policy, interactive reinforcement learning algorithm that capitalizes on the strengths of both feedback and off-policy learning.
We demonstrate that our approach is capable of learning tasks of higher complexity than previously considered by human-in-the-loop methods.
arXiv Detail & Related papers (2021-06-09T14:10:50Z) - Training ELECTRA Augmented with Multi-word Selection [53.77046731238381]
We present a new text encoder pre-training method that improves ELECTRA based on multi-task learning.
Specifically, we train the discriminator to simultaneously detect replaced tokens and select original tokens from candidate sets.
arXiv Detail & Related papers (2021-05-31T23:19:00Z) - Interactive Reinforcement Learning for Feature Selection with Decision
Tree in the Loop [41.66297299506421]
We study the problem of balancing effectiveness and efficiency in automated feature selection.
We propose a novel interactive and closed-loop architecture to simultaneously model interactive reinforcement learning (IRL) and decision tree feedback (DTF).
We present extensive experiments on real-world datasets to show the improved performance.
arXiv Detail & Related papers (2020-10-02T18:09:57Z) - Towards Efficient Processing and Learning with Spikes: New Approaches
for Multi-Spike Learning [59.249322621035056]
We propose two new multi-spike learning rules which demonstrate better performance over other baselines on various tasks.
In the feature detection task, we re-examine the ability of unsupervised STDP and present its limitations.
Our proposed learning rules can reliably solve the task over a wide range of conditions without specific constraints being applied.
arXiv Detail & Related papers (2020-05-02T06:41:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.