AEFS: Adaptive Early Feature Selection for Deep Recommender Systems
- URL: http://arxiv.org/abs/2509.12076v1
- Date: Mon, 15 Sep 2025 16:04:24 GMT
- Title: AEFS: Adaptive Early Feature Selection for Deep Recommender Systems
- Authors: Fan Hu, Gaofeng Lu, Jun Chen, Chaonan Guo, Yuekui Yang, Xirong Li
- Abstract summary: Feature selection has emerged as a crucial technique in refining recommender systems. Recent advancements leveraging Automated Machine Learning (AutoML) have drawn significant attention. We introduce Adaptive Early Feature Selection (AEFS), a very simple method that not only adaptively selects informative features for each instance, but also significantly reduces the activated parameters of the embedding layer.
- Score: 12.331161467904531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Feature selection has emerged as a crucial technique in refining recommender systems. Recent advancements leveraging Automated Machine Learning (AutoML) have drawn significant attention, particularly in two main categories: early feature selection and late feature selection, differentiated by whether the selection occurs before or after the embedding layer. Early feature selection selects a fixed subset of features and retrains the model, while late feature selection, known as adaptive feature selection, dynamically adjusts feature choices for each data instance, recognizing the variability in feature significance. Although adaptive feature selection has shown remarkable improvements in performance, its main drawback lies in its post-embedding-layer feature selection. This process often becomes cumbersome and inefficient in large-scale recommender systems with billions of ID-type features, leading to a highly sparse and parameter-heavy embedding layer. To overcome this, we introduce Adaptive Early Feature Selection (AEFS), a very simple method that not only adaptively selects informative features for each instance, but also significantly reduces the activated parameters of the embedding layer. AEFS employs a dual-model architecture, encompassing an auxiliary model dedicated to feature selection and a main model responsible for prediction. To ensure effective alignment between these two models, we incorporate two collaborative training loss constraints. Our extensive experiments on three benchmark datasets validate the efficiency and effectiveness of our approach. Notably, AEFS matches the performance of current state-of-the-art Adaptive Late Feature Selection methods while achieving a significant reduction of 37.5% in the activated parameters of the embedding layer. AEFS is open-source at https://github.com/fly-dragon211/AEFS .
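The core idea of the abstract can be sketched in code: an auxiliary model scores feature fields from cheap pre-embedding inputs, and only the selected fields' embeddings are ever looked up, shrinking the activated embedding parameters. The following is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the linear scorer `W_aux`, the per-field input statistic, the top-k selection rule, and the function name `aefs_forward` are all hypothetical simplifications of the paper's auxiliary model and collaborative training losses.

```python
import numpy as np

rng = np.random.default_rng(0)

n_fields, vocab, dim, k = 8, 100, 4, 3  # 8 feature fields, keep top-3 per instance

# One embedding table per ID-type feature field (the parameter-heavy layer).
tables = [rng.normal(size=(vocab, dim)) for _ in range(n_fields)]

# Hypothetical auxiliary model: a tiny linear scorer over cheap per-field
# statistics of the raw input, producing one importance score per field.
W_aux = rng.normal(size=(n_fields, n_fields))

def aefs_forward(field_ids):
    """Select top-k fields BEFORE the embedding lookup (early selection)."""
    stats = field_ids / vocab                  # cheap pre-embedding statistic
    scores = stats @ W_aux                     # per-field importance scores
    keep = np.argsort(scores)[-k:]             # instance-specific field subset
    # Only the selected fields' embeddings are activated / looked up.
    emb = np.stack([tables[f][field_ids[f]] for f in keep])
    activated = k * dim                        # vs. n_fields * dim without AEFS
    return emb, sorted(keep.tolist()), activated

x = rng.integers(0, vocab, size=n_fields)      # one instance: one ID per field
emb, kept_fields, activated = aefs_forward(x)
print(kept_fields, emb.shape, activated)
```

In this toy setting only 3 of 8 embedding lookups fire per instance, which is the source of the reduced activated-parameter count; the paper's two collaborative losses (not shown here) would additionally keep the auxiliary scorer aligned with the main prediction model.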
Related papers
- ssToken: Self-modulated and Semantic-aware Token Selection for LLM Fine-tuning [51.133569963553576]
ssToken is a Self-modulated and Semantic-aware Token Selection approach. We show that both self-modulated selection and semantic-aware selection on their own outperform full-data fine-tuning.
arXiv Detail & Related papers (2025-10-21T03:21:04Z) - SELF: Surrogate-light Feature Selection with Large Language Models in Deep Recommender Systems [51.09233156090496]
SELF is a SurrogatE-Light Feature selection method for deep recommender systems. SELF integrates semantic reasoning from Large Language Models with task-specific learning from surrogate models. Comprehensive experiments on three public datasets from real-world recommender platforms validate the effectiveness of SELF.
arXiv Detail & Related papers (2024-12-11T16:28:18Z) - Adaptive Preference Scaling for Reinforcement Learning with Human Feedback [103.36048042664768]
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z) - Knockoff-Guided Feature Selection via A Single Pre-trained Reinforced Agent [44.84307718534031]
We introduce an innovative framework for feature selection guided by knockoff features and optimized through reinforcement learning.
A deep Q-network, pre-trained with the original features and their corresponding pseudo labels, is employed to improve the efficacy of the exploration process.
A new epsilon-greedy strategy is used, incorporating insights from the pseudo labels to make the feature selection process more effective.
arXiv Detail & Related papers (2024-03-06T19:58:19Z) - Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model over a joint of sequential reconstruction, variational, and performance evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z) - AFS-BM: Enhancing Model Performance through Adaptive Feature Selection with Binary Masking [0.0]
We introduce the "Adaptive Feature Selection with Binary Masking" (AFS-BM) method.
We apply joint optimization and binary masking to continuously adapt the set of features and the model parameters during the training process.
Our results show that AFS-BM yields significant improvements in accuracy while requiring significantly less computational complexity.
arXiv Detail & Related papers (2024-01-20T15:09:41Z) - Class-Imbalanced Semi-Supervised Learning for Large-Scale Point Cloud Semantic Segmentation via Decoupling Optimization [64.36097398869774]
Semi-supervised learning (SSL) has been an active research topic for large-scale 3D scene understanding.
The existing SSL-based methods suffer from severe training bias due to class imbalance and long-tail distributions of the point cloud data.
We introduce a new decoupling optimization framework, which disentangles feature representation learning and the classifier in an alternating optimization manner to shift the biased decision boundary effectively.
arXiv Detail & Related papers (2024-01-13T04:16:40Z) - A Performance-Driven Benchmark for Feature Selection in Tabular Deep Learning [131.2910403490434]
Data scientists typically collect as many features as possible into their datasets, and even engineer new features from existing ones.
Existing benchmarks for tabular feature selection consider classical downstream models, toy synthetic datasets, or do not evaluate feature selectors on the basis of downstream performance.
We construct a challenging feature selection benchmark evaluated on downstream neural networks including transformers.
We also propose an input-gradient-based analogue of Lasso for neural networks that outperforms classical feature selection methods on challenging problems.
arXiv Detail & Related papers (2023-11-10T05:26:10Z) - AutoField: Automating Feature Selection in Deep Recommender Systems [36.70138179483737]
Feature selection is a critical process in developing deep learning-based recommender systems.
We propose an AutoML framework that can adaptively select the essential feature fields in an automatic manner.
arXiv Detail & Related papers (2022-04-19T18:06:02Z) - Model-free feature selection to facilitate automatic discovery of divergent subgroups in tabular data [4.551615447454768]
We propose a model-free and sparsity-based automatic feature selection (SAFS) framework to facilitate automatic discovery of divergent subgroups.
We validated SAFS across two publicly available datasets (MIMIC-III and Allstate Claims) and compared it with six existing feature selection methods.
arXiv Detail & Related papers (2022-03-08T20:42:56Z) - Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.