Dynamic Feature Selection based on Rule-based Learning for Explainable Classification with Uncertainty Quantification
- URL: http://arxiv.org/abs/2508.02566v1
- Date: Mon, 04 Aug 2025 16:21:43 GMT
- Title: Dynamic Feature Selection based on Rule-based Learning for Explainable Classification with Uncertainty Quantification
- Authors: Javier Fumanal-Idocin, Raquel Fernandez-Peralta, Javier Andreu-Perez
- Abstract summary: Dynamic feature selection (DFS) offers a compelling alternative to traditional, static feature selection. DFS customizes feature selection per sample, providing insight into the decision-making process for each case.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic feature selection (DFS) offers a compelling alternative to traditional, static feature selection by adapting the selected features to each individual sample. Unlike classical methods that apply a uniform feature set, DFS customizes feature selection per sample, providing insight into the decision-making process for each case. DFS is especially significant in settings where decision transparency is key, e.g., clinical decisions; however, existing methods use opaque models, which hinders their applicability in real-life scenarios. This paper introduces a novel approach leveraging a rule-based system as the base classifier for the DFS process, which enhances decision interpretability compared to neural estimators. We also show how this method provides a quantitative measure of uncertainty for each feature query and can make the feature selection process computationally lighter by constraining the feature search space. We also discuss when greedy selection by conditional mutual information is equivalent to selecting the features that minimize the difference with respect to the global model predictions. Finally, we demonstrate the competitive performance of our rule-based DFS approach against established and state-of-the-art greedy and reinforcement learning methods, which, unlike our explainable rule-based system, are generally opaque.
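The greedy strategy mentioned in the abstract can be illustrated with a minimal, self-contained sketch: for a single sample, repeatedly query the not-yet-observed feature whose value is expected to reduce the label entropy the most, given the features observed so far (i.e., the feature with the highest empirical conditional mutual information with the label). This is a toy illustration under simplifying assumptions (discrete features, exact empirical estimates), not the paper's rule-based method; all function and variable names are illustrative.

```python
# Toy sketch of greedy dynamic feature selection by conditional mutual
# information (CMI), estimated empirically on a small discrete dataset.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label multiset."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def expected_cmi(data, labels, observed, feature):
    """Expected drop in H(Y) from querying `feature`, restricted to
    training rows consistent with the feature values observed so far."""
    rows = [i for i, x in enumerate(data)
            if all(x[j] == v for j, v in observed.items())]
    if not rows:
        return 0.0
    base = entropy([labels[i] for i in rows])
    cond = 0.0
    for val in set(data[i][feature] for i in rows):
        sub = [i for i in rows if data[i][feature] == val]
        cond += len(sub) / len(rows) * entropy([labels[i] for i in sub])
    return base - cond

def greedy_select(data, labels, sample, budget):
    """Sequentially query up to `budget` features of one sample."""
    observed = {}
    for _ in range(budget):
        remaining = [f for f in range(len(data[0])) if f not in observed]
        gains = {f: expected_cmi(data, labels, observed, f) for f in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:   # no feature is expected to be informative
            break
        observed[best] = sample[best]   # "query" the feature's true value
    return observed

# Tiny dataset: feature 0 determines the label, feature 1 is pure noise.
data   = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]
picked = greedy_select(data, labels, sample=(1, 0), budget=2)
print(picked)   # -> {0: 1}: only the informative feature gets queried
```

Because the selection is per-sample and conditioned on values already seen, different samples can end up querying different features, which is precisely what distinguishes DFS from static feature selection.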
Related papers
- Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why [50.191655141020505]
This survey provides a comparative analysis of feature-based and GAN-based approaches to learning from demonstrations. We argue that the dichotomy between feature-based and GAN-based methods is increasingly nuanced.
arXiv Detail & Related papers (2025-07-08T11:45:51Z)
- Flexible Selective Inference with Flow-based Transport Maps [7.197592390105458]
This paper introduces a new method that leverages tools from flow-based generative modeling to approximate a potentially complex conditional distribution. We demonstrate that this method enables flexible selective inference by providing valid p-values and confidence sets for adaptively selected hypotheses and parameters.
arXiv Detail & Related papers (2025-06-01T20:05:20Z)
- Feature Selection for Latent Factor Models [2.07180164747172]
Feature selection is crucial for pinpointing relevant features in high-dimensional datasets. Traditional feature selection methods for classification use data from all classes to select features for each class. This paper explores feature selection methods that select features for each class separately, using class models based on low-rank generative methods.
arXiv Detail & Related papers (2024-12-13T13:20:10Z)
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Greedy feature selection: Classifier-dependent feature selection via greedy methods [2.4374097382908477]
The purpose of this study is to introduce a new approach to feature ranking for classification tasks, called greedy feature selection in what follows.
The benefits of such a scheme are investigated theoretically in terms of model capacity indicators, such as the Vapnik-Chervonenkis (VC) dimension or the kernel alignment.
arXiv Detail & Related papers (2024-03-08T08:12:05Z)
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model over a joint objective of sequential reconstruction, variational, and performance-evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- Learning to Maximize Mutual Information for Dynamic Feature Selection [13.821253491768168]
We consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information.
We explore a simpler approach of greedily selecting features based on their conditional mutual information.
The proposed method is shown to recover the greedy policy when trained to optimality, and it outperforms numerous existing feature selection methods in our experiments.
arXiv Detail & Related papers (2023-01-02T08:31:56Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
- On the Adversarial Robustness of LASSO Based Feature Selection [72.54211869067979]
In the considered model, there is a malicious adversary who can observe the whole dataset, and then will carefully modify the response values or the feature matrix.
We formulate the modification strategy of the adversary as a bi-level optimization problem.
Numerical examples with synthetic and real data illustrate that our method is efficient and effective.
arXiv Detail & Related papers (2020-10-20T05:51:26Z)
- On-the-Fly Joint Feature Selection and Classification [16.84451472788859]
We propose a framework to perform joint feature selection and classification on-the-fly.
We derive the optimum solution of the associated optimization problem and analyze its structure.
We evaluate the performance of the proposed algorithms on several public datasets.
arXiv Detail & Related papers (2020-04-21T19:19:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.