Information-theoretic Quantification of High-order Feature Effects in Classification Problems
- URL: http://arxiv.org/abs/2507.04362v1
- Date: Sun, 06 Jul 2025 11:50:30 GMT
- Title: Information-theoretic Quantification of High-order Feature Effects in Classification Problems
- Authors: Ivan Lazic, Chiara Barà, Marta Iovino, Sebastiano Stramaglia, Niksa Jakovljevic, Luca Faes
- Abstract summary: We present an information-theoretic extension of the High-order interactions for Feature importance (Hi-Fi) method. Our framework decomposes feature contributions into unique, synergistic, and redundant components. Results indicate that the proposed estimator accurately recovers theoretical and expected findings.
- Score: 0.19791587637442676
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding the contribution of individual features in predictive models remains a central goal in interpretable machine learning, and while many model-agnostic methods exist to estimate feature importance, they often fall short in capturing high-order interactions and disentangling overlapping contributions. In this work, we present an information-theoretic extension of the High-order interactions for Feature importance (Hi-Fi) method, leveraging Conditional Mutual Information (CMI) estimated via a k-Nearest Neighbor (kNN) approach working on mixed discrete and continuous random variables. Our framework decomposes feature contributions into unique, synergistic, and redundant components, offering a richer, model-independent understanding of their predictive roles. We validate the method using synthetic datasets with known Gaussian structures, where ground truth interaction patterns are analytically derived, and further test it on non-Gaussian and real-world gene expression data from TCGA-BRCA. Results indicate that the proposed estimator accurately recovers theoretical and expected findings, providing a potential use case for developing feature selection algorithms or model development based on interaction analysis.
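The abstract's central computational ingredient is kNN-based estimation of mutual information between a continuous feature and a discrete class label. A minimal sketch of this idea, following the Ross-style nearest-neighbour estimator for mixed discrete/continuous variables, is shown below. This is not the paper's implementation: the function name, the choice of `k`, and the hand-rolled digamma routine are illustrative, and the sketch assumes every class has more than `k` samples.

```python
import numpy as np
from math import log

def _digamma(x):
    # Digamma via the recurrence psi(x) = psi(x+1) - 1/x,
    # then an asymptotic series once x >= 6 (accurate for x > 0).
    r = 0.0
    while x < 6:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - inv2 * (1/12 - inv2 * (1/120 - inv2 / 252))

def mi_mixed_knn(x, y, k=3):
    """Ross-style kNN estimate of I(X;Y) for continuous x, discrete y.

    For each sample: find the radius to its k-th nearest same-class
    neighbour, count how many points of ANY class fall within it (m_i),
    and combine the counts with digamma terms.
    Assumes each class contains more than k samples.
    """
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    y = np.asarray(y)
    n = len(y)
    m, n_y = np.empty(n), np.empty(n)
    for i in range(n):
        same = np.flatnonzero(y == y[i])
        n_y[i] = len(same)
        # distances to same-class points, excluding the zero self-distance
        d_same = np.sort(np.linalg.norm(x[same] - x[i], axis=1))[1:]
        r = d_same[k - 1]                       # k-th same-class neighbour
        d_all = np.linalg.norm(x - x[i], axis=1)
        m[i] = np.count_nonzero(d_all <= r) - 1  # exclude the point itself
    return (_digamma(n) - np.mean([_digamma(v) for v in n_y])
            + _digamma(k) - np.mean([_digamma(v) for v in m]))
```

On data where the class determines the feature's mean, the estimate approaches ln(number of classes); on independent data it hovers near zero. The conditional variant (CMI) used in the paper additionally conditions on other features, which this sketch does not cover.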
Related papers
- Detecting and Pruning Prominent but Detrimental Neurons in Large Language Models [68.57424628540907]
Large language models (LLMs) often develop learned mechanisms specialized to specific datasets.
We introduce a fine-tuning approach designed to enhance generalization by identifying and pruning neurons associated with dataset-specific mechanisms.
Our method employs Integrated Gradients to quantify each neuron's influence on high-confidence predictions, pinpointing those that disproportionately contribute to dataset-specific performance.
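The Integrated Gradients attribution mentioned in this entry is a path integral of the model's gradient from a baseline to the input, usually approximated by a Riemann sum. A minimal sketch for a model with an analytic gradient follows; the function names and the midpoint-rule choice are illustrative, not taken from the paper.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=50):
    """Midpoint-rule approximation of Integrated Gradients:

        IG_i = (x_i - b_i) * \int_0^1 [df/dx_i](b + a * (x - b)) da

    grad_f must return the gradient of the scalar model output
    with respect to its input vector.
    """
    alphas = (np.arange(steps) + 0.5) / steps    # midpoints of [0, 1]
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps
```

A useful sanity check is the completeness property: the attributions sum to the difference in model output between the input and the baseline. For f(x) = sum(x**2) with a zero baseline, the attribution of each coordinate is exactly x_i**2.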
arXiv Detail & Related papers (2025-07-12T08:10:10Z)
- Accurate and Uncertainty-Aware Multi-Task Prediction of HEA Properties Using Prior-Guided Deep Gaussian Processes [0.48370725368505757]
This study systematically evaluates the fitting performance of four prominent surrogate models: conventional Gaussian Processes (cGP), Deep Gaussian Processes (DGP), encoder-decoder neural networks for multi-output regression, and XGBoost, applied to a hybrid dataset of experimental and computational properties in the AlCoCrCuFeMnNiV HEA system.
arXiv Detail & Related papers (2025-06-13T17:40:21Z)
- SPIN: SE(3)-Invariant Physics Informed Network for Binding Affinity Prediction [3.406882192023597]
Accurate prediction of protein-ligand binding affinity is crucial for drug development.
Traditional methods often fail to accurately model the complex's spatial information.
We propose SPIN, a model that incorporates various inductive biases applicable to this task.
arXiv Detail & Related papers (2024-07-10T08:40:07Z)
- GenBench: A Benchmarking Suite for Systematic Evaluation of Genomic Foundation Models [56.63218531256961]
We introduce GenBench, a benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models.
GenBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies.
We provide a nuanced analysis of the interplay between model architecture and dataset characteristics on task-specific performance.
arXiv Detail & Related papers (2024-06-01T08:01:05Z)
- Integrated Gradient Correlation: a Dataset-wise Attribution Method [0.0]
We present a dataset-wise attribution method called Integrated Gradient Correlation (IGC).
IGC enables region-specific analysis by a direct summation over associated components, and further relates the sum of all attributions to a model prediction score (correlation).
We demonstrate IGC on synthetic data and fMRI neural signals (NSD dataset) with the study of the representation of image features in the brain.
arXiv Detail & Related papers (2024-04-22T06:42:21Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z)
- Spatio-temporally separable non-linear latent factor learning: an application to somatomotor cortex fMRI data [0.0]
Models of fMRI data that can perform whole-brain discovery of latent factors are understudied.
New methods for efficient spatial weight-sharing are critical to deal with the high dimensionality of the data and the presence of noise.
Our approach is evaluated on data with multiple motor sub-tasks to assess whether the model captures disentangled latent factors that correspond to each sub-task.
arXiv Detail & Related papers (2022-05-26T21:30:22Z)
- Hybrid Feature- and Similarity-Based Models for Prediction and Interpretation using Large-Scale Observational Data [0.0]
We propose a hybrid feature- and similarity-based model for supervised learning.
The proposed hybrid model is fit by convex optimization with a sparsity-inducing penalty on the kernel portion.
We compared our models to solely feature- and similarity-based approaches using synthetic data and using EHR data to predict risk of loneliness or social isolation.
arXiv Detail & Related papers (2022-04-12T20:37:03Z)
- Functional Mixtures-of-Experts [0.24578723416255746]
We consider the statistical analysis of heterogeneous data for prediction in situations where the observations include functions.
We first present a new family of ME models, named functional ME (FME), in which the predictors are potentially noisy functional observations.
We develop dedicated expectation--maximization algorithms for Lasso-like (EM-Lasso) regularized maximum-likelihood parameter estimation strategies to fit the models.
arXiv Detail & Related papers (2022-02-04T17:32:28Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- Bayesian Sparse Factor Analysis with Kernelized Observations [67.60224656603823]
Multi-view problems can be addressed with latent variable models.
High-dimensionality and non-linearity are traditionally handled by kernel methods.
We propose merging both approaches into a single model.
arXiv Detail & Related papers (2020-06-01T14:25:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.