On the reliability of feature attribution methods for speech classification
- URL: http://arxiv.org/abs/2505.16406v1
- Date: Thu, 22 May 2025 08:59:25 GMT
- Title: On the reliability of feature attribution methods for speech classification
- Authors: Gaofei Shen, Hosein Mohebbi, Arianna Bisazza, Afra Alishahi, Grzegorz Chrupała
- Abstract summary: We study how factors such as input type and aggregation and perturbation timespan impact the reliability of standard feature attribution methods. We find that standard approaches to feature attribution are generally unreliable when applied to the speech domain.
- Score: 5.0727678479257685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the capabilities of large-scale pre-trained models evolve, understanding the determinants of their outputs becomes more important. Feature attribution aims to reveal which parts of the input elements contribute the most to model outputs. In speech processing, the unique characteristics of the input signal make the application of feature attribution methods challenging. We study how factors such as input type and aggregation and perturbation timespan impact the reliability of standard feature attribution methods, and how these factors interact with characteristics of each classification task. We find that standard approaches to feature attribution are generally unreliable when applied to the speech domain, with the exception of word-aligned perturbation methods when applied to word-based classification tasks.
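To make the perturbation setting concrete, below is a minimal sketch of occlusion-style attribution over word-aligned spans of a waveform: each word's span is silenced in turn, and the drop in the target class probability is taken as that word's attribution score. The `classify` callable and the word alignments are hypothetical placeholders, not the paper's actual pipeline.

```python
import numpy as np

def word_aligned_attribution(waveform, word_spans, classify, target_class):
    """Occlusion-style attribution: silence each word-aligned span and
    measure the drop in the target class probability.

    waveform:   1-D numpy array of audio samples
    word_spans: list of (start_sample, end_sample) pairs, one per word
    classify:   hypothetical callable mapping a waveform to class probabilities
    """
    base_prob = classify(waveform)[target_class]
    scores = []
    for start, end in word_spans:
        perturbed = waveform.copy()
        perturbed[start:end] = 0.0  # silence the word-aligned span
        scores.append(base_prob - classify(perturbed)[target_class])
    return np.array(scores)  # one attribution score per word
```

Frame-level or fixed-window variants replace `word_spans` with uniform segments; the choice of perturbation timespan is one of the factors whose reliability the paper examines.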
Related papers
- Greedy feature selection: Classifier-dependent feature selection via greedy methods [2.4374097382908477]
The purpose of this study is to introduce a new approach to feature ranking for classification tasks, referred to in what follows as greedy feature selection.
The benefits of this scheme are investigated theoretically in terms of model capacity indicators, such as the Vapnik-Chervonenkis (VC) dimension or kernel alignment (a minimal sketch of the greedy loop follows this entry).
arXiv Detail & Related papers (2024-03-08T08:12:05Z)
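Read as classic forward selection (one plausible instantiation; the paper's exact greedy scheme may differ), the loop looks like the sketch below, where the classifier and cross-validation settings are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_forward_selection(X, y, n_features):
    """Greedily add the feature that most improves cross-validated accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        best_score, best_j = -np.inf, None
        for j in remaining:
            cols = selected + [j]
            score = cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, cols], y, cv=5).mean()
            if score > best_score:
                best_score, best_j = score, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected  # feature indices in the order they were added
```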
- Prospector Heads: Generalized Feature Attribution for Large Models & Data [82.02696069543454]
We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods.
We demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data.
arXiv Detail & Related papers (2024-02-18T23:01:28Z)
- Causal Feature Selection via Transfer Entropy [59.999594949050596]
Causal discovery aims to identify causal relationships between features with observational data.
We introduce a new causal feature selection approach that relies on forward and backward feature selection procedures (the transfer-entropy criterion is recalled after this entry).
We provide theoretical guarantees on the regression and classification errors for both the exact and the finite-sample cases.
arXiv Detail & Related papers (2023-10-17T08:04:45Z)
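For reference, the transfer entropy from a candidate feature X to the target Y at lag 1 is the standard quantity below; the paper's selection criterion is presumably built on a variant of it.

```latex
% Transfer entropy from X to Y (lag-1, standard definition)
TE_{X \to Y} = \sum_{y_{t+1},\, y_t,\, x_t} p(y_{t+1}, y_t, x_t)\,
\log \frac{p(y_{t+1} \mid y_t, x_t)}{p(y_{t+1} \mid y_t)}
```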
- Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information [7.022948483613112]
This work is a step towards evaluating procedural fairness, where unfair processes lead to unfair outcomes.
The produced knowledge can guide debiasing techniques to ensure that important concepts besides identity terms are well-represented in training datasets.
arXiv Detail & Related papers (2022-10-19T16:03:25Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Feature Selection for Discovering Distributional Treatment Effect Modifiers [37.09619678733784]
We propose a framework for finding features relevant to the difference in treatment effects.
We derive a feature importance measure that quantifies how strongly the feature attributes influence the discrepancy between potential outcome distributions.
We then develop a feature selection algorithm that can control the type I error rate to the desired level.
arXiv Detail & Related papers (2022-06-01T14:25:32Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic and stochastic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Learning Causal Semantic Representation for Out-of-Distribution Prediction [125.38836464226092]
We propose a Causal Semantic Generative model (CSG) based on causal reasoning, so that the semantic and variation factors are modeled separately.
We show that CSG can identify the semantic factor by fitting training data, and this semantic identification guarantees the boundedness of the OOD generalization error.
arXiv Detail & Related papers (2020-11-03T13:16:05Z)
- Fantastic Features and Where to Find Them: Detecting Cognitive Impairment with a Subsequence Classification Guided Approach [6.063165888023164]
We describe a new approach to feature engineering that leverages sequential machine learning models and domain knowledge to predict which features help enhance performance.
We demonstrate that CI classification accuracy improves by 2.3% over a strong baseline when using features produced by this method.
arXiv Detail & Related papers (2020-10-13T17:57:18Z)
- Nonparametric Feature Impact and Importance [0.6123324869194193]
We give mathematical definitions of feature impact and importance, derived from partial dependence curves, that operate directly on the data (a sketch of the underlying partial dependence computation follows this entry).
To assess quality, we show that features ranked by these definitions are competitive with existing feature selection techniques.
arXiv Detail & Related papers (2020-06-08T17:07:35Z)
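As a reminder of the underlying quantity (the standard empirical partial dependence, not the paper's refinement of it), the curve for a single feature averages the model's predictions over the data while pinning that feature to each grid value; `model_predict` is an assumed fitted-model callable:

```python
import numpy as np

def partial_dependence(model_predict, X, feature, grid):
    """Empirical partial dependence of a fitted model on one feature.

    For each grid value v, set column `feature` to v for every row of X
    and average the predictions: PD_j(v) = mean_i f(x_i with x_ij = v).
    """
    curve = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v
        curve.append(model_predict(X_mod).mean())
    return np.array(curve)
```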
- Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data (the classic influence formulation is recalled after this entry).
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
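For context, the classic influence-function estimate (as popularized by Koh & Liang, 2017) of how upweighting a training point z changes the loss at a test point z_test is:

```latex
% Influence of training point z on the loss at z_test;
% H is the Hessian of the training loss at the fitted parameters \hat{\theta}
\mathcal{I}(z, z_{\mathrm{test}}) =
  -\,\nabla_\theta L(z_{\mathrm{test}}, \hat{\theta})^{\top}
  H_{\hat{\theta}}^{-1}\,
  \nabla_\theta L(z, \hat{\theta})
```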
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.