Hypothesis Testing and Machine Learning: Interpreting Variable Effects
in Deep Artificial Neural Networks using Cohen's f2
- URL: http://arxiv.org/abs/2302.01407v1
- Date: Thu, 2 Feb 2023 20:43:37 GMT
- Title: Hypothesis Testing and Machine Learning: Interpreting Variable Effects
in Deep Artificial Neural Networks using Cohen's f2
- Authors: Wolfgang Messner
- Abstract summary: Deep artificial neural networks show high predictive performance in many fields.
But they do not afford statistical inferences and their black-box operations are too complicated for humans to comprehend.
This article extends current XAI methods and develops a model-agnostic hypothesis testing framework for machine learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep artificial neural networks show high predictive performance in many
fields, but they do not afford statistical inferences and their black-box
operations are too complicated for humans to comprehend. Because positing that
a relationship exists is often more important than prediction in scientific
experiments and research models, machine learning is far less frequently used
than inferential statistics. Additionally, statistics calls for improving the
test of theory by showing the magnitude of the phenomena being studied. This
article extends current XAI methods and develops a model-agnostic hypothesis
testing framework for machine learning. First, Fisher's variable permutation
algorithm is tweaked to compute an effect size measure equivalent to Cohen's f2
for OLS regression models. Second, the Mann-Kendall test of monotonicity and
the Theil-Sen estimator are applied to Apley's accumulated local effect plots to
specify a variable's direction of influence and statistical significance. The
usefulness of this approach is demonstrated on an artificial data set and a
social survey with a Python sandbox implementation.
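Below is a minimal sketch of the two steps, assuming a small scikit-learn MLPRegressor stands in for the deep network; it illustrates the general recipe, not the paper's sandbox implementation. The synthetic data, the number of permutation repeats and ALE bins, and the formula f2_j = (R2_full - R2_permuted) / (1 - R2_full) are assumptions made here for concreteness.

```python
import numpy as np
from scipy.stats import kendalltau, theilslopes
from sklearn.metrics import r2_score
from sklearn.neural_network import MLPRegressor

# Synthetic data: x0 has a strong linear effect, x1 a weaker nonlinear one, x2 is noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=500)

# A small feed-forward network stands in for the deep model under study.
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X, y)
r2_full = r2_score(y, model.predict(X))

def f2_effect_size(model, X, y, j, r2_full, n_repeats=20):
    """Permutation-based analogue of Cohen's f2 for feature j:
    (R2_full - R2_with_j_permuted) / (1 - R2_full)."""
    r2_perm = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature's link to y
        r2_perm.append(r2_score(y, model.predict(Xp)))
    return (r2_full - np.mean(r2_perm)) / (1.0 - r2_full)

def ale_1d(model, X, j, n_bins=10):
    """Coarse binned approximation of Apley's 1-D accumulated local effects."""
    edges = np.quantile(X[:, j], np.linspace(0.0, 1.0, n_bins + 1))
    bin_idx = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, n_bins - 1)
    centers, local = [], []
    for k in range(n_bins):
        Xk = X[bin_idx == k]
        if len(Xk) == 0:
            continue
        lo, hi = Xk.copy(), Xk.copy()
        lo[:, j], hi[:, j] = edges[k], edges[k + 1]
        local.append(np.mean(model.predict(hi) - model.predict(lo)))
        centers.append(0.5 * (edges[k] + edges[k + 1]))
    ale = np.cumsum(local)
    return np.array(centers), ale - ale.mean()

for j in range(X.shape[1]):
    f2 = f2_effect_size(model, X, y, j, r2_full)
    grid, ale = ale_1d(model, X, j)
    tau, p_value = kendalltau(grid, ale)     # Mann-Kendall-style monotonic trend check
    slope, _, _, _ = theilslopes(ale, grid)  # robust Theil-Sen slope of the ALE curve
    print(f"x{j}: f2 = {f2:.3f}, tau = {tau:.2f} (p = {p_value:.3f}), slope = {slope:.2f}")
```

With this toy setup one would expect x0 to show a large f2 and a significantly monotone ALE curve, while the pure-noise feature x2 should come out near zero on both measures.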
Related papers
- Information plane and compression-gnostic feedback in quantum machine learning [0.0]
The information plane has been proposed as an analytical tool for studying the learning dynamics of neural networks.
We study how insight into how much the model compresses the input data can be used to improve a learning algorithm.
We benchmark the proposed learning algorithms on several classification and regression tasks using variational quantum circuits.
arXiv Detail & Related papers (2024-11-04T17:38:46Z)
- Mechanism learning: Reverse causal inference in the presence of multiple unknown confounding through front-door causal bootstrapping [0.8901073744693314]
A major limitation of machine learning (ML) prediction models is that they recover associational, rather than causal, predictive relationships between variables.
This paper proposes mechanism learning, a simple method which uses front-door causal bootstrapping to deconfound observational data.
We test our method on fully synthetic, semi-synthetic and real-world datasets, demonstrating that it can discover reliable, unbiased, causal ML predictors.
arXiv Detail & Related papers (2024-10-26T03:34:55Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or data with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Learning Active Subspaces and Discovering Important Features with Gaussian Radial Basis Functions Neural Networks [0.0]
We show that precious information is contained in the spectrum of the precision matrix that can be extracted once the training of the model is completed.
We conducted numerical experiments for regression, classification, and feature selection tasks.
Our results demonstrate that the proposed model not only yields attractive predictive performance compared to its competitors but also enables the discovery of important features.
arXiv Detail & Related papers (2023-07-11T09:54:30Z)
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
arXiv Detail & Related papers (2023-01-23T18:59:28Z)
- Interpretable models for extrapolation in scientific machine learning [0.0]
Complex machine learning algorithms often outperform simple regressions in interpolative settings.
We examine the trade-off between model performance and interpretability across a broad range of science and engineering problems.
arXiv Detail & Related papers (2022-12-16T19:33:28Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To combine the best of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Hessian-based toolbox for reliable and interpretable machine learning in physics [58.720142291102135]
We present a toolbox for interpretability and reliability that is agnostic of the model architecture.
It provides a notion of the influence of the input data on the prediction at a given test point, an estimation of the uncertainty of the model predictions, and an extrapolation score for the model predictions.
Our work opens the road to the systematic use of interpretability and reliability methods in ML applied to physics and, more generally, science.
arXiv Detail & Related papers (2021-08-04T16:32:59Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Significance tests of feature relevance for a blackbox learner [6.72450543613463]
We derive two consistent tests for the feature relevance of a blackbox learner.
The first evaluates a loss difference with perturbation on an inference sample (a generic version of this idea is sketched after this list).
The second splits the inference sample into two but does not require data perturbation.
arXiv Detail & Related papers (2021-03-02T00:59:19Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real-world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
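The two blackbox feature-relevance tests listed above are summarized only at a high level; the following is a hedged sketch of a generic perturbation-based loss-difference test in the same spirit, using a one-sided paired t-test on squared losses over a held-out inference sample. The model, data, and choice of test statistic are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 3))
y = 1.5 * X[:, 0] + rng.normal(scale=0.5, size=600)

# Split into a fitting sample and a held-out inference sample.
X_fit, X_inf = X[:400], X[400:]
y_fit, y_inf = y[:400], y[400:]
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
model.fit(X_fit, y_fit)

def relevance_pvalue(j):
    """One-sided paired test: does perturbing feature j increase the squared loss
    on the inference sample?"""
    loss_base = (y_inf - model.predict(X_inf)) ** 2
    Xp = X_inf.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # perturb feature j only
    loss_perm = (y_inf - model.predict(Xp)) ** 2
    res = ttest_rel(loss_perm, loss_base, alternative="greater")
    return res.pvalue

for j in range(X.shape[1]):
    print(f"feature x{j}: one-sided p-value = {relevance_pvalue(j):.4f}")
```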
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.