Modeling Generalization in Machine Learning: A Methodological and
Computational Study
- URL: http://arxiv.org/abs/2006.15680v1
- Date: Sun, 28 Jun 2020 19:06:16 GMT
- Title: Modeling Generalization in Machine Learning: A Methodological and
Computational Study
- Authors: Pietro Barbiero and Giovanni Squillero and Alberto Tonda
- Abstract summary: We use the concept of the convex hull of the training data in assessing machine learning generalization.
We observe unexpectedly weak associations between the generalization ability of machine learning models and all metrics related to dimensionality.
- Score: 0.8057006406834467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning becomes more and more available to the general public,
theoretical questions are turning into pressing practical issues. Possibly, one
of the most relevant concerns is the assessment of our confidence in trusting
machine learning predictions. In many real-world cases, it is of utmost
importance to estimate the capabilities of a machine learning algorithm to
generalize, i.e., to provide accurate predictions on unseen data, depending on
the characteristics of the target problem. In this work, we perform a
meta-analysis of 109 publicly-available classification data sets, modeling
machine learning generalization as a function of a variety of data set
characteristics, ranging from number of samples to intrinsic dimensionality,
from class-wise feature skewness to $F1$ evaluated on test samples falling
outside the convex hull of the training set. Experimental results demonstrate
the relevance of using the concept of the convex hull of the training data in
assessing machine learning generalization, by emphasizing the difference
between interpolated and extrapolated predictions. Besides several predictable
correlations, we observe unexpectedly weak associations between the
generalization ability of machine learning models and all metrics related to
dimensionality, thus challenging the common assumption that the \textit{curse
of dimensionality} might impair generalization in machine learning.
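The key measurement above, $F1$ computed separately on test samples inside and outside the convex hull of the training set, can be checked with a linear-programming membership test. The sketch below is illustrative only: the dataset, classifier, and LP-based hull check are assumptions for demonstration, not the authors' exact implementation.

```python
# Minimal sketch (not the paper's code): flag each test sample as interpolated
# (inside the convex hull of the training set) or extrapolated (outside it),
# then compute F1 separately on the two groups.
import numpy as np
from scipy.optimize import linprog
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def in_convex_hull(x, points):
    # x lies in the convex hull of `points` iff there exists lambda >= 0
    # with sum(lambda) = 1 and points.T @ lambda = x (an LP feasibility problem).
    n = points.shape[0]
    A_eq = np.vstack([points.T, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.status == 0  # status 0: a feasible solution was found


# Illustrative dataset and classifier (assumptions, not the paper's benchmark suite).
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

inside = np.array([in_convex_hull(x, X_tr) for x in X_te])
for name, mask in [("interpolated (inside hull)", inside),
                   ("extrapolated (outside hull)", ~inside)]:
    if mask.any():
        print(name, "F1 =", round(f1_score(y_te[mask], y_pred[mask], average="macro"), 3))
```

The LP formulation is used here because Delaunay/Qhull-based hull checks become impractical beyond a handful of dimensions; whether this matches the authors' exact procedure is an assumption.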
Related papers
- Fair Mixed Effects Support Vector Machine [0.0]
Fairness in machine learning aims to mitigate biases present in the training data and model imperfections.
This is achieved by preventing the model from making decisions based on sensitive characteristics like ethnicity or sexual orientation.
We present a fair mixed effects support vector machine algorithm that can handle both problems simultaneously.
arXiv Detail & Related papers (2024-05-10T12:25:06Z) - Machine Learning vs Deep Learning: The Generalization Problem [0.0]
This study investigates the comparative abilities of traditional machine learning (ML) models and deep learning (DL) algorithms in terms of extrapolation.
We present an empirical analysis where both ML and DL models are trained on an exponentially growing function and then tested on values outside the training domain (a minimal sketch of this setup appears after the related-papers list below).
Our findings suggest that deep learning models possess inherent capabilities to generalize beyond the training scope.
arXiv Detail & Related papers (2024-03-03T21:42:55Z) - Understanding Generalization of Federated Learning via Stability:
Heterogeneity Matters [1.4502611532302039]
Generalization performance is a key metric in evaluating machine learning models when applied to real-world applications.
arXiv Detail & Related papers (2023-06-06T16:12:35Z) - Assessing the Generalizability of a Performance Predictive Model [0.6070952062639761]
We propose a workflow to estimate the generalizability of a predictive model for algorithm performance.
The results show that generalizability patterns in the landscape feature space are reflected in the performance space.
arXiv Detail & Related papers (2023-05-31T12:50:44Z) - Matched Machine Learning: A Generalized Framework for Treatment Effect
Inference With Learned Metrics [87.05961347040237]
We introduce Matched Machine Learning, a framework that combines the flexibility of machine learning black boxes with the interpretability of matching.
Our framework uses machine learning to learn an optimal metric for matching units and estimating outcomes.
We show empirically that instances of Matched Machine Learning perform on par with black-box machine learning methods and better than existing matching methods for similar problems.
arXiv Detail & Related papers (2023-04-03T19:32:30Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model
Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - Amortized Inference for Causal Structure Learning [72.84105256353801]
Learning causal structure poses a search problem that typically involves evaluating structures using a score or independence test.
We train a variational inference model to predict the causal structure from observational/interventional data.
Our models exhibit robust generalization capabilities under substantial distribution shift.
arXiv Detail & Related papers (2022-05-25T17:37:08Z) - Discriminative, Generative and Self-Supervised Approaches for
Target-Agnostic Learning [8.666667951130892]
Generative and self-supervised learning models are shown to perform well at the task.
Our theorem derived from pseudo-likelihood theory also shows that these approaches are related when inferring a joint distribution model.
arXiv Detail & Related papers (2020-11-12T15:03:40Z) - Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z) - Machine learning for causal inference: on the use of cross-fit
estimators [77.34726150561087]
Doubly-robust cross-fit estimators have been proposed to yield better statistical properties.
We conducted a simulation study to assess the performance of several estimators for the average causal effect (ACE).
When used with machine learning, the doubly-robust cross-fit estimators substantially outperformed all of the other estimators in terms of bias, variance, and confidence interval coverage.
arXiv Detail & Related papers (2020-04-21T23:09:55Z)
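As a companion to the "Machine Learning vs Deep Learning: The Generalization Problem" entry above, here is a minimal sketch of that extrapolation setup; the model choices, ranges, and error metric are illustrative assumptions, not the cited paper's code.

```python
# Illustrative sketch: train a tree-based ML model and a small neural network
# on an exponentially growing function, then evaluate both outside the
# training domain (extrapolation).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 3.0, size=(500, 1))      # training domain: [0, 3]
y_train = np.exp(x_train).ravel()
x_test = np.linspace(3.5, 5.0, 50).reshape(-1, 1)   # outside the training domain
y_test = np.exp(x_test).ravel()

rf = RandomForestRegressor(random_state=0).fit(x_train, y_train)
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                  random_state=0).fit(x_train, y_train)

for name, model in [("random forest", rf), ("neural network", nn)]:
    mae = np.mean(np.abs(model.predict(x_test) - y_test))
    print(f"{name}: mean absolute error outside the training domain = {mae:.1f}")
```

Note that a tree ensemble can only output values within the range of its training targets, whereas a neural network can produce values outside that range; how closely either model tracks the true exponential is exactly what the cited study measures.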
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.