A Hierarchy of Limitations in Machine Learning
- URL: http://arxiv.org/abs/2002.05193v2
- Date: Sat, 29 Feb 2020 21:04:27 GMT
- Title: A Hierarchy of Limitations in Machine Learning
- Authors: Momin M. Malik
- Abstract summary: This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society.
Modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them.
Consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: "All models are wrong, but some are useful", wrote George E. P. Box (1979).
Machine learning has focused on the usefulness of probability models for
prediction in social systems, but is only now coming to grips with the ways in
which these models are wrong---and the consequences of those shortcomings. This
paper attempts a comprehensive, structured overview of the specific conceptual,
procedural, and statistical limitations of models in machine learning when
applied to society. Machine learning modelers themselves can use the described
hierarchy to identify possible failure points and think through how to address
them, and consumers of machine learning models can know what to question when
confronted with the decision about if, where, and how to apply machine
learning. The limitations go from commitments inherent in quantification
itself, through to showing how unmodeled dependencies can lead to
cross-validation being overly optimistic as a way of assessing model
performance.
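To make the cross-validation point concrete, here is a minimal sketch (illustrative assumptions throughout, not code from the paper): when observations share a latent group structure, standard k-fold cross-validation lets members of the same group appear on both sides of the split, so its score is optimistic relative to a group-aware split.

```python
# Minimal sketch (not from the paper): cross-validation looks overly
# optimistic when rows that share a latent group land in both the
# training and test folds. All parameters here are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
n_groups, per_group = 40, 25
groups = np.repeat(np.arange(n_groups), per_group)

# A group-level effect dominates the features: rows from the same group
# are strongly dependent, which standard k-fold CV ignores.
group_effect = rng.normal(size=(n_groups, 5))[groups]
X = group_effect + 0.3 * rng.normal(size=(n_groups * per_group, 5))
y = (group_effect[:, 0] + 0.3 * rng.normal(size=len(groups))) > 0

model = RandomForestClassifier(n_estimators=100, random_state=0)

# Standard k-fold: group members leak across the split, inflating scores.
naive = cross_val_score(model, X, y,
                        cv=KFold(n_splits=5, shuffle=True, random_state=0))
# Group-aware folds: each group is wholly in train or wholly in test.
grouped = cross_val_score(model, X, y, cv=GroupKFold(n_splits=5),
                          groups=groups)

print(f"naive k-fold accuracy:   {naive.mean():.3f}")
print(f"grouped k-fold accuracy: {grouped.mean():.3f}  (typically lower)")
```

The grouped estimate answers the question "how well does this model generalize to unseen groups?", which is usually the question that matters when the model is deployed on new people, places, or time periods.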
Related papers
- Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability
There is no globally interpretable way to understand how a model makes estimates.
It is difficult to understand whether causal machine learning models are functioning in ways that are fair.
This paper explores why transparency issues are a problem for causal machine learning in public policy evaluation applications.
arXiv Detail & Related papers (2023-10-20T02:48:29Z)
- Fairness Implications of Heterogeneous Treatment Effect Estimation with Machine Learning Methods in Policy-making
We argue that standard AI Fairness approaches for predictive machine learning are not suitable for all causal machine learning applications.
We argue that policy-making is best seen as a joint decision where the causal machine learning model usually only has indirect power.
arXiv Detail & Related papers (2023-09-02T03:06:14Z) - Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are misclassified by every model available (a toy illustration follows this list).
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Synthetic Model Combination: An Instance-wise Approach to Unsupervised Ensemble Learning
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
We are given access to a set of expert models and their predictions, alongside some limited information about the datasets used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z)
- Machine Learning with a Reject Option: A survey
This survey aims to provide an overview on machine learning with rejection.
We introduce the conditions leading to two types of rejection, ambiguity rejection and novelty rejection (a minimal sketch of both follows this list).
We review and categorize strategies to evaluate a model's predictive and rejective quality.
arXiv Detail & Related papers (2021-07-23T14:43:56Z)
- Recurrence-Aware Long-Term Cognitive Network for Explainable Pattern Classification
We propose an LTCN-based model for interpretable pattern classification of structured data.
Our method brings its own mechanism for providing explanations by quantifying the relevance of each feature in the decision process.
Our interpretable model obtains competitive performance when compared to the state-of-the-art white and black boxes.
arXiv Detail & Related papers (2021-07-07T18:14:50Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction (a simplified sketch of this idea follows this list).
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Sufficiently Accurate Model Learning for Planning
This paper introduces the constrained Sufficiently Accurate model learning approach.
It provides examples of such problems, and presents a theorem on how close some approximate solutions can be.
The approximate solution quality will depend on the function parameterization, loss and constraint function smoothness, and the number of samples in model learning.
arXiv Detail & Related papers (2021-02-11T16:27:31Z)
- Choice modelling in the age of machine learning -- discussion paper
Cross-pollination of machine learning models, techniques and practices could help overcome problems and limitations encountered in the current theory-driven paradigm.
Despite the potential benefits of using the advances of machine learning to improve choice modelling practices, the choice modelling field has been hesitant to embrace machine learning.
arXiv Detail & Related papers (2021-01-28T11:57:08Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples
The black-box nature of deep learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
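The following sketches are illustrative only, not code from the papers above. First, the notion of systemic failure from "Ecosystem-level Analysis of Deployed Machine Learning": a user fails systemically when every deployed model misclassifies them. The predictions, error rates, and shared-noise construction below are all assumptions for the toy example.

```python
# Toy illustration (assumed setup, not the paper's code): a user fails
# systemically when ALL deployed models misclassify them.
import numpy as np

rng = np.random.default_rng(1)
n_users, n_models = 1000, 3
y_true = rng.integers(0, 2, size=n_users)

# Correlated model errors: all models share one underlying error
# pattern, so mistakes concentrate on the same users.
shared_noise = rng.random(n_users) < 0.15
preds = np.array([np.where(shared_noise | (rng.random(n_users) < 0.05),
                           1 - y_true, y_true)
                  for _ in range(n_models)])

wrong = preds != y_true                    # shape (n_models, n_users)
systemic_rate = wrong.all(axis=0).mean()   # misclassified by ALL models
any_correct = (~wrong).any(axis=0).mean()  # at least one model is right
print(f"systemic failure rate: {systemic_rate:.3f}")
print(f"users served by at least one model: {any_correct:.3f}")
```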
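Next, a minimal sketch of the two rejection types surveyed in "Machine Learning with a Reject Option". The confidence threshold and the choice of IsolationForest as a novelty detector are illustrative assumptions, not the survey's recommendations.

```python
# Minimal sketch (assumptions, not the survey's code) of the two
# rejection types: ambiguity rejection (low confidence near the decision
# boundary) and novelty rejection (inputs unlike the training data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)
novelty_detector = IsolationForest(random_state=0).fit(X)

def predict_with_reject(x, confidence_threshold=0.75):
    """Return a class label, or the reason the input was rejected."""
    if novelty_detector.predict(x.reshape(1, -1))[0] == -1:
        return "reject: novelty"            # far from training data
    proba = clf.predict_proba(x.reshape(1, -1)).max()
    if proba < confidence_threshold:
        return "reject: ambiguity"          # too close to the boundary
    return int(clf.predict(x.reshape(1, -1))[0])

print(predict_with_reject(X[0]))             # usually a confident label
print(predict_with_reject(np.full(4, 10.)))  # far-out point -> novelty
```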
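Finally, a much-simplified illustration of counterfactual perturbation, related to "Beyond Trivial Counterfactual Explanations". That paper searches a disentangled latent space under a diversity-enforcing loss; the input-space gradient walk below only conveys the basic mechanics, with the model and step size assumed.

```python
# Hedged sketch of gradient-based counterfactual search: nudge an input
# until the model's prediction flips. This plain input-space version is
# an illustrative simplification, not the paper's latent-space method.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)
clf = LogisticRegression().fit(X, y)

x = X[0].copy()
target = 1 - clf.predict(x.reshape(1, -1))[0]  # aim for the other class

# For logistic regression, the gradient of the class-1 logit with
# respect to the input is simply the weight vector.
step = 0.1 * clf.coef_[0] * (1 if target == 1 else -1)
for _ in range(200):
    if clf.predict(x.reshape(1, -1))[0] == target:
        break
    x += step                                  # move toward the boundary

print("original prediction:      ", clf.predict(X[0].reshape(1, -1))[0])
print("counterfactual prediction:", clf.predict(x.reshape(1, -1))[0])
print("perturbation:", x - X[0])
```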