Individual Explanations in Machine Learning Models: A Case Study on
Poverty Estimation
- URL: http://arxiv.org/abs/2104.04148v2
- Date: Mon, 12 Apr 2021 03:06:05 GMT
- Title: Individual Explanations in Machine Learning Models: A Case Study on
Poverty Estimation
- Authors: Alfredo Carrillo, Luis F. Cantú, Luis Tejerina and Alejandro Noriega
- Abstract summary: Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanation methods.
And second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
- Score: 63.18666008322476
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning methods are being increasingly applied in sensitive societal
contexts, where decisions impact human lives. Hence it has become necessary to
build capabilities for providing easily-interpretable explanations of models'
predictions. Recently, a vast number of explanation methods have been
proposed in the academic literature. Unfortunately, to our knowledge, little has been
documented about the challenges machine learning practitioners most often face
when applying them in real-world scenarios. For example, a typical procedure
such as feature engineering can make some methodologies no longer applicable.
The present case study has two main objectives. First, to expose these
challenges and how they affect the use of relevant and novel explanation
methods. And second, to present a set of strategies that mitigate such
challenges, as faced when implementing explanation methods in a relevant
application domain -- poverty estimation and its use for prioritizing access to
social policies.
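The paper's core object, an individual (local) explanation, can be illustrated with a minimal sketch. The model, feature names, weights, and baseline below are invented for illustration only and are not taken from the paper; the sketch shows a simple perturbation-based attribution, where each feature of one individual's record is replaced by a baseline value and the resulting change in the model's output is credited to that feature.

```python
def predict(features):
    # Stand-in for any black-box poverty-score model:
    # a weighted sum over hypothetical household features.
    weights = {"income": -0.6, "household_size": 0.3, "education_years": -0.4}
    return sum(weights[name] * value for name, value in features.items())

def local_attribution(features, baseline):
    """Explain one individual's prediction by swapping each feature
    for its baseline value and measuring the change in output."""
    full_prediction = predict(features)
    contributions = {}
    for name in features:
        perturbed = dict(features)
        perturbed[name] = baseline[name]
        # Positive contribution: the feature pushed the score up
        # relative to the baseline; negative: it pushed it down.
        contributions[name] = full_prediction - predict(perturbed)
    return contributions

individual = {"income": 2.0, "household_size": 4, "education_years": 6}
baseline = {"income": 0.0, "household_size": 0, "education_years": 0}
print(local_attribution(individual, baseline))
```

For this linear stand-in model the attributions simply recover weight times feature value; for real non-linear models, feature interactions make such one-at-a-time perturbations only an approximation, which is part of why the literature the abstract refers to contains so many competing methods.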
Related papers
- Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability [0.0]
There is no globally interpretable way to understand how a model makes estimates.
It is difficult to understand whether causal machine learning models are functioning in ways that are fair.
This paper explores why transparency issues are a problem for causal machine learning in public policy evaluation applications.
arXiv Detail & Related papers (2023-10-20T02:48:29Z) - Textual Explanations and Critiques in Recommendation Systems [8.406549970145846]
This dissertation focuses on two fundamental challenges of addressing this need.
The first involves explanation generation in a scalable and data-driven manner.
The second challenge consists of making explanations actionable, which we refer to as critiquing.
arXiv Detail & Related papers (2022-05-15T11:59:23Z) - Knowledge Augmented Machine Learning with Applications in Autonomous
Driving: A Survey [37.84106999449108]
This work provides an overview of existing techniques and methods that combine data-driven models with existing knowledge.
The identified approaches are structured according to the categories knowledge integration, extraction and conformity.
In particular, we address the application of the presented methods in the field of autonomous driving.
arXiv Detail & Related papers (2022-05-10T07:25:32Z) - Learning Physical Concepts in Cyber-Physical Systems: A Case Study [72.74318982275052]
We provide an overview of the current state of research regarding methods for learning physical concepts in time series data.
We also analyze the most important methods from the current state of the art using the example of a three-tank system.
arXiv Detail & Related papers (2021-11-28T14:24:52Z) - When and How to Fool Explainable Models (and Humans) with Adversarial
Examples [1.439518478021091]
We explore the possibilities and limits of adversarial attacks for explainable machine learning models.
First, we extend the notion of adversarial examples to fit in explainable machine learning scenarios.
Next, we propose a comprehensive framework to study whether adversarial examples can be generated for explainable models.
arXiv Detail & Related papers (2021-07-05T11:20:55Z) - Individual Explanations in Machine Learning Models: A Survey for
Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z) - A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently, with comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - A Survey on Causal Inference [64.45536158710014]
Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics.
Various causal effect estimation methods for observational data have emerged.
arXiv Detail & Related papers (2020-02-05T21:35:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.