Explainable Machine Learning for Public Policy: Use Cases, Gaps, and
Research Directions
- URL: http://arxiv.org/abs/2010.14374v2
- Date: Thu, 4 Mar 2021 23:52:46 GMT
- Title: Explainable Machine Learning for Public Policy: Use Cases, Gaps, and
Research Directions
- Authors: Kasun Amarasinghe, Kit Rodolfa, Hemank Lamba, Rayid Ghani
- Abstract summary: We develop a taxonomy of explainability use-cases within public policy problems.
We define the end-users of explanations and the specific goals explainability has to fulfill.
We map existing work to these use-cases, identify gaps, and propose research directions to fill those gaps.
- Score: 6.68777133358979
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainability is a crucial requirement for the effectiveness and adoption of Machine Learning (ML) models supporting decisions in high-stakes public policy areas such as health, criminal justice, education, and employment. While the field of explainable ML has expanded in recent years, much of this work has not taken real-world needs into account. A majority of proposed methods use benchmark datasets with generic explainability goals, without clear use-cases or intended end-users. As a result, the applicability and effectiveness of this large body of theoretical and methodological work in real-world applications remain unclear. This paper focuses on filling this void for the domain of public policy. First, we develop a taxonomy of explainability use-cases within public policy problems; second, for each use-case, we define the end-users of explanations and the specific goals explainability has to fulfill; third, we map existing work to these use-cases, identify gaps, and propose research directions to fill those gaps in order to have a practical societal impact through ML.
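To make the structure of such a taxonomy concrete, here is a minimal sketch of how use-cases, end-users, and explainability goals could be organized in code; the class, field names, and example entries are illustrative assumptions, not the paper's actual taxonomy.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplainabilityUseCase:
    """One entry in a use-case taxonomy (illustrative schema, not the paper's)."""
    name: str                  # e.g. "individual decision support"
    end_users: List[str]       # who consumes the explanation
    goals: List[str]           # what the explanation has to accomplish
    existing_methods: List[str] = field(default_factory=list)
    gaps: List[str] = field(default_factory=list)

# Hypothetical entries, loosely inspired by public-policy settings.
taxonomy = [
    ExplainabilityUseCase(
        name="individual decision support",
        end_users=["caseworker", "frontline staff"],
        goals=["justify a recommended intervention", "surface actionable factors"],
    ),
    ExplainabilityUseCase(
        name="model debugging and validation",
        end_users=["ML developer", "policy analyst"],
        goals=["detect data or label errors", "check reliance on spurious features"],
    ),
]

for use_case in taxonomy:
    print(f"{use_case.name}: users={use_case.end_users}, goals={use_case.goals}")
```

Mapping existing methods and open gaps onto each entry corresponds to the paper's third step; the last two fields only suggest where that information would attach.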
Related papers
- Towards Coarse-to-Fine Evaluation of Inference Efficiency for Large Language Models [95.96734086126469]
Large language models (LLMs) can serve as assistants that help users accomplish their jobs, and they also support the development of advanced applications.
For LLMs to be widely applied, inference efficiency is an essential concern, and it has been widely studied in existing work.
We perform a detailed coarse-to-fine analysis of the inference performance of various code libraries.
arXiv Detail & Related papers (2024-04-17T15:57:50Z)
- Techniques for Measuring the Inferential Strength of Forgetting Policies [0.3069335774032178]
This paper defines loss functions for measuring changes in inferential strength based on intuitions from model counting and probability theory.
Although the focus is on forgetting, the results are much more general and should have wider application to other areas.
arXiv Detail & Related papers (2024-04-03T04:50:43Z)
- From Understanding to Utilization: A Survey on Explainability for Large Language Models [27.295767173801426]
This survey underscores the imperative for increased explainability in Large Language Models (LLMs).
Our focus is primarily on pre-trained Transformer-based LLMs, which pose distinctive interpretability challenges due to their scale and complexity.
When considering the utilization of explainability, we explore several compelling methods that concentrate on model editing, control generation, and model enhancement.
arXiv Detail & Related papers (2024-01-23T16:09:53Z)
- Knowledge Plugins: Enhancing Large Language Models for Domain-Specific Recommendations [50.81844184210381]
We propose a general paradigm that augments large language models with DOmain-specific KnowledgE to enhance their performance on practical applications, namely DOKE.
This paradigm relies on a domain knowledge extractor, working in three steps: 1) preparing effective knowledge for the task; 2) selecting the knowledge for each specific sample; and 3) expressing the knowledge in an LLM-understandable way.
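A minimal sketch of what this three-step flow could look like in code is shown below; the function names, the word-overlap selection, and the prompt format are assumptions made for illustration and are not taken from the DOKE paper.

```python
from typing import Callable, List

def prepare_knowledge(corpus: List[str]) -> List[str]:
    """Step 1 (illustrative): assemble a pool of task-relevant knowledge snippets."""
    return [doc.strip() for doc in corpus if doc.strip()]

def select_knowledge(sample: str, pool: List[str], top_k: int = 3) -> List[str]:
    """Step 2 (illustrative): choose snippets for this sample via naive word overlap."""
    def overlap(snippet: str) -> int:
        return len(set(sample.lower().split()) & set(snippet.lower().split()))
    return sorted(pool, key=overlap, reverse=True)[:top_k]

def express_knowledge(sample: str, snippets: List[str]) -> str:
    """Step 3 (illustrative): render the selected knowledge as text an LLM can consume."""
    knowledge = "\n".join(f"- {s}" for s in snippets)
    return f"Relevant domain knowledge:\n{knowledge}\n\nQuestion: {sample}"

def augment_and_query(sample: str, corpus: List[str], llm: Callable[[str], str]) -> str:
    """Chain the three steps and pass the augmented prompt to any LLM callable."""
    snippets = select_knowledge(sample, prepare_knowledge(corpus))
    return llm(express_knowledge(sample, snippets))
```

Any callable that maps a prompt string to a completion can stand in for the LLM here, which keeps the sketch independent of a specific model API.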
arXiv Detail & Related papers (2023-11-16T07:09:38Z)
- Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability [0.0]
There is no globally interpretable way to understand how a model makes estimates.
It is difficult to understand whether causal machine learning models are functioning in ways that are fair.
This paper explores why transparency issues are a problem for causal machine learning in public policy evaluation applications.
arXiv Detail & Related papers (2023-10-20T02:48:29Z)
- Unveiling the Potential of Counterfactuals Explanations in Employability [0.0]
We show how counterfactuals are applied to employability-related problems involving machine learning algorithms.
The use cases presented go beyond the mere application of counterfactuals as explanations.
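As a concrete illustration of the underlying idea, the toy sketch below searches for a counterfactual explanation of a binary employability-style classifier; the synthetic features, the logistic regression model, and the brute-force search are illustrative assumptions, not the method used in that paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: the two features could stand for, say, experience and a skill score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 1 = "hired" in this toy setup
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.05, max_radius=3.0):
    """Brute-force search for the smallest perturbation that flips the prediction."""
    original = model.predict([x])[0]
    best, best_dist = None, np.inf
    for dx in np.arange(-max_radius, max_radius, step):
        for dy in np.arange(-max_radius, max_radius, step):
            candidate = x + np.array([dx, dy])
            if model.predict([candidate])[0] != original:
                dist = float(np.hypot(dx, dy))
                if dist < best_dist:
                    best, best_dist = candidate, dist
    return best

applicant = np.array([-1.0, -0.5])          # predicted "not hired"
print("counterfactual:", counterfactual(applicant, model))
```

The result is the nearest point on the search grid at which the model's decision changes, i.e. the smallest feature change that would flip the outcome for this applicant.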
arXiv Detail & Related papers (2023-05-17T09:13:53Z)
- DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies [116.12670064963625]
We develop an off-policy algorithm called distribution-conditioned reinforcement learning (DisCo RL) to efficiently learn contextual policies.
We evaluate DisCo RL on a variety of robot manipulation tasks and find that it significantly outperforms prior methods on tasks that require generalization to new goal distributions.
arXiv Detail & Related papers (2021-04-23T16:51:58Z)
- Individual Explanations in Machine Learning Models: A Case Study on Poverty Estimation [63.18666008322476]
Machine learning methods are being increasingly applied in sensitive societal contexts.
The present case study has two main objectives. First, to expose these challenges and how they affect the use of relevant and novel explanation methods.
Second, to present a set of strategies that mitigate such challenges, as faced when implementing explanation methods in a relevant application domain.
arXiv Detail & Related papers (2021-04-09T01:54:58Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial amount of methods for providing interpretable explanations to machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Generalized Inverse Planning: Learning Lifted non-Markovian Utility for Generalizable Task Representation [83.55414555337154]
In this work, we study learning such utility from human demonstrations.
We propose a new quest, Generalized Inverse Planning, for utility learning in this domain.
We outline a computational framework, Maximum Entropy Inverse Planning (MEIP), that learns non-Markovian utility and associated concepts in a generative manner.
arXiv Detail & Related papers (2020-11-12T21:06:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.