Fairness Implications of Heterogeneous Treatment Effect Estimation with
Machine Learning Methods in Policy-making
- URL: http://arxiv.org/abs/2309.00805v1
- Date: Sat, 2 Sep 2023 03:06:14 GMT
- Authors: Patrick Rehill and Nicholas Biddle
- Abstract summary: We argue that standard AI Fairness approaches for predictive machine learning are not suitable for all causal machine learning applications.
We argue that policy-making is best seen as a joint decision where the causal machine learning model usually only has indirect power.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Causal machine learning methods which flexibly generate heterogeneous
treatment effect estimates could be very useful tools for governments trying to
make and implement policy. However, as the critical artificial intelligence
literature has shown, governments must be very careful of unintended
consequences when using machine learning models. One way to try to protect
against unintended bad outcomes is with AI Fairness methods, which seek to
create machine learning models where sensitive variables like race or gender do
not influence outcomes. In this paper we argue that standard AI Fairness
approaches developed for predictive machine learning are not suitable for all
causal machine learning applications, because causal machine learning generally
(at least so far) uses modelling to inform a human who is the ultimate
decision-maker, while AI Fairness approaches assume a model that makes
decisions directly. We define these scenarios as indirect and direct
decision-making respectively and suggest that policy-making is best seen as a
joint decision where the causal machine learning model usually only has
indirect power. We lay out a definition of fairness for this scenario - a model
that provides the information a decision-maker needs to accurately make a value
judgement about just policy outcomes - and argue that the complexity of causal
machine learning models can make this difficult to achieve. The solution here
is not traditional AI Fairness adjustments, but careful modelling and awareness
of some of the decision-making biases that these methods might encourage, which
we describe.
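To make the kind of model under discussion concrete, the sketch below estimates heterogeneous treatment effects with a simple T-learner and then summarises the estimated effects by a sensitive attribute, the sort of output an indirect decision-maker might be handed. This is a minimal illustration on simulated data: the T-learner, the variable names, and the group comparison are assumptions for exposition, not the paper's own method or code.

```python
# Minimal sketch (not the paper's code): estimate heterogeneous treatment
# effects with a T-learner, then summarise them by a sensitive attribute.
# All data here is simulated; names and setting are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))                 # covariates for each person
sensitive = (X[:, 0] > 0).astype(int)       # stand-in protected attribute
T = rng.binomial(1, 0.5, size=n)            # randomised treatment assignment
tau = 1.0 + 0.5 * X[:, 1]                   # true effect varies across people
Y = X @ np.array([0.3, 0.2, 0.1, 0.0, 0.0]) + T * tau + rng.normal(size=n)

# T-learner: separate outcome models for treated and control units; the
# estimated effect for each unit is the difference in model predictions.
mu1 = GradientBoostingRegressor().fit(X[T == 1], Y[T == 1])
mu0 = GradientBoostingRegressor().fit(X[T == 0], Y[T == 0])
cate = mu1.predict(X) - mu0.predict(X)

# The kind of breakdown an indirect decision-maker might be shown: are
# estimated effects systematically different across a sensitive group?
print(f"mean CATE overall: {cate.mean():.3f}")
print(f"mean CATE, group 1 vs group 0: "
      f"{cate[sensitive == 1].mean():.3f} vs {cate[sensitive == 0].mean():.3f}")
```

Because the model, not the analyst, decides how effects vary with the covariates, a breakdown like this is where the paper's fairness concern bites: the human decision-maker must judge whether a group difference reflects a genuine difference in treatment effects or a modelling artefact.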
Related papers
- Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability
There is no globally interpretable way to understand how a model makes estimates.
It is difficult to understand whether causal machine learning models are functioning in ways that are fair.
This paper explores why transparency issues are a problem for causal machine learning in public policy evaluation applications.
arXiv Detail & Related papers (2023-10-20T02:48:29Z) - Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
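As a toy illustration of that ecosystem-level view, the sketch below checks whether any user is misclassified by every model in a set of deployed models. The random predictions stand in for real model outputs and are assumptions for exposition, not data from the paper.

```python
# Toy sketch (not the paper's code) of ecosystem-level "systemic failure":
# users who are misclassified by *every* model deployed in a context.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_users = 3, 1000
y_true = rng.integers(0, 2, size=n_users)              # ground-truth labels
y_pred = rng.integers(0, 2, size=(n_models, n_users))  # one row per model

wrong = y_pred != y_true        # error matrix: models x users
systemic = wrong.all(axis=0)    # wrong under every deployed model
print(f"{systemic.mean():.1%} of users are misclassified by all models")
```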
arXiv Detail & Related papers (2023-07-12T01:11:52Z) - Distributional Instance Segmentation: Modeling Uncertainty and High
Confidence Predictions with Latent-MaskRCNN [77.0623472106488]
In this paper, we explore a class of distributional instance segmentation models using latent codes.
For robotic picking applications, we propose a confidence mask method to achieve the high precision necessary.
We show that our method can significantly reduce critical errors in robotic systems, including on our newly released dataset of ambiguous scenes.
arXiv Detail & Related papers (2023-05-03T05:57:29Z) - Physics-Inspired Interpretability Of Machine Learning Models [0.0]
The ability to explain decisions made by machine learning models remains one of the most significant hurdles towards widespread adoption of AI.
We propose a novel approach to identify relevant features of the input data, inspired by methods from the energy landscapes field.
arXiv Detail & Related papers (2023-04-05T11:35:17Z) - Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of
Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models leveraging techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z) - Randomized Classifiers vs Human Decision-Makers: Trustworthy AI May Have
to Act Randomly and Society Seems to Accept This [0.8889304968879161]
We feel that, akin to human decisions, the judgments of artificial agents should be grounded in some moral principles.
Yet a decision-maker can only make truly ethical (based on any ethical theory) and fair (according to any notion of fairness) decisions if full information on all the relevant factors on which the decision is based is available at the time of decision-making.
arXiv Detail & Related papers (2021-11-15T05:39:02Z) - Individual Explanations in Machine Learning Models: A Survey for
Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Do the Machine Learning Models on a Crowd Sourced Platform Exhibit Bias?
An Empirical Study on Model Fairness [7.673007415383724]
We have created a benchmark of 40 top-rated models from Kaggle used for 5 different tasks.
We have applied 7 mitigation techniques on these models and analyzed the fairness, mitigation results, and impacts on performance.
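For context, the snippet below computes one widely used fairness metric of the kind such audits report, the demographic parity difference. The data, the decision threshold, and the choice of metric are assumptions for illustration, not details taken from the benchmark.

```python
# Hedged sketch: demographic parity difference, one common fairness
# metric reported in model audits. Data and threshold are invented here.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.uniform(size=2000)          # a model's predicted scores
group = rng.integers(0, 2, size=2000)    # binary sensitive attribute
decision = scores > 0.5                  # positive-decision rule

rate_0 = decision[group == 0].mean()     # positive rate per group
rate_1 = decision[group == 1].mean()
print(f"demographic parity difference: {abs(rate_1 - rate_0):.3f}")
```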
arXiv Detail & Related papers (2020-05-21T23:35:53Z) - Guided Uncertainty-Aware Policy Optimization: Combining Learning and
Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
In contrast, reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but they are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z) - A Hierarchy of Limitations in Machine Learning [0.0]
This paper attempts a comprehensive, structured overview of the specific conceptual, procedural, and statistical limitations of models in machine learning when applied to society.
Modelers themselves can use the described hierarchy to identify possible failure points and think through how to address them.
Consumers of machine learning models can know what to question when confronted with the decision about if, where, and how to apply machine learning.
arXiv Detail & Related papers (2020-02-12T19:39:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.