A Validity Perspective on Evaluating the Justified Use of Data-driven
Decision-making Algorithms
- URL: http://arxiv.org/abs/2206.14983v2
- Date: Tue, 14 Feb 2023 15:24:44 GMT
- Authors: Amanda Coston, Anna Kawakami, Haiyi Zhu, Ken Holstein, and Hoda
Heidari
- Abstract summary: We apply the lens of validity to re-examine challenges in problem formulation and data issues that jeopardize the justifiability of using predictive algorithms.
We demonstrate how these validity considerations could distill into a series of high-level questions intended to promote and document reflections on the legitimacy of the predictive task and the suitability of the data.
- Score: 14.96024118861361
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research increasingly calls into question the
appropriateness of using predictive tools in complex, real-world tasks. While a
growing body of work has
explored ways to improve value alignment in these tools, comparatively less
work has centered concerns around the fundamental justifiability of using these
tools. This work seeks to center validity considerations in deliberations
around whether and how to build data-driven algorithms in high-stakes domains.
Toward this end, we translate key concepts from validity theory to predictive
algorithms. We apply the lens of validity to re-examine common challenges in
problem formulation and data issues that jeopardize the justifiability of using
predictive algorithms and connect these challenges to the social science
discourse around validity. Our interdisciplinary exposition clarifies how these
concepts apply to algorithmic decision making contexts. We demonstrate how
these validity considerations could distill into a series of high-level
questions intended to promote and document reflections on the legitimacy of the
predictive task and the suitability of the data.
Related papers
- Relevance-aware Algorithmic Recourse [3.6141428739228894]
Algorithmic recourse emerges as a tool for clarifying decisions made by predictive models.
Current algorithmic recourse methods treat all domain values equally, which is unrealistic in real-world settings.
We propose a novel framework, Relevance-Aware Algorithmic Recourse (RAAR), that leverages the concept of relevance in applying algorithmic recourse to regression tasks.
arXiv Detail & Related papers (2024-05-29T13:25:49Z)
- Deep Learning-Based Object Pose Estimation: A Comprehensive Survey [73.74933379151419]
We discuss the recent advances in deep learning-based object pose estimation.
Our survey also covers multiple input data modalities, degrees-of-freedom of output poses, object properties, and downstream tasks.
arXiv Detail & Related papers (2024-05-13T14:44:22Z)
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- A Dataset for the Validation of Truth Inference Algorithms Suitable for Online Deployment [76.04306818209753]
We introduce a substantial crowdsourcing annotation dataset collected from a real-world crowdsourcing platform.
This dataset comprises approximately two thousand workers, one million tasks, and six million annotations.
We evaluate the effectiveness of several representative truth inference algorithms on this dataset.
arXiv Detail & Related papers (2024-03-10T16:00:41Z)
- On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms [58.93352076927003]
There have been severe concerns over the trustworthiness of AI technologies.
Machine and deep learning algorithms depend heavily on the data used during their development.
We propose a framework to evaluate the datasets through a responsible rubric.
arXiv Detail & Related papers (2023-10-24T14:01:53Z)
- Joint Communication and Computation Framework for Goal-Oriented Semantic Communication with Distortion Rate Resilience [13.36706909571975]
We use the rate-distortion theory to analyze distortions induced by communication and semantic compression.
We can preemptively estimate the empirical accuracy of AI tasks, making the goal-oriented semantic communication problem feasible.
arXiv Detail & Related papers (2023-09-26T00:26:29Z)
- A Human-Centered Review of Algorithms in Decision-Making in Higher Education [16.578096382702597]
We reviewed an extensive corpus of papers proposing algorithms for decision-making in higher education.
We found that the models are trending towards deep learning, and increased use of student personal data and protected attributes.
Despite the associated decrease in interpretability and explainability, current development efforts predominantly fail to incorporate human-centered lenses.
arXiv Detail & Related papers (2023-02-12T02:30:50Z)
- Evaluation Methods and Measures for Causal Learning Algorithms [33.07234268724662]
We focus on the two fundamental causal-inference tasks and causality-aware machine learning tasks.
The survey seeks to bring to the forefront the urgency of developing publicly available benchmarks and consensus-building standards for causal learning evaluation with observational data.
arXiv Detail & Related papers (2022-02-07T00:24:34Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- URSABench: Comprehensive Benchmarking of Approximate Bayesian Inference Methods for Deep Neural Networks [15.521736934292354]
Deep learning methods continue to improve in predictive accuracy on a wide range of application domains.
Recent advances in approximate Bayesian inference hold significant promise for addressing these concerns.
We describe initial work on the development of URSABench, an open-source suite of benchmarking tools for comprehensive assessment of approximate Bayesian inference methods.
arXiv Detail & Related papers (2020-07-08T22:51:28Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.