Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders
- URL: http://arxiv.org/abs/2205.08928v1
- Date: Wed, 18 May 2022 13:49:55 GMT
- Title: Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders
- Authors: Logan Stapleton, Min Hun Lee, Diana Qing, Marya Wright, Alexandra Chouldechova, Kenneth Holstein, Zhiwei Steven Wu, Haiyi Zhu
- Abstract summary: We conducted a set of seven design workshops with 35 stakeholders who have been impacted by the child welfare system.
We found that participants worried current PRMs perpetuate or exacerbate existing problems in child welfare.
Participants suggested new ways to use data and data-driven tools to better support impacted communities.
- Score: 89.6319385008397
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Child welfare agencies across the United States are turning to data-driven
predictive technologies (commonly called predictive analytics) which use
government administrative data to assist workers' decision-making. While some
prior work has explored impacted stakeholders' concerns with current uses of
data-driven predictive risk models (PRMs), less work has asked stakeholders
whether such tools ought to be used in the first place. In this work, we
conducted a set of seven design workshops with 35 stakeholders who have been
impacted by the child welfare system or who work in it to understand their
beliefs and concerns around PRMs, and to engage them in imagining new uses of
data and technologies in the child welfare system. We found that participants
worried current PRMs perpetuate or exacerbate existing problems in child
welfare. Participants suggested new ways to use data and data-driven tools to
better support impacted communities and suggested paths to mitigate possible
harms of these tools. Participants also suggested low-tech or no-tech
alternatives to PRMs to address problems in child welfare. Our study sheds
light on how researchers and designers can work in solidarity with impacted
communities, possibly to circumvent or oppose child welfare agencies.
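To ground the term, the sketch below shows the basic shape of a PRM as described here: a classifier trained on administrative records that surfaces a banded risk score to a worker. The feature names, synthetic data, and 1-20 score banding are illustrative assumptions, not the model of any actual agency.

```python
# Minimal illustrative sketch of a predictive risk model (PRM): a
# classifier trained on (hypothetical) administrative records whose
# output is shown to a screener as a coarse risk score. Features and
# the 1-20 banding are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical administrative features, e.g. prior referrals, child age,
# months of public-benefit receipt (purely synthetic values here).
X = rng.normal(size=(1000, 3))
y = rng.integers(0, 2, size=1000)  # hypothetical past-outcome label

model = LogisticRegression().fit(X, y)

# Convert predicted probabilities into a banded 1-20 risk score,
# the kind of summary number a worker might see on screen.
proba = model.predict_proba(X[:5])[:, 1]
scores = np.clip(np.ceil(proba * 20), 1, 20).astype(int)
print(scores)
```

Participants' worry, per the abstract, attaches to exactly this pipeline: the score compresses administrative data that may already reflect the existing problems in child welfare that they fear such tools perpetuate.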
Related papers
- LLM-based Interaction for Content Generation: A Case Study on the Perception of Employees in an IT department [85.1523466539595]
This paper presents a questionnaire survey identifying the intention of employees at an IT company to use generative tools.
Our results indicate moderate acceptability of generative tools, although the more useful a tool is perceived to be, the higher the intention to use it appears.
Our analyses suggest that the frequency of use of generative tools is likely to be a key factor in understanding how employees perceive these tools in the context of their work.
arXiv Detail & Related papers (2023-04-18T15:35:43Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
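The summary does not specify D-BIAS's simulation method, so the following is only a minimal sketch of the general idea, assuming a linear structural causal model with hypothetical variables: the user weakens or deletes a biased edge, and a new dataset is simulated from the edited graph.

```python
# Illustrative sketch of weakening a biased causal edge and simulating
# a debiased dataset. Assumes a linear structural causal model; this is
# NOT the method D-BIAS actually uses, just the general idea.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

def simulate(gender_to_income: float):
    """Simulate from a two-edge causal graph: gender -> income
    (user-editable weight) and education -> income (fixed weight)."""
    gender = rng.integers(0, 2, size=n)
    education = rng.normal(size=n)
    income = gender_to_income * gender + education + rng.normal(scale=0.5, size=n)
    return gender, income

for weight in (0.8, 0.0):  # user weakens the gender -> income edge to 0
    gender, income = simulate(weight)
    gap = income[gender == 1].mean() - income[gender == 0].mean()
    print(f"edge weight {weight}: mean income gap {gap:.2f}")
```

Deleting the edge drives the group gap toward zero in the simulated data, which is the kind of before/after comparison an auditor would inspect.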
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Mutual Information Scoring: Increasing Interpretability in Categorical Clustering Tasks with Applications to Child Welfare Data [6.651036327739043]
Youth in the American foster care system are significantly more likely than their peers to face a number of negative life outcomes.
Data on these youth have the potential to provide insights that can help identify ways to improve their path towards a better life.
The present work proposes a novel, prescriptive approach to using these data to provide insights about both data biases and the systems and youth they track.
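The exact scoring used in the paper is not given in this summary; the sketch below shows one plausible reading using scikit-learn's normalized mutual information, scoring each categorical feature by how much it explains the cluster assignments. The feature names and data are hypothetical.

```python
# Sketch of interpreting a categorical clustering with mutual
# information: score each feature by the information it shares with
# the cluster labels. The paper's exact scoring may differ; this shows
# the general technique on synthetic data.
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
n = 500

clusters = rng.integers(0, 3, size=n)  # hypothetical cluster assignments
features = {
    "placement_type": (clusters + rng.integers(0, 2, size=n)) % 4,  # correlated
    "region": rng.integers(0, 5, size=n),                           # independent
}

for name, values in features.items():
    nmi = normalized_mutual_info_score(clusters, values)
    print(f"{name}: NMI with clusters = {nmi:.2f}")
```

A high score flags a feature worth examining, whether it reflects genuine structure or the kind of data bias the paper aims to surface.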
arXiv Detail & Related papers (2022-08-03T01:11:09Z)
- A Conceptual Framework for Using Machine Learning to Support Child Welfare Decisions [5.1760162371179]
This paper describes a conceptual framework for machine learning to support child welfare decisions.
Ethical considerations, stakeholder engagement, and avoidance of common pitfalls underpin the framework's impact and success.
arXiv Detail & Related papers (2022-07-12T21:42:22Z)
- Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support [37.03030554731032]
We present findings from a series of interviews at a child welfare agency, examining how workers currently make AI-assisted child maltreatment screening decisions.
We observe how workers' reliance upon the ADS is guided by (1) their knowledge of rich, contextual information beyond what the AI model captures, (2) their beliefs about the ADS's capabilities and limitations relative to their own, and (3) their awareness of misalignments between algorithmic predictions and their own decision-making objectives.
arXiv Detail & Related papers (2022-04-05T16:10:49Z)
- Proposing an Interactive Audit Pipeline for Visual Privacy Research [0.0]
We argue for the use of fairness analyses to discover bias and fairness issues in systems, assert the need for a responsible human-over-the-loop, and reflect on the need to explore research agendas that may have harmful societal impacts.
Our goal is to provide a systematic analysis of the machine learning pipeline for visual privacy and bias issues.
arXiv Detail & Related papers (2021-11-07T01:51:43Z)
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Robots in the Danger Zone: Exploring Public Perception through Engagement [4.051559940977775]
Public perception of Robotics and Artificial Intelligence (RAI) is important for acceptance, uptake, government regulation, and research funding.
Recent research has shown that the public's understanding of RAI can be negative or inaccurate.
We describe the first iteration of a high-throughput, in-person public engagement activity.
arXiv Detail & Related papers (2020-04-01T20:10:53Z)
- A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores [85.12096045419686]
We study the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions.
We first show that humans do alter their behavior when the tool is deployed.
We then show that humans are less likely to adhere to the machine's recommendation when the displayed score is an incorrect estimate of risk.
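As a concrete reading of this finding, the sketch below computes adherence rates split by whether the displayed score was an erroneous estimate of risk; the data, column names, and error threshold are all hypothetical.

```python
# Illustrative check of the finding: adherence to the machine's
# recommendation, split by whether the displayed score was an accurate
# estimate of risk. Data and column names are hypothetical.
import pandas as pd

log = pd.DataFrame({
    "displayed_score": [18, 4, 15, 2, 19, 3],
    "true_risk_score": [17, 5, 6, 14, 18, 4],   # e.g., an unshown corrected estimate
    "human_followed":  [1, 1, 0, 0, 1, 1],      # 1 = worker adhered to recommendation
})

# Call the displayed score "erroneous" if it is far from the true risk
# (the threshold of 5 is an arbitrary illustrative choice).
log["score_erroneous"] = (log.displayed_score - log.true_risk_score).abs() > 5

print(log.groupby("score_erroneous")["human_followed"].mean())
```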
arXiv Detail & Related papers (2020-02-19T07:27:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.