A Conceptual Framework for Using Machine Learning to Support Child
Welfare Decisions
- URL: http://arxiv.org/abs/2207.05855v1
- Date: Tue, 12 Jul 2022 21:42:22 GMT
- Title: A Conceptual Framework for Using Machine Learning to Support Child
Welfare Decisions
- Authors: Ka Ho Brian Chor, Kit T. Rodolfa, Rayid Ghani
- Abstract summary: This paper describes a conceptual framework for machine learning to support child welfare decisions.
Ethical considerations, stakeholder engagement, and avoidance of common pitfalls underpin the framework's impact and success.
- Score: 5.1760162371179
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human services systems make key decisions that impact individuals in
society. The U.S. child welfare system makes such decisions, from screening in
hotline reports of suspected abuse or neglect for child protective
investigation, to placing children in foster care, to returning children to
permanent home settings. These complex and impactful decisions about children's
lives rely on the judgment of child welfare decision-makers. Child welfare
agencies have been exploring ways to support these decisions with empirical,
data-informed methods that include machine learning (ML). This paper describes
a conceptual framework for ML to support child welfare decisions. The ML
framework guides how child welfare agencies might conceptualize a target
problem that ML can solve; vet available administrative data for building ML;
formulate and develop ML specifications that mirror relevant populations and
interventions the agencies are undertaking; and deploy, evaluate, and monitor ML as
child welfare context, policy, and practice change over time. Ethical
considerations, stakeholder engagement, and avoidance of common pitfalls
underpin the framework's impact and success. From abstract to concrete, we
describe one application of this framework to support a child welfare decision.
This ML framework, though child welfare-focused, is generalizable to solving
other public policy problems.
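To make the framework's four stages concrete, here is a minimal Python sketch of how an agency team might scaffold them in code. Everything below (names, fields, checks) is an illustrative assumption, not an artifact of the paper; the point is the ordering and checkpoints, so that ethics review and stakeholder engagement can attach to concrete artifacts at each stage.
```python
# Illustrative scaffold of the framework's four stages; all names and
# checks are hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class ProblemSpec:
    """Stage 1: conceptualize a target problem that ML can solve."""
    decision: str        # e.g., "screen in a hotline report"
    population: str      # who the decision applies to
    outcome: str         # label the model will predict
    intervention: str    # action the prediction will inform

def vet_data(records: list[dict]) -> list[str]:
    """Stage 2: vet available administrative data before building ML."""
    issues = []
    if any(r.get("outcome") is None for r in records):
        issues.append("missing outcome labels")
    if len({r.get("case_id") for r in records}) < len(records):
        issues.append("duplicate case records")
    return issues

def build_model(spec: ProblemSpec, records: list[dict]) -> dict:
    """Stage 3: formulate ML specs that mirror the relevant population and
    intervention; a trivial base-rate 'model' stands in for a real one."""
    cohort = [r for r in records if r.get("population") == spec.population]
    rate = sum(r["outcome"] for r in cohort) / max(len(cohort), 1)
    return {"spec": spec, "base_rate": rate}

def monitor(model: dict, new_records: list[dict], tolerance: float = 0.05) -> bool:
    """Stage 4: after deployment, flag drift as context and policy change."""
    rate = sum(r["outcome"] for r in new_records) / max(len(new_records), 1)
    return abs(rate - model["base_rate"]) > tolerance  # True => re-examine
```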
Related papers
- Unbiasing on the Fly: Explanation-Guided Human Oversight of Machine Learning System Decisions [4.24106429730184]
We propose a novel framework for on-the-fly tracking and correction of discrimination in deployed ML systems.
The framework continuously monitors the predictions made by an ML system and flags discriminatory outcomes.
This human-in-the-loop approach empowers reviewers to accept or override the ML system decision.
arXiv Detail & Related papers (2024-06-25T19:40:55Z)
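As a rough illustration of the monitoring loop summarized above, the sketch below checks a sliding window of recent ML decisions for a rate gap between groups and routes flagged cases to a human reviewer. The demographic-parity-style check and all names are our assumptions, not the paper's actual framework.
```python
# Minimal sketch of on-the-fly disparity monitoring with human override.
# The sliding-window rate-gap check and all names are illustrative.
from collections import deque

WINDOW = 500       # number of recent decisions to monitor
THRESHOLD = 0.10   # max tolerated positive-rate gap between groups

recent: deque = deque(maxlen=WINDOW)  # (group, decision) pairs

def flag(group: str, decision: int) -> bool:
    """Log one ML decision; return True if the recent window shows a
    positive-rate gap between groups above THRESHOLD."""
    recent.append((group, decision))
    rates = {}
    for g in {g for g, _ in recent}:
        ds = [d for gg, d in recent if gg == g]
        rates[g] = sum(ds) / len(ds)
    return len(rates) > 1 and max(rates.values()) - min(rates.values()) > THRESHOLD

def decide(group: str, ml_decision: int, reviewer) -> int:
    """Flagged decisions go to a human reviewer (a callable), who may
    accept the ML output or override it."""
    if flag(group, ml_decision):
        return reviewer(group, ml_decision)  # human-in-the-loop
    return ml_decision
```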
- I-SIRch: AI-Powered Concept Annotation Tool For Equitable Extraction And Analysis Of Safety Insights From Maternity Investigations [0.8609957371651683]
Most current tools for analysing healthcare data focus only on biomedical concepts, overlooking the importance of human factors.
We developed I-SIRch, using artificial intelligence to automatically identify and label human factors concepts.
I-SIRch was trained using real data and tested on both real and simulated data to evaluate its performance in identifying human factors concepts.
arXiv Detail & Related papers (2024-06-08T16:05:31Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
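A standard way to write the "benefit" quantity described above is as a counterfactual contrast in potential-outcome notation. This rendering is our reading; the paper's exact definition may differ.
```latex
% Benefit of a positive decision D for an individual with covariates x:
% the expected contrast of the outcome Y under decision d = 1 versus d = 0.
\[
  \Delta(x) \;=\; \mathbb{E}\left[\, Y_{d=1} - Y_{d=0} \mid X = x \,\right]
\]
% The summary's point is that \Delta(x) may itself depend on the protected
% attribute A, e.g. when comparing E[\Delta(X) \mid A = a] across groups a.
```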
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Imagining new futures beyond predictive systems in child welfare: A qualitative study with impacted stakeholders [89.6319385008397]
We conducted a set of seven design workshops with 35 stakeholders who have been impacted by the child welfare system.
We found that participants worried current PRMs perpetuate or exacerbate existing problems in child welfare.
Participants suggested new ways to use data and data-driven tools to better support impacted communities.
arXiv Detail & Related papers (2022-05-18T13:49:55Z)
- Improving Human-AI Partnerships in Child Welfare: Understanding Worker Practices, Challenges, and Desires for Algorithmic Decision Support [37.03030554731032]
We present findings from a series of interviews at a child welfare agency to understand how workers currently make AI-assisted child maltreatment screening decisions.
We observe how workers' reliance upon the ADS is guided by (1) their knowledge of rich, contextual information beyond what the AI model captures, (2) their beliefs about the ADS's capabilities and limitations relative to their own, and (3) their awareness of misalignments between algorithmic predictions and their own decision-making objectives.
arXiv Detail & Related papers (2022-04-05T16:10:49Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Unpacking Invisible Work Practices, Constraints, and Latent Power Relationships in Child Welfare through Casenote Analysis [3.739243122393041]
Caseworkers write detailed narratives about families in Child-Welfare (CW).
Casenotes offer a unique lens towards understanding the experiences of on-the-ground caseworkers.
This study offers the first computational inspection of casenotes and introduces them to the SIGCHI community.
arXiv Detail & Related papers (2022-03-10T05:48:22Z)
- Understanding the Usability Challenges of Machine Learning In High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
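To convey the general idea of combining expert decisions with observed outcomes, the sketch below estimates per-case "expert consistency" as agreement among similar cases and trusts expert labels only where consistency is high. This is a deliberate simplification under our own assumptions; the paper itself uses an influence-function-based method, which this does not reproduce.
```python
# Rough sketch: estimate "expert consistency" as local agreement among
# similar cases, then trust expert labels only where consistency is high.
# Simplified stand-in for the paper's influence-function approach.
import numpy as np

def expert_consistency(X: np.ndarray, expert: np.ndarray, k: int = 10) -> np.ndarray:
    """For each case, the fraction of its k nearest neighbours (by features)
    whose single assessing expert made the same decision."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = np.empty(len(X))
    for i in range(len(X)):
        nn = np.argsort(dists[i])[1:k + 1]  # skip the case itself
        scores[i] = np.mean(expert[nn] == expert[i])
    return scores

def hybrid_labels(outcome: np.ndarray, expert: np.ndarray,
                  consistency: np.ndarray, tau: float = 0.8) -> np.ndarray:
    """Use observed outcomes by default; where experts are highly consistent,
    let their decision stand in for the construct-gap-prone outcome."""
    return np.where(consistency >= tau, expert, outcome)
```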