Beyond Predictive Algorithms in Child Welfare
- URL: http://arxiv.org/abs/2403.05573v1
- Date: Mon, 26 Feb 2024 08:59:46 GMT
- Title: Beyond Predictive Algorithms in Child Welfare
- Authors: Erina Seh-Young Moon, Devansh Saxena, Tegan Maharaj, Shion Guha
- Abstract summary: Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions.
Our study finds that common risk metrics used to assess families and build CWS predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s).
Although casenotes cannot predict discharge outcomes, they contain contextual case signals.
- Score: 12.994514011716138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Caseworkers in the child welfare (CW) sector use predictive decision-making algorithms built on risk assessment (RA) data to guide and support CW decisions. Researchers have highlighted that RAs can contain biased signals which flatten CW case complexities and that the algorithms may benefit from incorporating contextually rich case narratives, i.e., casenotes written by caseworkers. To investigate this hypothesized improvement, we quantitatively deconstructed two commonly used RAs from a United States CW agency. We trained classifier models to compare the predictive validity of RAs with and without casenote narratives and applied computational text analysis on casenotes to highlight topics uncovered in the casenotes. Our study finds that common risk metrics used to assess families and build CWS predictive risk models (PRMs) are unable to predict discharge outcomes for children who are not reunified with their birth parent(s). We also find that although casenotes cannot predict discharge outcomes, they contain contextual case signals. Given the lack of predictive validity of RA scores and casenotes, we propose moving beyond quantitative risk assessments for public sector algorithms and towards using contextual sources of information such as narratives to study public sociotechnical systems.
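The abstract's central comparison (does adding casenote-derived features improve predictive validity?) ultimately reduces to comparing model AUCs across feature sets. A minimal, self-contained sketch of that comparison mechanic; the labels, scores, and variable names below are entirely synthetic stand-ins, not the paper's actual data or models:

```python
import random

def auc(labels, scores):
    """Mann-Whitney formulation of ROC AUC: the probability that a
    randomly drawn positive outranks a randomly drawn negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic stand-ins: 'ra_scores' plays the role of a model trained on
# risk-assessment features alone; 'ra_text_scores' the role of a model
# that also sees casenote-derived text features.
random.seed(0)
labels = [random.randint(0, 1) for _ in range(200)]
ra_scores = [random.random() for _ in labels]
ra_text_scores = [0.5 * s + 0.5 * (y + random.random()) / 2
                  for s, y in zip(ra_scores, labels)]

print(f"AUC, RA features only: {auc(labels, ra_scores):.3f}")
print(f"AUC, RA + casenotes:   {auc(labels, ra_text_scores):.3f}")
```

The study's negative finding corresponds to both AUCs sitting near 0.5 (chance level) for the non-reunification outcomes.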
Related papers
- Risks and NLP Design: A Case Study on Procedural Document QA
We argue that clearer assessments of risks and harms to users will be possible when we specialize the analysis to more concrete applications and their plausible users.
We conduct a risk-oriented error analysis that could then inform the design of a future system to be deployed with lower risk of harm and better performance.
arXiv Detail & Related papers (2024-08-16T17:23:43Z)
- ECtHR-PCR: A Dataset for Precedent Understanding and Prior Case Retrieval in the European Court of Human Rights
We develop a prior case retrieval dataset based on judgements from the European Court of Human Rights (ECtHR).
We benchmark different lexical and dense retrieval approaches with various negative sampling strategies.
We find that difficulty-based negative sampling strategies were not effective for the PCR task.
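For readers unfamiliar with the term, "difficulty-based negative sampling" selects the non-relevant training examples that most resemble the query. A toy sketch, with a simple Jaccard token overlap standing in for a real lexical or dense similarity scorer (queries and candidates below are made up for illustration):

```python
def jaccard(a, b):
    """Toy lexical similarity: Jaccard overlap of token sets."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta or tb else 0.0

def hard_negatives(query, candidates, k):
    """Difficulty-based sampling: keep the k non-relevant candidates
    that look MOST like the query, forcing the retriever to learn
    fine-grained distinctions instead of trivial ones."""
    return sorted(candidates, key=lambda c: jaccard(query, c), reverse=True)[:k]

query = "freedom of expression under article ten"
non_relevant = [
    "right to a fair trial under article six",
    "freedom of expression and press regulation",
    "property tax assessment procedures",
]
print(hard_negatives(query, non_relevant, 2))
```

The paper's finding is that, for the PCR task, training on such hard negatives did not outperform simpler sampling strategies.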
arXiv Detail & Related papers (2024-03-31T08:06:54Z)
- The Impact of Differential Feature Under-reporting on Algorithmic Fairness
We present an analytically tractable model of differential feature under-reporting, which we then use to characterize the impact of this kind of data bias on algorithmic fairness.
Our results show that, in real world data settings, under-reporting typically leads to increasing disparities.
arXiv Detail & Related papers (2024-01-16T19:16:22Z)
- CASA: Causality-driven Argument Sufficiency Assessment
We propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.
The probability of sufficiency (PS) measures how likely it is that introducing the premise event would lead to the conclusion when both the premise and conclusion events are initially absent.
Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments.
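Assuming the framework follows Pearl's standard counterfactual definition of probability of sufficiency, the PS quantity described above can be written, with $X$ the premise event and $Y$ the conclusion event, as:

```latex
\mathrm{PS} \;=\; P\!\left( Y_{X=1} = 1 \;\middle|\; X = 0,\; Y = 0 \right)
```

Read: the probability that forcing the premise to hold would make the conclusion hold, given that in fact neither held.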
arXiv Detail & Related papers (2024-01-10T16:21:18Z)
- Examining risks of racial biases in NLP tools for child protective services
We focus on one such setting: child protective services (CPS).
Given well-established racial bias in this setting, we investigate possible ways deployed NLP is liable to increase racial disparities.
We document consistent algorithmic unfairness in NER models, possible algorithmic unfairness in coreference resolution models, and little evidence of exacerbated racial bias in risk prediction.
arXiv Detail & Related papers (2023-05-30T21:00:47Z)
- A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks
We develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning.
We also propose CUBE, a simple yet strong clustering-based defense baseline.
arXiv Detail & Related papers (2022-06-17T02:29:23Z)
- Unpacking Invisible Work Practices, Constraints, and Latent Power Relationships in Child Welfare through Casenote Analysis
Caseworkers write detailed narratives about families in Child Welfare (CW).
Casenotes offer a unique lens towards understanding the experiences of on-the-ground caseworkers.
This study offers the first computational inspection of casenotes and introduces them to the SIGCHI community.
arXiv Detail & Related papers (2022-03-10T05:48:22Z)
- A Review of Adversarial Attack and Defense for Classification Methods
This paper focuses on the generation and guarding of adversarial examples.
The authors hope that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z)
- Predicting Early Dropout: Calibration and Algorithmic Fairness Considerations
We develop a machine learning method to predict the risks of university dropout and underperformance.
We analyze whether this method leads to discriminatory outcomes for some sensitive groups in terms of prediction accuracy (AUC) and error rates (Generalized False Positive Rate, GFPR, and Generalized False Negative Rate, GFNR).
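GFPR and GFNR extend the usual confusion-matrix error rates to probabilistic scores: the mean predicted risk over true negatives, and the mean residual over true positives, respectively. A minimal sketch with made-up per-group numbers, assuming that standard formulation:

```python
def gfpr(labels, scores):
    """Generalized false positive rate: mean predicted risk score over
    the true negatives (equals ordinary FPR for hard 0/1 scores)."""
    neg = [s for y, s in zip(labels, scores) if y == 0]
    return sum(neg) / len(neg)

def gfnr(labels, scores):
    """Generalized false negative rate: mean (1 - score) over true positives."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    return sum(1 - s for s in pos) / len(pos)

# Made-up labels and scores for two sensitive groups; a fairness audit
# compares these rates across groups rather than inspecting one alone.
groups = {
    "A": ([1, 1, 0, 0], [0.8, 0.6, 0.3, 0.1]),
    "B": ([1, 1, 0, 0], [0.7, 0.5, 0.5, 0.3]),
}
for name, (y, s) in groups.items():
    print(f"group {name}: GFPR={gfpr(y, s):.2f}  GFNR={gfnr(y, s):.2f}")
```

A gap between the groups' rates (here group B's higher GFPR) is the kind of disparity such an analysis flags.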
arXiv Detail & Related papers (2021-03-16T13:42:16Z)
- A Human-Centered Review of the Algorithms used within the U.S. Child Welfare System
The U.S. Child Welfare System (CWS) is charged with improving outcomes for foster youth, yet it is overburdened and underfunded.
Several states have turned towards algorithmic decision-making systems to reduce costs and determine better processes for improving CWS outcomes.
We synthesize 50 peer-reviewed publications on computational systems used in CWS to assess how they were being developed, common characteristics of predictors used, as well as the target outcomes.
arXiv Detail & Related papers (2020-03-07T09:16:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.