Toward Improving Predictive Risk Modelling for New Zealand's Child
Welfare System Using Clustering Methods
- URL: http://arxiv.org/abs/2308.04060v1
- Date: Tue, 8 Aug 2023 05:46:03 GMT
- Title: Toward Improving Predictive Risk Modelling for New Zealand's Child
Welfare System Using Clustering Methods
- Authors: Sahar Barmomanesh and Victor Miranda-Soberanis
- Abstract summary: We aim to determine the degree of clustering required as an early step in the development of predictive risk models for child maltreatment.
Our results suggest that separate models may need to be developed for children in certain age groups to gain additional control over error rates and to improve model accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The combination of clinical judgement and predictive risk models crucially
assists social workers in identifying children at risk of maltreatment and deciding
when authorities should intervene. Several governmental welfare authorities worldwide
have initiated predictive risk modelling for this purpose, drawing on administrative
data and machine learning algorithms. While previous studies have investigated risk
factors relating to child maltreatment, gaps remain in understanding how such risk
factors interact and whether predictive risk models perform differently for children
with different features. By integrating Principal Component Analysis (PCA) and K-Means
clustering, this paper presents initial findings of our work on the identification of
such features and their potential effect on current risk modelling frameworks. This
approach allows us to examine existing, as-yet-unidentified clusters of New Zealand
(NZ) children reported with care and protection concerns, to analyse their inner
structure, and to evaluate the performance of prediction models trained cluster-wise.
We aim to determine the degree of clustering required as an early step in the
development of predictive risk models for child maltreatment, and thereby enhance the
accuracy of such models intended for use by child protection authorities. Testing
LASSO logistic regression models trained on the identified clusters revealed no
significant difference in their performance. The models did, however, perform slightly
better for two clusters comprising younger children. Our results suggest that separate
models may need to be developed for children in certain age groups to gain additional
control over error rates and to improve model accuracy. While these results are
promising, more evidence is needed to draw definitive conclusions, and further
investigation is necessary.
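The following sketch illustrates the kind of pipeline the abstract describes: PCA for dimensionality reduction, K-Means to identify clusters, then one LASSO-penalised logistic regression per cluster. It is a minimal sketch using synthetic data in place of the NZ administrative records (which are not publicly available); the cluster count `k`, the PCA variance threshold, and the regularisation strength `C` are illustrative assumptions, not the authors' settings.

```python
# A minimal sketch, assuming synthetic data in place of the confidential
# NZ administrative records; k, the PCA variance threshold, and the LASSO
# strength C are illustrative choices, not the paper's.
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for child-level features and a binary outcome
# (e.g. a substantiated-concern indicator).
X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=8, random_state=0)

# Step 1: standardise, then reduce dimensionality with PCA.
X_std = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=0.9).fit_transform(X_std)  # keep ~90% of variance

# Step 2: partition the population into clusters with K-Means.
k = 4  # hypothetical; in practice chosen by inspecting cluster structure
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_pca)

# Step 3: train one LASSO (L1-penalised) logistic regression per cluster
# and compare held-out discrimination across clusters.
for c in range(k):
    mask = clusters == c
    X_tr, X_te, y_tr, y_te = train_test_split(
        X_std[mask], y[mask], test_size=0.3,
        stratify=y[mask], random_state=0)
    model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"cluster {c}: n={mask.sum()}, test AUC = {auc:.3f}")
```

Comparing the per-cluster AUCs mirrors the paper's cluster-wise evaluation; in the study itself, broadly similar performance across clusters, with slightly better results for the clusters of younger children, is what motivates the suggestion of age-specific models.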
Related papers
- A hierarchical approach for assessing the vulnerability of tree-based classification models to membership inference attack [0.552480439325792]
Machine learning models can inadvertently expose confidential properties of their training data, making them vulnerable to membership inference attacks (MIA).
This article presents two new complementary approaches for efficiently identifying vulnerable tree-based models.
arXiv Detail & Related papers (2025-02-13T15:16:53Z)
- Predicting Preschoolers' Externalizing Problems with Mother-Child Interaction Dynamics and Deep Learning [7.323141824828041]
Existing studies have shown that mothers' provision of support in response to children's dysregulation is associated with lower levels of externalizing problems in children.
The current study aims to evaluate and improve the accuracy of predicting children's externalizing problems with mother-child interaction dynamics.
arXiv Detail & Related papers (2024-12-29T14:22:48Z)
- Neural Lineage [56.34149480207817]
We introduce a novel task known as neural lineage detection, which aims to discover lineage relationships between parent and child models.
For practical convenience, we introduce a learning-free approach, which integrates an approximation of the finetuning process into the neural network representation similarity metrics.
For the pursuit of accuracy, we introduce a learning-based lineage detector comprising encoders and a transformer detector.
arXiv Detail & Related papers (2024-06-17T01:11:53Z)
- Towards a Transportable Causal Network Model Based on Observational Healthcare Data [1.333879175460266]
We propose a novel approach that combines selection diagrams, missingness graphs, causal discovery and prior knowledge into a single graphical model.
We learn this model from data comprising two different cohorts of patients.
The resulting causal network model is validated by expert clinicians in terms of risk assessment, accuracy and explainability.
arXiv Detail & Related papers (2023-11-13T13:23:31Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find that deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all available models.
These examples demonstrate that ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Examining risks of racial biases in NLP tools for child protective services [78.81107364902958]
We focus on one such setting: child protective services (CPS).
Given well-established racial bias in this setting, we investigate possible ways deployed NLP is liable to increase racial disparities.
We document consistent algorithmic unfairness in NER models, possible algorithmic unfairness in coreference resolution models, and little evidence of exacerbated racial bias in risk prediction.
arXiv Detail & Related papers (2023-05-30T21:00:47Z)
- On (assessing) the fairness of risk score models [2.0646127669654826]
Risk models are of interest for a number of reasons, including the fact that they communicate uncertainty about the potential outcomes to users.
We identify the provision of similar value to different groups as a key desideratum for risk score fairness.
We introduce a novel calibration error metric that is less biased by sample size than previously proposed metrics.
arXiv Detail & Related papers (2023-02-17T12:45:51Z)
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- The Consequences of the Framing of Machine Learning Risk Prediction Models: Evaluation of Sepsis in General Wards [0.0]
We evaluate how framing affects model performance and model learning across four different approaches.
We analysed structured secondary healthcare data from 221,283 citizens from four Danish municipalities.
arXiv Detail & Related papers (2021-01-26T14:00:05Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.