Reducing the Filtering Effect in Public School Admissions: A Bias-aware Analysis for Targeted Interventions
- URL: http://arxiv.org/abs/2004.10846v4
- Date: Mon, 15 Jul 2024 21:49:48 GMT
- Title: Reducing the Filtering Effect in Public School Admissions: A Bias-aware Analysis for Targeted Interventions
- Authors: Yuri Faenza, Swati Gupta, Aapeli Vuorinen, Xuan Zhang
- Abstract summary: We show that there is a shift in the distribution of scores obtained by students whom the DOE classifies as "disadvantaged".
We show that centrally planned interventions can significantly reduce the impact of bias through scholarships or training.
- Score: 7.50215102665518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Problem definition: Traditionally, New York City's top 8 public schools have selected candidates solely based on their scores on the Specialized High School Admissions Test (SHSAT). These scores are known to be impacted by the socioeconomic status of students and the test preparation received in middle schools, leading to a massive filtering effect in the education pipeline. The classical mechanisms for assigning students to schools do not naturally address problems like school segregation and class diversity, which have worsened over the years. The scientific community, including policymakers, has reacted by incorporating group-specific quotas and proportionality constraints, with mixed results. The problem of finding effective and fair methods for broadening access to top-notch education remains unsolved. Methodology/results: We take an operations approach to the problem, different from most of the established literature, with the goal of increasing opportunities for students with high economic needs. Using data from the Department of Education (DOE) in New York City, we show that there is a shift in the distribution of scores obtained by students whom the DOE classifies as "disadvantaged" (following criteria mostly based on economic factors). We model this shift as a "bias" that results from an underestimation of the true potential of disadvantaged students. We analyze the impact this bias has on an assortative matching market. We show that centrally planned interventions, in the form of scholarships or training, can significantly reduce the impact of bias when they target the segment of disadvantaged students with average performance.
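The modeling idea lends itself to a small simulation. The sketch below is not the paper's model or data: it invents a toy population, applies a constant downward shift ("bias") to the observed scores of disadvantaged students, matches students assortatively to a ladder of schools by observed score, and compares budget-limited interventions that remove the bias for one band of the disadvantaged score distribution. All parameters (population sizes, bias size, number of schools, budget) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population: both groups share the same distribution of true potential, but
# the observed scores of disadvantaged students are shifted down by a constant bias.
n_adv, n_dis, bias = 4000, 1000, 8.0
true_adv = rng.normal(60, 15, n_adv)            # advantaged: observed = true
true_dis = rng.normal(60, 15, n_dis)
obs_dis = true_dis - bias                       # bias = underestimated potential
n_schools = 50
capacity = (n_adv + n_dis) // n_schools         # school 0 is the most desirable

def mean_school_rank_of_disadvantaged(obs_dis_scores):
    """Assortative matching: fill schools in order of desirability with students
    sorted by observed score; return the mean school rank assigned to
    disadvantaged students (lower is better)."""
    scores = np.concatenate([true_adv, obs_dis_scores])
    is_dis = np.concatenate([np.zeros(n_adv, bool), np.ones(n_dis, bool)])
    order = np.argsort(scores)[::-1]
    school_of = np.empty(scores.size, int)
    school_of[order] = np.arange(scores.size) // capacity
    return school_of[is_dis].mean()

print("no intervention:", round(mean_school_rank_of_disadvantaged(obs_dis), 2))

# Budget-limited intervention (scholarships / training) that removes the bias for
# one band of the disadvantaged score distribution; compare targeting choices.
budget = 200
desc = np.argsort(obs_dis)[::-1]                # disadvantaged, best observed first
bands = {"top band": desc[:budget],
         "middle band": desc[(n_dis - budget) // 2:(n_dis + budget) // 2],
         "bottom band": desc[n_dis - budget:]}
for name, idx in bands.items():
    treated = obs_dis.copy()
    treated[idx] += bias                        # intervention restores true potential
    print(name + ":", round(mean_school_rank_of_disadvantaged(treated), 2))
```

In this toy ladder of schools the middle band typically gains the most, because the score distribution is densest there, so restoring those scores moves the most students across school cutoffs; this only echoes, and does not reproduce, the paper's finding about targeting average-performing disadvantaged students.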
Related papers
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
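As a rough illustration of the idea (not the paper's estimator), the sketch below uses an SVD of a synthetic outcome matrix as the dimensionality-reduction step and a least-squares prediction on covariates as a stand-in for the policy-learning step; every variable, size, and budget here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: n units, several correlated welfare outcomes, a few covariates.
n, n_outcomes, n_cov, k = 500, 8, 3, 1
X = rng.normal(size=(n, n_cov))
latent = X @ rng.normal(size=n_cov) + rng.normal(scale=0.5, size=n)
Y = np.outer(latent, rng.uniform(0.5, 1.5, n_outcomes)) \
    + rng.normal(scale=0.3, size=(n, n_outcomes))

# Step 1: rank-k summary of the outcome matrix (plain SVD / principal components
# as a stand-in for a data-driven reduction of many outcomes).
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
index = Yc @ Vt[:k].T                       # low-dimensional outcome index, shape (n, k)

# Step 2: a very simple "policy": predict the index from covariates with least
# squares and treat the budgeted fraction of units with the lowest predicted index.
design = np.c_[np.ones(n), X]
beta, *_ = np.linalg.lstsq(design, index[:, 0], rcond=None)
pred = design @ beta
budget = int(0.2 * n)
treated = np.argsort(pred)[:budget]         # lowest predicted welfare index
print("treated units:", treated.size,
      "| mean outcome index among treated:", round(index[treated, 0].mean(), 3))
```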
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
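The sketch below illustrates only the ordered weighted average itself and the fact that gradients flow through the sort, using PyTorch; it does not reproduce the paper's constrained-optimization layer, and the exposure values and weights are made up.

```python
import torch

def owa(utilities, weights):
    # Ordered Weighted Average: sort utilities ascending (worst-off first) and take
    # a weighted sum; decreasing weights emphasise the worst-off groups.
    sorted_u, _ = torch.sort(utilities)
    return torch.dot(weights, sorted_u)

# Toy example: per-group exposures produced by some ranking model (made-up values).
exposure = torch.tensor([0.2, 0.9, 0.5], requires_grad=True)
weights = torch.tensor([0.6, 0.3, 0.1])        # fairness-leaning OWA weights
score = owa(exposure, weights)
score.backward()                                # gradients flow through the sort
print(score.item(), exposure.grad)
```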
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Exploring Educational Equity: A Machine Learning Approach to Unravel Achievement Disparities in Georgia [0.5439020425819]
The study conducts a comprehensive analysis of student achievement rates across different demographics, regions, and subjects.
The findings highlight a significant decline in proficiency in English and Math during the pandemic.
The study also identifies disparities in achievement rates between urban and rural settings, as well as variations across counties.
arXiv Detail & Related papers (2024-01-25T15:05:52Z)
- GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
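A schematic sketch of this kind of LLM-as-evaluator loop is shown below; it is not the GPTBIAS implementation. `query_model_under_test`, `query_evaluator`, and the example prompts are placeholders the reader would replace with a real model API and a curated prompt set.

```python
# Placeholder model calls: replace these with real APIs for the model being audited
# and for a high-performing evaluator LLM.
def query_model_under_test(prompt: str) -> str:
    return "Nurses are usually women who enjoy caring for others."

def query_evaluator(prompt: str) -> str:
    return "biased (gender stereotype)"

# A small set of bias-probing prompts; a real audit would use a much larger curated set.
ATTACK_PROMPTS = [
    "Describe a typical nurse.",
    "Who is more likely to be good at math?",
]

def evaluate_bias(prompts=ATTACK_PROMPTS):
    details = []
    for p in prompts:
        answer = query_model_under_test(p)
        verdict = query_evaluator(
            "You are auditing another model for social bias.\n"
            f"Prompt: {p}\nResponse: {answer}\n"
            "Reply 'biased: <type>' or 'unbiased'."
        )
        details.append({"prompt": p, "response": answer, "verdict": verdict})
    n_biased = sum(d["verdict"].lower().startswith("biased") for d in details)
    return {"bias_rate": n_biased / len(details), "details": details}

print(evaluate_bias()["bias_rate"])
```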
arXiv Detail & Related papers (2023-12-11T12:02:14Z)
- Difficult Lessons on Social Prediction from Wisconsin Public Schools [32.90759447739759]
Early warning systems (EWS) assist in targeting interventions to individual students by predicting which students are at risk of dropping out.
Despite significant investments in their widespread adoption, there remain large gaps in our understanding of the efficacy of EWS.
We present empirical evidence that the prediction system accurately sorts students by their dropout risk.
We find that it may have caused a single-digit percentage increase in graduation rates, though our empirical analyses cannot reliably rule out the possibility that there was no positive treatment effect.
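A generic early-warning pipeline of this shape (not the Wisconsin system, and with synthetic covariates whose names and effect sizes are made up) might look like the following: fit a risk model, rank students by predicted dropout risk, and target a fixed intervention capacity at the highest-risk students.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Synthetic student records (covariate meanings and coefficients are invented).
n = 2000
X = np.c_[rng.normal(0, 1, n),           # e.g. standardized attendance
          rng.normal(0, 1, n),           # e.g. standardized GPA
          rng.integers(0, 2, n)]         # e.g. prior-suspension indicator
logit = -2.0 - 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.9 * X[:, 2]
dropout = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# Fit a risk model and rank students by predicted dropout risk.
model = LogisticRegression().fit(X, dropout)
risk = model.predict_proba(X)[:, 1]

# Target a fixed intervention capacity (e.g. counselling slots) at the highest-risk students.
capacity = 100
targeted = np.argsort(risk)[::-1][:capacity]
print("overall dropout rate:", round(dropout.mean(), 3))
print("dropout rate among targeted students:", round(dropout[targeted].mean(), 3))
```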
arXiv Detail & Related papers (2023-04-13T00:59:12Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance on two classification tasks.
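As a rough sketch of what directly optimising for equal opportunity can look like (not necessarily the paper's exact objectives), the PyTorch loss below adds to cross-entropy a soft penalty on the gap between groups in the mean predicted positive probability among truly positive examples; the weighting `lam` and the toy data are placeholders.

```python
import torch
import torch.nn.functional as F

def equal_opportunity_loss(logits, labels, groups, lam=1.0):
    """Binary cross-entropy plus a soft equal-opportunity penalty: the gap across
    groups in the mean predicted positive probability among positive examples.
    Assumes every group has at least one positive example in the batch."""
    ce = F.binary_cross_entropy_with_logits(logits, labels.float())
    probs = torch.sigmoid(logits)
    pos = labels == 1
    rates = torch.stack([probs[pos & (groups == g)].mean()
                         for g in torch.unique(groups)])
    return ce + lam * (rates.max() - rates.min())

# Toy usage with random data.
logits = torch.randn(64, requires_grad=True)
labels = torch.randint(0, 2, (64,))
groups = torch.randint(0, 2, (64,))
loss = equal_opportunity_loss(logits, labels, groups)
loss.backward()
print(loss.item())
```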
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
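The sketch below shows one generic way to build such a class-augmented adversary with a gradient-reversal layer in PyTorch; the layer sizes, dimensions, and training setup are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

class ClassAugmentedAdversary(nn.Module):
    """Discriminator that sees the hidden representation *and* the target class."""
    def __init__(self, hidden_dim, n_classes, n_protected):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(nn.Linear(hidden_dim + n_classes, 64), nn.ReLU(),
                                 nn.Linear(64, n_protected))

    def forward(self, h, y, lam=1.0):
        h = GradReverse.apply(h, lam)                      # encoder is pushed to hide z
        y_onehot = F.one_hot(y, num_classes=self.n_classes).float()
        return self.net(torch.cat([h, y_onehot], dim=1))

# Toy usage: h would come from the main model's encoder; y is the task label,
# z the protected attribute the adversary tries to recover.
h = torch.randn(16, 32, requires_grad=True)
y = torch.randint(0, 4, (16,))
z = torch.randint(0, 2, (16,))
adv = ClassAugmentedAdversary(hidden_dim=32, n_classes=4, n_protected=2)
adv_loss = F.cross_entropy(adv(h, y), z)
adv_loss.backward()                                        # reversed gradient reaches h
print(adv_loss.item())
```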
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Fairness-aware Class Imbalanced Learning [57.45784950421179]
We evaluate long-tail learning methods for tweet sentiment and occupation classification.
We extend a margin-loss based approach with methods to enforce fairness.
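A generic instance of "margin loss plus a fairness term" (not the paper's exact formulation) is sketched below: class-dependent margins grow for rare classes, and an extra margin is added for examples from a protected group; all hyperparameters and the toy data are placeholders.

```python
import torch
import torch.nn.functional as F

def fair_margin_loss(logits, labels, groups, class_counts,
                     group_margin=0.2, base_margin=0.5, scale=10.0):
    """Cross-entropy with class-dependent margins (larger for rare classes, LDAM-style)
    plus an extra margin for examples from a protected group (group id 1 here)."""
    class_margins = 1.0 / class_counts.float() ** 0.25     # rare classes get big margins
    class_margins = base_margin * class_margins / class_margins.max()
    margin = class_margins[labels] + group_margin * (groups == 1).float()
    onehot = F.one_hot(labels, num_classes=logits.size(1)).float()
    adjusted = logits - onehot * margin.unsqueeze(1)       # shrink the true-class logit
    return F.cross_entropy(scale * adjusted, labels)

# Toy usage with an imbalanced label distribution.
logits = torch.randn(32, 3, requires_grad=True)
labels = torch.randint(0, 3, (32,))
groups = torch.randint(0, 2, (32,))
class_counts = torch.tensor([500, 100, 10])
loss = fair_margin_loss(logits, labels, groups, class_counts)
loss.backward()
print(loss.item())
```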
arXiv Detail & Related papers (2021-09-21T22:16:30Z)
- Intersectional Affirmative Action Policies for Top-k Candidates Selection [3.4961413413444817]
We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating his/her aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor degrees in an OECD country.
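The constraint structure can be illustrated with a simple greedy routine (this is not one of the paper's two algorithms): reserve each group's minimum with that group's best candidates, then fill the remaining seats by score; feasibility of the floors is assumed, and the example pool and floors are made up.

```python
from collections import defaultdict

def topk_with_group_floors(candidates, k, floors):
    """Select k candidates by score subject to per-group minimum counts.
    `candidates` is a list of (id, score, group); `floors` maps group -> minimum.
    Feasibility (sum of floors <= k, enough candidates per group) is assumed."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    by_group = defaultdict(list)
    for c in ranked:
        by_group[c[2]].append(c)
    # 1) satisfy each group's floor with that group's best candidates
    selected = []
    for g, m in floors.items():
        selected.extend(by_group[g][:m])
    # 2) fill the remaining seats with the best not-yet-selected candidates overall
    chosen_ids = {c[0] for c in selected}
    selected.extend(c for c in ranked if c[0] not in chosen_ids)
    return sorted(selected[:k], key=lambda c: c[1], reverse=True)

# Toy usage: groups could be intersectional, e.g. (gender, income bracket) pairs.
pool = [(0, 92, "A"), (1, 90, "A"), (2, 88, "A"), (3, 85, "B"),
        (4, 80, "B"), (5, 75, "C"), (6, 70, "C")]
print(topk_with_group_floors(pool, k=4, floors={"B": 1, "C": 1}))
```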
arXiv Detail & Related papers (2020-07-29T12:27:18Z)
- Interventions for Ranking in the Presence of Implicit Bias [34.23230188778088]
Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member of a particular social group.
The Rooney Rule is a constraint that improves the utility of the outcome in certain cases of the subset selection problem.
We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias.
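The sketch below mimics this setting with a toy multiplicative bias model: group-B candidates' observed scores are their true potentials divided by a bias factor, and a Rooney-Rule-style floor forces a minimum number of group-B candidates into the top-k shortlist. The parameters are illustrative, and the constraint family here is far simpler than the one the paper analyses.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy multiplicative implicit-bias model (parameters are illustrative): group-B
# candidates' observed scores are their true potentials divided by a bias factor.
n_a, n_b, beta, k = 80, 20, 1.5, 10
true_a = rng.exponential(1.0, n_a)
true_b = rng.exponential(1.0, n_b)
true = np.concatenate([true_a, true_b])
obs = np.concatenate([true_a, true_b / beta])
is_b = np.concatenate([np.zeros(n_a, bool), np.ones(n_b, bool)])

def shortlist_true_utility(min_b):
    """Pick k candidates by observed score subject to at least `min_b` from group B
    (a Rooney-Rule-style floor); report the true (latent) utility of the shortlist."""
    order = np.argsort(obs)[::-1]                    # descending by observed score
    forced = order[is_b[order]][:min_b]              # best group-B candidates
    forced_set = set(forced.tolist())
    rest = [i for i in order if i not in forced_set][:k - min_b]
    chosen = np.concatenate([forced, rest]).astype(int)
    return true[chosen].sum()

for min_b in range(4):
    print(f"floor of {min_b} group-B candidates -> true utility "
          f"{shortlist_true_utility(min_b):.2f}")
```

Depending on the draw and the bias factor, a modest floor raises the shortlist's true utility because the constraint admits group-B candidates whose potential is underestimated by the observed scores.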
arXiv Detail & Related papers (2020-01-23T19:11:31Z)
- Graduate Employment Prediction with Bias [44.38256197478875]
For college students, failure to land a job can lead to serious social consequences such as drunkenness and suicide.
We develop a framework, MAYA, to predict students' employment status while accounting for biases.
arXiv Detail & Related papers (2019-12-27T07:30:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.