Evaluating a Learned Admission-Prediction Model as a Replacement for
Standardized Tests in College Admissions
- URL: http://arxiv.org/abs/2302.03610v3
- Date: Tue, 23 May 2023 17:18:51 GMT
- Authors: Hansol Lee, René F. Kizilcec, Thorsten Joachims
- Abstract summary: College admissions offices have historically relied on standardized test scores to organize large applicant pools into viable subsets for review.
We explore a machine learning-based approach to replace the role of standardized tests in subset generation.
We find that a prediction model trained on past admission data outperforms an SAT-based model and matches the demographic composition of the last admitted class.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing number of college applications presents an annual challenge
for college admissions in the United States. Admissions offices have
historically relied on standardized test scores to organize large applicant
pools into viable subsets for review. However, this approach may be subject to
bias in test scores and, with recent trends toward test-optional admission,
selection bias in who takes the tests. We explore a machine learning-based approach to
replace the role of standardized tests in subset generation while taking into
account a wide range of factors extracted from student applications to support
a more holistic review. We evaluate the approach on data from an undergraduate
admission office at a selective US institution (13,248 applications). We find
that a prediction model trained on past admission data outperforms an SAT-based
heuristic and matches the demographic composition of the last admitted class.
We discuss the risks and opportunities for how such a learned model could be
leveraged to support human decision-making in college admissions.
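The subset-generation idea in the abstract can be sketched in code. Everything below is a synthetic illustration, not the paper's data, features, or model: we fit a simple logistic model to past (simulated) admission decisions and compare its recall in a top-30% shortlist against an SAT-only ranking.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic application features (assumed names, not the paper's):
sat = rng.normal(1200, 150, n)    # SAT score
gpa = rng.normal(3.4, 0.4, n)     # high-school GPA
essay = rng.normal(0.0, 1.0, n)   # essay rating

# Synthetic past admission decisions depend on all three factors,
# so an SAT-only ranking captures only part of the signal.
signal = 0.004 * (sat - 1200) + 1.5 * (gpa - 3.4) + 0.8 * essay
admitted = (signal + rng.normal(0, 0.5, n) > 0).astype(float)

X = np.column_stack([sat, gpa, essay])
X_tr, y_tr, X_te, y_te = X[:1500], admitted[:1500], X[1500:], admitted[1500:]

# Fit a logistic model by plain gradient descent on standardized features.
mu, sd = X_tr.mean(0), X_tr.std(0)
Z = (X_tr - mu) / sd
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))
    w -= 0.5 * Z.T @ (p - y_tr) / len(y_tr)
    b -= 0.5 * (p - y_tr).mean()

def recall_at_k(scores, labels, k):
    """Fraction of eventually-admitted applicants kept in the top-k subset."""
    top = np.argsort(scores)[::-1][:k]
    return labels[top].sum() / labels.sum()

k = int(0.3 * len(y_te))  # shortlist 30% of new applicants for review
model_scores = ((X_te - mu) / sd) @ w + b
sat_recall = recall_at_k(X_te[:, 0], y_te, k)
model_recall = recall_at_k(model_scores, y_te, k)
print(f"recall@30%  SAT heuristic: {sat_recall:.2f}  learned model: {model_recall:.2f}")
```

On this toy data the learned ranking retains more of the eventually-admitted applicants than the SAT-only cutoff, mirroring the paper's qualitative finding.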
Related papers
- Context-Aware Testing: A New Paradigm for Model Testing with Large Language Models [49.06068319380296]
We introduce context-aware testing (CAT) which uses context as an inductive bias to guide the search for meaningful model failures.
We instantiate the first CAT system, SMART Testing, which employs large language models to hypothesize relevant and likely failures.
arXiv Detail & Related papers (2024-10-31T15:06:16Z)
- Algorithms for College Admissions Decision Support: Impacts of Policy Change and Inherent Variability [18.289154814012996]
We show that removing race data from a developed applicant ranking algorithm reduces the diversity of the top-ranked pool without meaningfully increasing the academic merit of that pool.
We measure the impact of policy change on individuals by comparing the arbitrariness in applicant rank attributable to policy change to the arbitrariness attributable to randomness.
arXiv Detail & Related papers (2024-06-24T14:59:30Z)
- VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model [72.13121434085116]
VLBiasBench is a benchmark aimed at evaluating biases in Large Vision-Language Models (LVLMs)
We construct a dataset encompassing nine distinct categories of social bias (age, disability status, gender, nationality, physical appearance, race, religion, profession, and socioeconomic status) plus two intersectional bias categories (race x gender and race x socioeconomic status).
We conduct extensive evaluations on 15 open-source models as well as one advanced closed-source model, providing new insights into the biases revealed by these models.
arXiv Detail & Related papers (2024-06-20T10:56:59Z)
- Towards Personalized Evaluation of Large Language Models with An Anonymous Crowd-Sourcing Platform [64.76104135495576]
We propose a novel anonymous crowd-sourcing evaluation platform, BingJian, for large language models.
Through this platform, users have the opportunity to submit their questions, testing the models on a personalized and potentially broader range of capabilities.
arXiv Detail & Related papers (2024-03-13T07:31:20Z)
- Admission Prediction in Undergraduate Applications: an Interpretable Deep Learning Approach [0.6906005491572401]
This article addresses the challenge of validating the admission committee's decisions for undergraduate admissions.
We propose deep learning-based classifiers, namely Feed-Forward and Input Convex neural networks.
Our models achieve 3.03% higher accuracy than the best-performing traditional machine learning approach, a considerable margin.
arXiv Detail & Related papers (2024-01-22T05:44:43Z)
- From Static Benchmarks to Adaptive Testing: Psychometrics in AI Evaluation [60.14902811624433]
We discuss a paradigm shift from static evaluation methods to adaptive testing.
This involves estimating the characteristics and value of each test item in the benchmark and dynamically adjusting items in real-time.
We analyze the current approaches, advantages, and underlying reasons for adopting psychometrics in AI evaluation.
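The adaptive-testing idea can be sketched with a one-parameter (Rasch) IRT model: estimate the test-taker's ability, then administer the unasked item that is most informative at the current estimate. The item difficulties, scripted responses, and update rule below are illustrative assumptions, not the paper's method.

```python
import math

def p_correct(theta, b):
    """Rasch model: probability of a correct response at ability theta, difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fisher_info(theta, b):
    p = p_correct(theta, b)
    return p * (1.0 - p)  # maximal when difficulty matches ability

def next_item(theta, difficulties, asked):
    """Pick the unasked item with the highest information at theta."""
    candidates = [i for i in range(len(difficulties)) if i not in asked]
    return max(candidates, key=lambda i: fisher_info(theta, difficulties[i]))

def update_theta(theta, b, correct, lr=0.5):
    """One gradient step on the log-likelihood of the new response."""
    return theta + lr * ((1.0 if correct else 0.0) - p_correct(theta, b))

difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]
theta, asked = 0.0, set()
responses = [True, True, False]  # scripted answers for the demo
for correct in responses:
    i = next_item(theta, difficulties, asked)
    asked.add(i)
    theta = update_theta(theta, difficulties[i], correct)
print(f"items asked: {sorted(asked)}, ability estimate: {theta:.2f}")
```

The test starts at the medium item, moves to harder items after correct answers, and lowers the ability estimate after the miss, which is the core loop of computerized adaptive testing.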
arXiv Detail & Related papers (2023-06-18T09:54:33Z)
- AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models [122.63704560157909]
We introduce AGIEval, a novel benchmark designed to assess foundation models in the context of human-centric standardized exams.
We evaluate several state-of-the-art foundation models, including GPT-4, ChatGPT, and Text-Davinci-003.
GPT-4 surpasses average human performance on SAT, LSAT, and math competitions, attaining a 95% accuracy rate on the SAT Math test and a 92.5% accuracy on the English test of the Chinese national college entrance exam.
arXiv Detail & Related papers (2023-04-13T09:39:30Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Using a Binary Classification Model to Predict the Likelihood of Enrolment to the Undergraduate Program of a Philippine University [0.0]
This study analyzed various characteristics of freshman applicants affecting their admission status at a Philippine university.
A predictive model was developed using logistic regression to estimate the probability that an admitted student will proceed to enroll in the institution.
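The logistic-regression setup described here can be illustrated with a toy scoring function. The coefficients and predictors below are invented placeholders (the study's actual variables are not listed in this summary):

```python
import math

# Assumed fitted coefficients: intercept, scholarship offered (0/1),
# distance from campus (100s of km), admission exam percentile (0-1).
coefs = {"intercept": -0.8, "scholarship": 1.6, "distance": -0.4, "exam_pct": 1.2}

def enroll_probability(scholarship, distance, exam_pct):
    """Logistic link: convert the linear predictor into an enrollment probability."""
    z = (coefs["intercept"]
         + coefs["scholarship"] * scholarship
         + coefs["distance"] * distance
         + coefs["exam_pct"] * exam_pct)
    return 1.0 / (1.0 + math.exp(-z))

p_with = enroll_probability(scholarship=1, distance=1.0, exam_pct=0.9)
p_without = enroll_probability(scholarship=0, distance=1.0, exam_pct=0.9)
print(f"P(enroll | scholarship) = {p_with:.2f}")
print(f"P(enroll | no scholarship) = {p_without:.2f}")
# exp(coef) is the odds ratio: in this toy model a scholarship
# multiplies the odds of enrolling by exp(1.6), roughly 4.95.
```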
arXiv Detail & Related papers (2020-10-26T06:58:03Z)
- Intersectional Affirmative Action Policies for Top-k Candidates Selection [3.4961413413444817]
We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating his/her aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor degrees in an OECD country.
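A greedy sketch conveys the flavor of this top-k selection problem with per-group minimums. The scores, groups, and quota values are invented for the example, and the paper's actual algorithms may differ (this sketch also assumes the quotas sum to at most k):

```python
def select_top_k(candidates, k, min_per_group):
    """candidates: list of (name, group, score). Reserve seats so each
    group reaches its minimum, then fill the rest purely by score."""
    ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
    selected, counts = [], {g: 0 for g in min_per_group}
    # Pass 1: satisfy each group's minimum with its best-scoring candidates.
    for g, quota in min_per_group.items():
        for c in ranked:
            if counts[g] >= quota:
                break
            if c[1] == g and c not in selected:
                selected.append(c)
                counts[g] += 1
    # Pass 2: fill the remaining seats by score alone.
    for c in ranked:
        if len(selected) >= k:
            break
        if c not in selected:
            selected.append(c)
    return sorted(selected, key=lambda c: c[2], reverse=True)

pool = [("a", "A", 91), ("b", "A", 88), ("c", "A", 85),
        ("d", "B", 80), ("e", "B", 78), ("f", "A", 75)]
picked = select_top_k(pool, k=4, min_per_group={"A": 1, "B": 2})
print([name for name, _, _ in picked])
```

Without the quota the top four by score would be a, b, c, d; the minimum of two group-B seats swaps c for e, which is exactly the score-versus-representation trade-off such policies navigate.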
arXiv Detail & Related papers (2020-07-29T12:27:18Z)
- Towards Data-Driven Affirmative Action Policies under Uncertainty [3.9293125023197595]
We consider affirmative action policies that seek to increase the number of admitted applicants from underrepresented groups.
Since such a policy has to be announced before the start of the application period, there is uncertainty about the score distribution of the students applying to each program.
We explore the possibility of using a predictive model trained on historical data to help optimize the parameters of such policies.
arXiv Detail & Related papers (2020-07-02T15:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.