Admission Prediction in Undergraduate Applications: an Interpretable
Deep Learning Approach
- URL: http://arxiv.org/abs/2401.11698v1
- Date: Mon, 22 Jan 2024 05:44:43 GMT
- Title: Admission Prediction in Undergraduate Applications: an Interpretable
Deep Learning Approach
- Authors: Amisha Priyadarshini, Barbara Martinez-Neda, Sergio Gago-Masague
- Abstract summary: This article addresses the challenge of validating the admission committee's decisions for undergraduate admissions.
We propose deep learning-based classifiers, namely Feed-Forward and Input Convex neural networks.
Our models outperform the best-performing traditional machine learning-based approach by a considerable margin of 3.03% in accuracy.
- Score: 0.6906005491572401
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article addresses the challenge of validating the admission committee's
decisions for undergraduate admissions. In recent years, the traditional review
process has struggled to keep up with the overwhelmingly large volume of
applicant data. Moreover, this manual assessment is prone to human bias, which
might result in discrimination among applicants. Although classical machine
learning-based approaches exist that aim to verify the quantitative assessment
made by the application reviewers, these methods lack scalability and suffer
from performance issues when handling large volumes of data. In this
context, we propose deep learning-based classifiers, namely Feed-Forward and
Input Convex neural networks, which overcome the challenges faced by the
existing methods. Furthermore, we give additional insights into our model by
incorporating an interpretability module, namely LIME. Our training and test
datasets comprise applicants' data with a wide range of variables and
information. Our models outperform the best-performing traditional machine
learning-based approach by a considerable margin of 3.03% in accuracy.
Additionally, we show the sensitivity of different features and their relative
impacts on the overall admission decision using the LIME technique.
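To make the pipeline concrete, below is a minimal sketch of the overall setup: a feed-forward classifier trained on tabular applicant features, with LIME attributing an individual decision to its input features. The feature names, synthetic data, and network sizes are illustrative assumptions, not the authors' setup, and the Input Convex variant is omitted for brevity.
```python
# A minimal sketch (not the authors' code): a feed-forward classifier on
# synthetic "applicant" features, explained per-instance with LIME.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["gpa", "test_score", "num_ap_courses", "essay_score"]  # hypothetical
X = rng.normal(size=(1000, len(feature_names)))
# Hypothetical label rule standing in for committee decisions.
y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 3]
     + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Feed-forward network (two hidden layers); the paper's exact architecture
# is not specified here.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# LIME fits a simple local surrogate model around one instance, yielding
# per-feature contributions to that single prediction.
explainer = LimeTabularExplainer(X_tr, feature_names=feature_names,
                                 class_names=["reject", "admit"],
                                 mode="classification")
exp = explainer.explain_instance(X_te[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # per-feature contributions to this decision
```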
Related papers
- Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z)
- Learning Confidence Bounds for Classification with Imbalanced Data [42.690254618937196]
We propose a novel framework that leverages learning theory and concentration inequalities to overcome the shortcomings of traditional solutions.
Our method can effectively adapt to the varying degrees of imbalance across different classes, resulting in more robust and reliable classification outcomes.
arXiv Detail & Related papers (2024-07-16T16:02:27Z)
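As a toy illustration of what concentration inequalities buy in this setting (a generic Hoeffding bound, not the paper's framework): per-class error estimates come with distribution-free confidence intervals that widen as a class gets rarer.
```python
# Generic illustration: a Hoeffding bound gives a distribution-free upper
# confidence bound on each class's true error rate; rare classes get wider
# intervals because fewer samples are available.
import numpy as np

def per_class_error_bounds(y_true, y_pred, n_classes, delta=0.05):
    """Upper-bound each class's true error with prob. >= 1 - delta (Hoeffding)."""
    bounds = {}
    for c in range(n_classes):
        mask = y_true == c
        n_c = mask.sum()
        if n_c == 0:
            continue
        emp_err = np.mean(y_pred[mask] != c)
        # Hoeffding: P(true_err > emp_err + eps) <= exp(-2 * n * eps^2) = delta
        eps = np.sqrt(np.log(1.0 / delta) / (2.0 * n_c))
        bounds[c] = (emp_err, min(1.0, emp_err + eps))
    return bounds

y_true = np.array([0] * 900 + [1] * 100)  # imbalanced toy labels
y_pred = np.where(np.random.default_rng(0).random(1000) < 0.9,
                  y_true, 1 - y_true)    # 90%-accurate toy predictions
print(per_class_error_bounds(y_true, y_pred, n_classes=2))
```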
- A Survey of Deep Long-Tail Classification Advancements [1.6233132273470656]
Many data distributions in the real world are hardly uniform. Instead, skewed and long-tailed distributions of various kinds are commonly observed.
This poses an interesting problem for machine learning, where most algorithms assume or work well with uniformly distributed data.
The problem is further exacerbated by current state-of-the-art deep learning models requiring large volumes of training data.
arXiv Detail & Related papers (2024-04-24T01:59:02Z)
- Machine Unlearning for Traditional Models and Large Language Models: A Short Survey [11.539080008361662]
Machine unlearning aims to delete data and reduce its impact on models according to user requests.
This paper categorizes and investigates unlearning on both traditional models and Large Language Models (LLMs).
arXiv Detail & Related papers (2024-04-01T16:08:18Z)
- Explainable Attention for Few-shot Learning and Beyond [7.044125601403848]
We introduce a novel framework for achieving explainable hard attention finding, specifically tailored for few-shot learning scenarios.
Our approach employs deep reinforcement learning to implement the concept of hard attention, directly impacting raw input data.
arXiv Detail & Related papers (2023-10-11T18:33:17Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
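A heavily simplified sketch of the general recipe, under assumptions of our own (a softplus evidence head, the standard evidential MSE loss, and the Dirichlet's diagonal Fisher information as the per-class weight); it is not the paper's exact $\mathcal{I}$-EDL objective.
```python
# Simplified sketch (NOT the paper's exact objective): an evidential head
# outputs non-negative evidence, alpha = evidence + 1 defines a Dirichlet,
# and the Dirichlet's diagonal Fisher information (trigamma terms) serves
# here as a per-class weight on the evidential MSE loss.
import torch
import torch.nn.functional as F

def evidential_loss(logits, targets_onehot):
    evidence = F.softplus(logits)        # non-negative evidence per class
    alpha = evidence + 1.0               # Dirichlet parameters
    S = alpha.sum(dim=1, keepdim=True)   # Dirichlet strength
    p = alpha / S                        # expected class probabilities

    # Diagonal Fisher information of a Dirichlet: psi'(alpha_j) - psi'(S);
    # larger values indicate less evidence, i.e. a more uncertain class.
    fim_diag = torch.polygamma(1, alpha) - torch.polygamma(1, S)
    w = fim_diag / fim_diag.sum(dim=1, keepdim=True)  # normalized weights

    # Standard evidential MSE terms (Sensoy et al., 2018), reweighted so
    # that uncertain classes contribute more to the objective.
    err = (targets_onehot - p) ** 2
    var = p * (1.0 - p) / (S + 1.0)
    return (w * (err + var)).sum(dim=1).mean()

logits = torch.randn(8, 3, requires_grad=True)  # toy batch, 3 classes
targets = F.one_hot(torch.randint(0, 3, (8,)), 3).float()
loss = evidential_loss(logits, targets)
loss.backward()
print(float(loss))
```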
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
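To ground the basic setup (predicting new data from previous observations), here is a minimal one-step-ahead sketch in the NARX spirit: a feed-forward network regresses the next output of a toy dynamic system on a window of its past inputs and outputs. The system, lag, and network sizes are arbitrary assumptions, not any specific architecture from the survey.
```python
# Minimal NARX-style sketch: predict y[t] from the last `lag` outputs and
# inputs of a toy nonlinear dynamic system.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T, lag = 2000, 3
u = rng.uniform(-1, 1, size=T)   # input signal
y = np.zeros(T)
for t in range(1, T):            # toy nonlinear dynamics
    y[t] = 0.7 * y[t - 1] + 0.3 * np.tanh(u[t - 1]) + 0.01 * rng.normal()

# Regressor: [y_{t-1..t-lag}, u_{t-1..t-lag}] -> y_t
X = np.array([np.r_[y[t - lag:t][::-1], u[t - lag:t][::-1]]
              for t in range(lag, T)])
target = y[lag:]

n_train = 1500
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X[:n_train], target[:n_train])
pred = model.predict(X[n_train:])
print("one-step-ahead RMSE:", np.sqrt(np.mean((pred - target[n_train:]) ** 2)))
```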
- Beyond traditional assumptions in fair machine learning [5.029280887073969]
This thesis scrutinizes common assumptions underlying traditional machine learning approaches to fairness in consequential decision making.
We show that group fairness criteria purely based on statistical properties of observed data are fundamentally limited.
We overcome the assumption that sensitive data is readily available in practice.
arXiv Detail & Related papers (2021-01-29T09:02:15Z)
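For a concrete instance of a group fairness criterion "purely based on statistical properties of observed data", the snippet below computes the demographic parity gap; it illustrates the kind of criterion the thesis critiques, not the thesis's own contribution.
```python
# Demographic parity: compare positive-decision rates across groups using
# only observed data (a purely statistical group-fairness criterion).
import numpy as np

def demographic_parity_gap(decisions, group):
    """Absolute difference in positive rates between two groups (0/1 arrays)."""
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return abs(rate_a - rate_b)

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)  # toy sensitive attribute
decisions = (rng.random(1000) < np.where(group == 1, 0.4, 0.55)).astype(int)
print("demographic parity gap:", demographic_parity_gap(decisions, group))
```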
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.