The Stagnant Persistence Paradox: Survival Analysis and Temporal Efficiency in Exact Sciences and Engineering Education
- URL: http://arxiv.org/abs/2512.04828v1
- Date: Thu, 04 Dec 2025 14:11:28 GMT
- Title: The Stagnant Persistence Paradox: Survival Analysis and Temporal Efficiency in Exact Sciences and Engineering Education
- Authors: H. R. Paz
- Abstract summary: This study applies a dual-outcome survival analysis framework to two key outcomes: definitive dropout and first major switch. Results uncover a critical systemic inefficiency: a global median survival time of 4.33 years prior to definitive dropout, with a pronounced long tail of extended enrolment. We argue that academic failure in rigid engineering curricula is not a sudden outcome but a long-tail process that generates high opportunity costs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research on student progression in higher education has traditionally focused on vertical outcomes such as persistence and dropout, often reducing complex academic histories to binary indicators. While the structural component of horizontal mobility (major switching, plan changes, re-entries) has recently been recognised as a core feature of contemporary university systems, the temporal cost and efficiency of these pathways remain largely unquantified. Using forty years of administrative records from a large faculty of engineering and exact sciences in Argentina (N = 24,016), this study applies a dual-outcome survival analysis framework to two key outcomes: definitive dropout and first major switch. We reconstruct academic trajectories as sequences of enrolment spells and typed transitions under the CAPIRE protocol, and then deploy non-parametric Kaplan-Meier estimators to model time-to-event under right-censoring. Results uncover a critical systemic inefficiency: a global median survival time of 4.33 years prior to definitive dropout, with a pronounced long tail of extended enrolment. This pattern reveals a phenomenon of stagnant persistence, where students remain formally enrolled for long periods without commensurate curricular progression. In contrast, major switching follows an early-event regime, with a median time of 1.0 year among switchers and most switches concentrated within the first academic year. We argue that academic failure in rigid engineering curricula is not a sudden outcome but a long-tail process that generates high opportunity costs, and that institutional indicators should shift from static retention metrics towards measures of curricular velocity based on time-to-event analysis.
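The non-parametric Kaplan-Meier estimator under right-censoring that the abstract describes can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' code: the durations (years enrolled) and event flags (1 = definitive dropout observed, 0 = right-censored spell) are invented toy data, chosen only to show how the median survival time is read off the estimated curve.

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier estimator: return (event times, survival probabilities).

    durations: observed spell lengths; events: 1 if the event (dropout)
    occurred at that time, 0 if the observation is right-censored.
    """
    data = sorted(zip(durations, events))
    n_at_risk = len(data)
    surv = 1.0
    times, probs = [], []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = removed = 0
        # Group all subjects sharing the same observed time t.
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:  # censored-only times change the risk set but not S(t)
            surv *= 1.0 - deaths / n_at_risk
            times.append(t)
            probs.append(surv)
        n_at_risk -= removed
    return times, probs


def median_survival(times, probs):
    """Smallest time at which S(t) falls to 0.5 or below."""
    for t, s in zip(times, probs):
        if s <= 0.5:
            return t
    return float("inf")  # median not reached within follow-up


# Toy example: six students, one right-censored at year 3.
times, probs = kaplan_meier([1, 2, 3, 4, 5, 6], [1, 1, 0, 1, 1, 1])
print(median_survival(times, probs))  # -> 4
```

In practice one would use a tested library (e.g. `lifelines.KaplanMeierFitter` in Python), which additionally provides confidence intervals and handles ties and censoring conventions systematically; the sketch above only illustrates the mechanics behind a "median survival time prior to dropout" figure like the paper's 4.33 years.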
Related papers
- The Causal Effect of First-Time Academic Failure on University Dropout: Evidence from a Regression Discontinuity Design [0.0]
This study estimates the causal effect of first-time academic failure on subsequent university attrition. Contrary to conventional assumptions, the results indicate that marginal first-time failure is associated with a lower probability of subsequent dropout.
arXiv Detail & Related papers (2026-01-09T18:08:15Z) - Homeostasis Under Technological Transition: How High-Friction Universities Adapt Through Early Filtering Rather Than Reconfiguration [0.0]
We show that the translation of technological transitions into enrolment composition occurs with substantial delay. We situate these patterns within nationally regulated constraints governing engineering education. These findings suggest that apparent rigidity is not an anomaly but the predictable outcome of a system optimised for stability over responsiveness.
arXiv Detail & Related papers (2026-01-08T13:16:53Z) - Longitudinal Trends in Pre University Preparation. A Cohort Evaluation Using Introductory Mathematics and Physics Courses (1980-2019) [0.0]
This study presents a longitudinal evaluation of pre-university preparation based on early academic outcomes in Mathematics and Physics. The study contributes to the international literature on educational evaluation by providing rare long-horizon longitudinal evidence from an Ibero-American context.
arXiv Detail & Related papers (2026-01-07T19:54:44Z) - Stabilising Learner Trajectories: A Doubly Robust Evaluation of AI-Guided Student Support using Activity Theory [1.2234742322758418]
This study evaluates an AI-guided student support system at a large university using doubly robust score matching. Results indicate that the intervention effectively stabilised precarious trajectories. However, effects on the speed of qualification completion were positive but statistically constrained.
arXiv Detail & Related papers (2025-12-11T22:28:12Z) - The Promotion Wall: Efficiency-Equity Trade-offs of Direct Promotion Regimes in Engineering Education [0.0]
This article uses a calibrated agent-based model to examine how alternative progression regimes reconfigure dropout, time-to-degree, equity, and students' psychological experience.
arXiv Detail & Related papers (2025-11-21T12:04:31Z) - Emergence of Superposition: Unveiling the Training Dynamics of Chain of Continuous Thought [64.43689151961054]
We theoretically analyze the training dynamics of a simplified two-layer transformer on the directed graph reachability problem. Our analysis reveals that during training using continuous thought, the index-matching logit will first increase and then remain bounded under mild assumptions.
arXiv Detail & Related papers (2025-09-27T15:23:46Z) - Persistence Paradox in Dynamic Science [4.641069902222306]
We focus on the deep learning revolution catalyzed by AlexNet in 2012. Analyzing the 20-year career trajectories of over 5,000 scientists, we examine how their research focus and output evolved.
arXiv Detail & Related papers (2025-06-28T02:21:19Z) - Understanding Warmup-Stable-Decay Learning Rates: A River Valley Loss Landscape Perspective [66.80315289020487]
The Warmup-Stable-Decay (WSD) schedule uses a constant learning rate to produce a main branch of iterates that can continue indefinitely without a pre-specified compute budget. We show that pretraining loss exhibits a river valley landscape, which resembles a deep valley with a river at its bottom. Inspired by the theory, we introduce WSD-S, a variant of WSD that reuses previous checkpoints' decay phases and keeps only one main branch.
arXiv Detail & Related papers (2024-10-07T16:49:39Z) - Normalization and effective learning rates in reinforcement learning [52.59508428613934]
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature.
We show that normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate.
We propose to make the learning rate schedule explicit with a simple re-parameterization which we call Normalize-and-Project.
arXiv Detail & Related papers (2024-07-01T20:58:01Z) - Early Period of Training Impacts Adaptation for Out-of-Distribution Generalization: An Empirical Study [56.283944756315066]
We investigate the relationship between learning dynamics, out-of-distribution generalization, and the early period of neural network training. We show that changing the number of trainable parameters during the early period of training can significantly improve OOD results. Our experiments on both image and text data show that the early period of training is a general phenomenon that can improve ID and OOD performance with minimal complexity.
arXiv Detail & Related papers (2024-03-22T13:52:53Z) - Temporal and Between-Group Variability in College Dropout Prediction [0.0]
This study provides a systematic evaluation of contributing factors and predictive performance of machine learning models.
We find dropout prediction at the end of the second year has a 20% higher AUC than at the time of enrollment in a Random Forest model.
Regarding variability across student groups, college GPA has more predictive value for students from traditionally disadvantaged backgrounds than their peers.
arXiv Detail & Related papers (2024-01-12T10:43:55Z) - Small-scale proxies for large-scale Transformer training instabilities [69.36381318171338]
We seek ways to reproduce and study training stability and instability at smaller scales.
By measuring the relationship between learning rate and loss across scales, we show that these instabilities also appear in small models when training at high learning rates.
We study methods such as warm-up, weight decay, and the $\mu$Param to train small models that achieve similar losses across orders of magnitude of learning rate variation.
arXiv Detail & Related papers (2023-09-25T17:48:51Z) - Students Success Modeling: Most Important Factors [0.47829670123819784]
The model identifies students likely to graduate, those likely to transfer to a different school, and those likely to drop out and leave their higher education unfinished.
Our experiments demonstrate that distinguishing between likely graduates and at-risk students is reasonably achievable in the earliest stages.
The model predicts outcomes remarkably well for students who stay in the school for three years.
arXiv Detail & Related papers (2023-09-06T19:23:10Z) - Critical Learning Periods Emerge Even in Deep Linear Networks [102.89011295243334]
Critical learning periods are periods early in development where temporary sensory deficits can have a permanent effect on behavior and learned representations.
Despite the radical differences between biological and artificial networks, critical learning periods have been empirically observed in both systems.
arXiv Detail & Related papers (2023-08-23T16:01:50Z)