Forecasting Faculty Placement from Patterns in Co-authorship Networks
- URL: http://arxiv.org/abs/2507.14696v1
- Date: Sat, 19 Jul 2025 17:09:23 GMT
- Title: Forecasting Faculty Placement from Patterns in Co-authorship Networks
- Authors: Samantha Dies, David Liu, Tina Eliassi-Rad
- Abstract summary: We consider faculty placement as an individual-level prediction task. We use temporal co-authorship networks with conventional attributes such as doctoral department prestige and bibliometric features. Our results underscore the role that social networks, professional endorsements, and implicit advocacy play in faculty hiring beyond traditional measures of scholarly productivity and institutional prestige.
- Score: 3.0565132187715007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Faculty hiring shapes the flow of ideas, resources, and opportunities in academia, influencing not only individual career trajectories but also broader patterns of institutional prestige and scientific progress. While traditional studies have found strong correlations between faculty hiring and attributes such as doctoral department prestige and publication record, they rarely assess whether these associations generalize to individual hiring outcomes, particularly for future candidates outside the original sample. Here, we consider faculty placement as an individual-level prediction task. Our data consist of temporal co-authorship networks with conventional attributes such as doctoral department prestige and bibliometric features. We observe that using the co-authorship network significantly improves predictive accuracy by up to 10% over traditional indicators alone, with the largest gains observed for placements at the most elite (top-10) departments. Our results underscore the role that social networks, professional endorsements, and implicit advocacy play in faculty hiring beyond traditional measures of scholarly productivity and institutional prestige. By introducing a predictive framing of faculty placement and establishing the benefit of considering co-authorship networks, this work provides a new lens for understanding structural biases in academia that could inform targeted interventions aimed at increasing transparency, fairness, and equity in academic hiring practices.
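As an illustration of the predictive framing described in the abstract, the following is a minimal sketch, not the authors' implementation, of how features derived from a co-authorship network might be combined with conventional covariates (e.g., doctoral department prestige, publication counts) to predict placement at a top-10 department. The graph, covariates, labels, and the gradient-boosting classifier are stand-in assumptions for illustration only.

```python
# Minimal sketch (assumptions only): combine co-authorship-network features
# with conventional covariates to predict top-10 faculty placement.
import networkx as nx
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def network_features(G, candidates):
    """Per-candidate structural features from a co-authorship graph G."""
    pagerank = nx.pagerank(G)
    clustering = nx.clustering(G)
    return np.array([
        [G.degree(c), pagerank.get(c, 0.0), clustering.get(c, 0.0)]
        for c in candidates
    ])

# Hypothetical inputs: a co-authorship graph, conventional covariates
# (e.g., PhD department prestige rank, publication count), and binary
# labels indicating placement at a top-10 department.
G = nx.karate_club_graph()                          # stand-in for a real network
candidates = list(G.nodes())
conventional = np.random.rand(len(candidates), 2)   # [prestige, pubs] stand-in
y = np.random.randint(0, 2, len(candidates))        # stand-in labels

X = np.hstack([conventional, network_features(G, candidates)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

The paper's reported gain comes from comparing a model with only the conventional covariates against one that also includes network-derived features; the specific features and classifier above are illustrative choices.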
Related papers
- Edge interventions can mitigate demographic and prestige disparities in the Computer Science coauthorship network [1.606071974243323]
We investigate inequities in network centrality in a hand-collected data set of 5,670 U.S.-based faculty employed in Ph.D.-granting Computer Science departments. We find that women and individuals with minoritized race identities are less central in the computer science coauthorship network.
arXiv Detail & Related papers (2025-06-04T20:36:24Z)
- Employee Turnover Prediction: A Cross-component Attention Transformer with Consideration of Competitor Influence and Contagious Effect [12.879229546467117]
We propose a novel deep learning approach based on job embeddedness theory to predict the turnover of individual employees across different firms. Our method demonstrates superior performance over several state-of-the-art benchmark methods.
arXiv Detail & Related papers (2025-01-31T22:25:39Z)
- Co-Supervised Learning: Improving Weak-to-Strong Generalization with Hierarchical Mixture of Experts [81.37287967870589]
We propose to harness a diverse set of specialized teachers, instead of a single generalist one, that collectively supervises the strong student.
Our approach resembles the classical hierarchical mixture of experts, with two components tailored for co-supervision.
We validate the proposed method through visual recognition tasks on the OpenAI weak-to-strong benchmark and additional multi-domain datasets.
arXiv Detail & Related papers (2024-02-23T18:56:11Z)
- A Content-Based Novelty Measure for Scholarly Publications: A Proof of Concept [9.148691357200216]
We introduce an information-theoretic measure of novelty in scholarly publications.
This measure quantifies the degree of 'surprise' perceived by a language model that represents the word distribution of scholarly discourse.
arXiv Detail & Related papers (2024-01-08T03:14:24Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Exploring the Confounding Factors of Academic Career Success: An Empirical Study with Deep Predictive Modeling [43.91066315776696]
We propose to explore the determinants of academic career success through an empirical and predictive modeling perspective.
We analyze the co-author network and find that scholars with high potential work closely with influential scholars early on, and even more closely as their careers progress.
We find that becoming a Fellow does not bring improvements in citations or productivity growth.
arXiv Detail & Related papers (2022-11-19T08:16:21Z)
- Self-supervised debiasing using low rank regularization [59.84695042540525]
Spurious correlations can cause strong biases in deep neural networks, impairing generalization ability.
We propose a self-supervised debiasing framework potentially compatible with unlabeled samples.
Remarkably, the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines.
arXiv Detail & Related papers (2022-10-11T08:26:19Z)
- "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
arXiv Detail & Related papers (2022-06-24T09:46:43Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely-used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- The Dynamics of Faculty Hiring Networks [1.6114012813668934]
We study a family of adaptive rewiring network models, which reinforce institutional prestige in five ways.
We find that structural inequalities and centrality patterns in real hiring networks are best reproduced by a mechanism of global placement power.
On the other hand, network measures of biased visibility are better recapitulated by a mechanism of local placement power.
arXiv Detail & Related papers (2021-05-06T21:02:20Z)