Exploring the Confounding Factors of Academic Career Success: An
Empirical Study with Deep Predictive Modeling
- URL: http://arxiv.org/abs/2211.10615v1
- Date: Sat, 19 Nov 2022 08:16:21 GMT
- Authors: Chenguang Du, Deqing Wang, Fuzhen Zhuang, Hengshu Zhu
- Abstract summary: We propose to explore the determinants of academic career success through an empirical and predictive modeling perspective.
We analyze the co-author network and find that potential scholars work closely with influential scholars early on and more closely as they grow.
We find that being elected a Fellow does not lead to further growth in citations or productivity.
- Score: 43.91066315776696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding determinants of success in academic careers is critically
important to both scholars and their employing organizations. While
considerable research efforts have been made in this direction, there is still
a lack of a quantitative approach to modeling the academic careers of scholars
due to the massive confounding factors. To this end, in this paper, we propose
to explore the determinants of academic career success through an empirical and
predictive modeling perspective, with a focus on two typical academic honors,
i.e., IEEE Fellow and ACM Fellow. We analyze the importance of different
factors quantitatively, and obtain some insightful findings. Specifically, we
analyze the co-author network and find that potential scholars work closely
with influential scholars early on and more closely as they grow. Then we
compare the academic performance of male and female Fellows and find that female
scholars need to put in more effort than their male counterparts to be elected. In
addition, we find that being elected a Fellow does not lead to further growth in
citations or productivity. We hope these derived factors and findings can help
scholars improve their competitiveness and develop successful academic careers.
Related papers
- Good Idea or Not, Representation of LLM Could Tell [86.36317971482755]
We focus on idea assessment, which aims to leverage the knowledge of large language models to assess the merit of scientific ideas.
We release a benchmark dataset from nearly four thousand manuscript papers with full texts, meticulously designed to train and evaluate the performance of different approaches to this task.
Our findings suggest that the representations of large language models hold more potential in quantifying the value of ideas than their generative outputs.
arXiv Detail & Related papers (2024-09-07T02:07:22Z) - Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers [90.26363107905344]
Large language models (LLMs) have sparked optimism about their potential to accelerate scientific discovery.
However, no evaluations have shown that LLM systems can take the very first step of producing novel, expert-level ideas.
arXiv Detail & Related papers (2024-09-06T08:25:03Z) - Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts [49.97673761305336]
We evaluate three large language models (LLMs) for their alignment with human narrative styles and potential gender biases.
Our findings indicate that, while these models generally produce text closely resembling human authored content, variations in stylistic features suggest significant gender biases.
arXiv Detail & Related papers (2024-06-27T19:26:11Z) - ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models [56.08917291606421]
ResearchAgent is a large language model-powered research idea writing agent.
It generates problems, methods, and experiment designs while iteratively refining them based on scientific literature.
We experimentally validate our ResearchAgent on scientific publications across multiple disciplines.
arXiv Detail & Related papers (2024-04-11T13:36:29Z) - Advancing a Model of Students' Intentional Persistence in Machine
Learning and Artificial Intelligence [0.9217021281095907]
The persistence of diverse populations has been studied in engineering.
Short-term intentional persistence is associated with academic enrollment factors such as major and level of study.
Long-term intentional persistence is correlated with measures of professional role confidence.
arXiv Detail & Related papers (2023-10-30T19:57:40Z) - Women, artificial intelligence, and key positions in collaboration
networks: Towards a more equal scientific ecosystem [0.0]
This study investigates the effects of several driving factors on acquiring key positions in scientific collaboration networks through a gender lens.
It was found that, regardless of gender, scientific performance in terms of quantity and impact plays a crucial role in attaining the "social researcher" position in the network.
arXiv Detail & Related papers (2022-05-19T15:15:04Z) - Academic Support Network Reflects Doctoral Experience and Productivity [1.6317061277457]
Acknowledgements in dissertations reflect the student experience and provide an opportunity to thank the people who support them.
We conduct textual analysis of acknowledgments to build the "academic support network".
Our results indicate the importance of academic support networks by explaining how they differ and how they influence productivity.
arXiv Detail & Related papers (2022-03-07T14:25:44Z) - The Gene of Scientific Success [12.755041724671159]
This paper elaborates how to identify and evaluate causal factors to improve scientific impact.
Author-centered and article-centered factors have the highest relevancy to scholars' future success in the computer science area.
arXiv Detail & Related papers (2022-02-17T06:16:15Z) - Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are consistently as biased as, or more biased than, academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z) - Early Indicators of Scientific Impact: Predicting Citations with
Altmetrics [0.0]
We use altmetrics to predict the short-term and long-term citations that a scholarly publication could receive.
We build various classification and regression models and evaluate their performance, finding neural networks and ensemble models to perform best for these tasks.
arXiv Detail & Related papers (2020-12-25T16:25:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.