Persistence Paradox in Dynamic Science
- URL: http://arxiv.org/abs/2506.22729v2
- Date: Tue, 01 Jul 2025 16:14:58 GMT
- Title: Persistence Paradox in Dynamic Science
- Authors: Honglin Bao, Kai Li
- Abstract summary: We focus on the deep learning revolution catalyzed by AlexNet in 2012. Analyzing the 20-year career trajectories of over 5,000 scientists, we examine how their research focus and output evolved.
- Score: 4.641069902222306
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Persistence is often regarded as a virtue in science. In this paper, however, we challenge this conventional view by highlighting its contextual nature, particularly how persistence can become a liability during periods of paradigm shift. We focus on the deep learning revolution catalyzed by AlexNet in 2012. Analyzing the 20-year career trajectories of over 5,000 scientists who were active in top machine learning venues during the preceding decade, we examine how their research focus and output evolved. We first uncover a dynamic period in which leading venues increasingly prioritized cutting-edge deep learning developments that displaced relatively traditional statistical learning methods. Scientists responded to these changes in markedly different ways. Those who were previously successful or affiliated with old teams adapted more slowly, experiencing what we term a rigidity penalty - a reluctance to embrace new directions leading to a decline in scientific impact, as measured by citation percentile rank. In contrast, scientists who pursued strategic adaptation - selectively pivoting toward emerging trends while preserving weak connections to prior expertise - reaped the greatest benefits. Taken together, our macro- and micro-level findings show that scientific breakthroughs act as mechanisms that reconfigure power structures within a field.
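The abstract measures scientific impact as citation percentile rank. As a rough illustration only (the paper's exact ranking procedure is not specified in this summary), a percentile rank within a cohort of comparable papers might be computed as follows; the cohort definition and the half-rank tie handling are assumptions:

```python
def citation_percentile_rank(citations, cohort):
    """Percentile rank of a paper's citation count within a cohort
    (e.g. all papers published in the same year and field).
    Returns a value in [0, 100]; ties contribute half a rank."""
    below = sum(1 for c in cohort if c < citations)
    ties = sum(1 for c in cohort if c == citations)
    return 100.0 * (below + 0.5 * ties) / len(cohort)

# toy cohort of citation counts
cohort = [0, 1, 3, 3, 10, 42, 120]
print(citation_percentile_rank(10, cohort))
```

Ranking against a publication-year cohort keeps older papers' accumulated citations from dominating the comparison, which is one reason percentile ranks are preferred over raw counts for cross-time comparisons.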
Related papers
- The Stagnant Persistence Paradox: Survival Analysis and Temporal Efficiency in Exact Sciences and Engineering Education [0.0]
This study applies a dual-outcome survival analysis framework to two key outcomes: definitive dropout and first major switch. Results uncover a critical systemic inefficiency: a global median survival time of 4.33 years prior to definitive dropout, with a pronounced long tail of extended enrolment. We argue that academic failure in rigid engineering curricula is not a sudden outcome but a long-tail process that generates high opportunity costs.
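The reported 4.33-year median survival time comes from a survival analysis over censored enrolment data. A minimal sketch of how a Kaplan-Meier median could be extracted (toy data; the study's actual estimator and covariates are not shown here):

```python
def km_median(times, events):
    """Kaplan-Meier median survival time.
    times: observed durations; events: 1 = event occurred (e.g. dropout),
    0 = censored. Returns the first time at which the survival curve
    drops to <= 0.5, or None if it never does."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = n_t = 0
        # group tied observation times
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            n_t += 1
            i += 1
        surv *= 1.0 - d / at_risk   # KM product-limit step
        at_risk -= n_t
        if surv <= 0.5:
            return t
    return None

times = [1, 2, 3, 4, 4, 5, 6, 7]
events = [1, 0, 1, 1, 1, 0, 1, 0]
print(km_median(times, events))
```

Censored observations (events = 0) still count toward the at-risk set until their observed time, which is what distinguishes this from a naive median of the dropout times.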
arXiv Detail & Related papers (2025-12-04T14:11:28Z) - Autonomous Agents for Scientific Discovery: Orchestrating Scientists, Language, Code, and Physics [82.55776608452017]
Large language models (LLMs) provide a flexible and versatile framework that orchestrates interactions with human scientists, natural language, computer language and code, and physics. This paper presents our view and vision of LLM-based scientific agents and their growing role in transforming the scientific discovery lifecycle. We identify open research challenges and outline promising directions for building more robust, generalizable, and adaptive scientific agents.
arXiv Detail & Related papers (2025-10-10T22:26:26Z) - ScienceMeter: Tracking Scientific Knowledge Updates in Language Models [79.33626657942169]
Large Language Models (LLMs) are increasingly used to support scientific research, but their knowledge of scientific advancements can quickly become outdated. We introduce ScienceMeter, a new framework for evaluating scientific knowledge update methods over scientific knowledge spanning the past, present, and future.
arXiv Detail & Related papers (2025-05-30T07:28:20Z) - From Automation to Autonomy: A Survey on Large Language Models in Scientific Discovery [67.07598263346591]
Large Language Models (LLMs) are catalyzing a paradigm shift in scientific discovery. This survey systematically charts this burgeoning field, placing a central focus on the changing roles and escalating capabilities of LLMs in science.
arXiv Detail & Related papers (2025-05-19T15:41:32Z) - Scaling Laws in Scientific Discovery with AI and Robot Scientists [72.3420699173245]
The autonomous generalist scientist (AGS) concept combines agentic AI and embodied robotics to automate the entire research lifecycle. AGS aims to significantly reduce the time and resources needed for scientific discovery. As these autonomous systems become increasingly integrated into the research process, we hypothesize that scientific discovery might adhere to new scaling laws.
arXiv Detail & Related papers (2025-03-28T14:00:27Z) - Adversarial Alignment for LLMs Requires Simpler, Reproducible, and More Measurable Objectives [52.863024096759816]
Misaligned research objectives have hindered progress in adversarial robustness research over the past decade. We argue that realigned objectives are necessary for meaningful progress in adversarial alignment.
arXiv Detail & Related papers (2025-02-17T15:28:40Z) - Context-Aware Reasoning On Parametric Knowledge for Inferring Causal Variables [49.31233968546582]
We introduce a novel benchmark where the objective is to complete a partial causal graph. We show the strong ability of LLMs to hypothesize the backdoor variables between a cause and its effect. Unlike simple memorization of fixed associations, our task requires the LLM to reason according to the context of the entire graph.
arXiv Detail & Related papers (2024-09-04T10:37:44Z) - Contrastive Learning with Adaptive Neighborhoods for Brain Age Prediction on 3D Stiffness Maps [8.14243193774551]
We introduce a novel contrastive loss that adapts dynamically during the training process, focusing on the localized neighborhoods of samples.
This work presents the first application of self-supervised learning to brain mechanical properties, using compiled stiffness maps to predict brain age.
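As a toy illustration of the general idea only (not the paper's actual loss), a contrastive objective whose positive set is a label neighborhood that can be tightened during training might look like this; the function name, temperature, and neighborhood rule are all assumptions:

```python
import math

def neighborhood_contrastive_loss(sims, ages, anchor, tau=0.5, radius=2.0):
    """Toy contrastive loss for a regression target (e.g. brain age).
    Positives are samples whose age lies within `radius` of the anchor's;
    shrinking `radius` over training focuses the loss on ever more local
    neighborhoods. `sims` holds the anchor's similarity to every sample
    (its own entry is ignored)."""
    positives = [j for j in range(len(ages))
                 if j != anchor and abs(ages[j] - ages[anchor]) <= radius]
    if not positives:
        return 0.0
    denom = sum(math.exp(s / tau)
                for j, s in enumerate(sims) if j != anchor)
    # average negative log-probability assigned to the positives
    return -sum(math.log(math.exp(sims[j] / tau) / denom)
                for j in positives) / len(positives)
```

The loss is small when the anchor is most similar to its age neighbors and large when a far-away sample dominates the similarity mass.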
arXiv Detail & Related papers (2024-08-01T12:58:19Z) - The Empirical Impact of Forgetting and Transfer in Continual Visual Odometry [4.704582238028159]
We investigate the impact of catastrophic forgetting and the effectiveness of knowledge transfer in neural networks trained continuously in an embodied setting.
We observe initial satisfactory performance with high transferability between environments, followed by a specialization phase.
These findings emphasize the open challenges of balancing adaptation and memory retention in lifelong robotics.
arXiv Detail & Related papers (2024-06-03T21:32:50Z) - From Protoscience to Epistemic Monoculture: How Benchmarking Set the Stage for the Deep Learning Revolution [0.0]
Our three-part history of AI research traces the creation of this "epistemic monoculture" back to a radical reconceptualization of scientific progress.
We explain how AI's monoculture offers a challenge to the belief that basic, exploration-driven research is needed for scientific progress.
arXiv Detail & Related papers (2024-04-09T22:55:06Z) - Artificial Intelligence for Science in Quantum, Atomistic, and Continuum Systems [268.585904751315]
A new area of research known as AI for science (AI4Science) aims at understanding the physical world from subatomic (wavefunctions and electron density), atomic (molecules, proteins, materials, and interactions), to macro (fluids, climate, and subsurface) scales. A key common challenge is how to capture physics first principles, especially symmetries, in natural systems with deep learning methods.
arXiv Detail & Related papers (2023-07-17T12:14:14Z) - A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why? [84.46288849132634]
We propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques.
We define three variables to encompass diverse facets of the evolution of research topics within NLP.
We utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data.
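A minimal, hypothetical sketch of the kind of conditional-independence check that underlies constraint-based causal discovery (the paper's actual algorithm and variables are not reproduced here): if two variables decorrelate once a third is controlled for, the direct edge between them can be dropped.

```python
import math

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def partial_corr(x, y, z):
    """Correlation of x and y after controlling for z; a value near zero
    suggests x and y are conditionally independent given z."""
    rxy, rxz, ryz = corr(x, y), corr(x, z), corr(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# z drives both x and y, so x and y look correlated until z is controlled for
z = list(range(20))
x = [2 * v + (1 if i % 2 == 0 else -1) for i, v in enumerate(z)]
y = [3 * v + (1 if i % 4 < 2 else -1) for i, v in enumerate(z)]
print(corr(x, y), partial_corr(x, y, z))
```

The marginal correlation is high while the partial correlation collapses toward zero, which is the signature a constraint-based algorithm uses to remove the x-y edge and keep only z as the common cause.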
arXiv Detail & Related papers (2023-05-22T11:08:00Z) - Loss of Plasticity in Continual Deep Reinforcement Learning [14.475963928766134]
We demonstrate that deep RL agents lose their ability to learn good policies when they cycle through a sequence of Atari 2600 games.
We investigate this phenomenon closely at scale and analyze how the weights, gradients, and activations change over time.
Our analysis shows that the activation footprint of the network becomes sparser, contributing to the diminishing gradients.
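One simple proxy for a shrinking activation footprint (a hypothetical diagnostic for illustration, not the paper's exact measurement) is the fraction of post-ReLU units that never fire on a batch:

```python
def dead_unit_fraction(activations):
    """Fraction of units that are zero (post-ReLU) for every sample.
    activations: list of per-sample activation vectors.
    A rising value over training is one symptom of plasticity loss."""
    n_units = len(activations[0])
    dead = sum(
        1 for u in range(n_units)
        if all(sample[u] == 0.0 for sample in activations)
    )
    return dead / n_units

batch = [
    [0.0, 1.2, 0.0, 0.3],
    [0.0, 0.0, 0.0, 0.7],
]
print(dead_unit_fraction(batch))  # units 0 and 2 never fire -> 0.5
```

Dead units receive zero gradient through the ReLU, so a growing dead fraction directly explains the diminishing gradients the analysis reports.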
arXiv Detail & Related papers (2023-03-13T22:37:15Z) - The Introspective Agent: Interdependence of Strategy, Physiology, and Sensing for Embodied Agents [51.94554095091305]
We argue for an introspective agent, which considers its own abilities in the context of its environment.
Just as in nature, we hope to reframe strategy as one tool, among many, to succeed in an environment.
arXiv Detail & Related papers (2022-01-02T20:14:01Z) - Attention: to Better Stand on the Shoulders of Giants [34.5017808610466]
This paper develops an attention mechanism for long-term scientific impact prediction.
It validates the method on a real, large-scale citation data set.
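The paper's specific architecture is not detailed in this summary; as background, the core scaled dot-product attention operation that such impact-prediction models build on can be sketched as:

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention over a sequence of key/value vectors:
    softmax(q.k / sqrt(d)) gives the weights used to mix the values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)                        # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # weighted sum of the value vectors
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]
```

In an impact-prediction setting, the keys and values would typically encode a paper's citation history, letting the model weight the time steps that matter most for long-term impact.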
arXiv Detail & Related papers (2020-05-27T00:25:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.