Characterizing the effect of retractions on publishing careers
- URL: http://arxiv.org/abs/2306.06710v3
- Date: Thu, 27 Feb 2025 08:04:51 GMT
- Authors: Shahan Ali Memon, Kinga Makovi, Bedoor AlShebli,
- Abstract summary: Retracting academic papers may have far-reaching consequences for retracted authors and their careers. Our findings suggest that retractions may impose a disproportionate impact on early-career authors.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Retracting academic papers is a fundamental tool of quality control, but it may have far-reaching consequences for retracted authors and their careers. Previous studies have highlighted the adverse effects of retractions on citation counts and coauthors' citations; however, the broader impacts beyond these have not been fully explored. We address this gap by leveraging Retraction Watch, the most extensive dataset on retractions, and linking it to Microsoft Academic Graph and Altmetric. Retracted authors, particularly those with less experience, often leave scientific publishing in the aftermath of retraction, especially if their retractions attract widespread attention. However, retracted authors who remain active in publishing maintain and establish more collaborations than their comparable non-retracted counterparts. Nevertheless, retracted authors generally retain less senior and less productive coauthors, but gain more impactful coauthors post-retraction. Our findings suggest that retractions may impose a disproportionate impact on early-career authors.
Related papers
- Understanding and Supporting Peer Review Using AI-reframed Positive Summary
This study explored the impact of appending an automatically generated positive summary to the peer reviews of a writing task.
We found that adding an AI-reframed positive summary to otherwise harsh feedback increased authors' critique acceptance.
We discuss the implications of using AI in peer feedback, focusing on how it can influence critique acceptance and support research communities.
arXiv Detail & Related papers (2025-03-13T11:22:12Z)
- Retracted Citations and Self-citations in Retracted Publications: A Comparative Study of Plagiarism and Fake Peer Review
We focused on two retraction categories: plagiarism and fake peer review.
Plagiarism cases show a steady average growth of 1.2 times, while fake peer review exhibits a fluctuating pattern with an average growth of 5.5 times.
The total number of retracted citations for plagiarized papers is 1.8 times higher than that for fake peer review papers.
arXiv Detail & Related papers (2025-02-02T05:05:09Z)
- Are We There Yet? Revealing the Risks of Utilizing Large Language Models in Scholarly Peer Review
Advances in large language models (LLMs) have led to their integration into peer review.
The unchecked adoption of LLMs poses significant risks to the integrity of the peer review system.
We show that manipulating 5% of the reviews could potentially cause 12% of the papers to lose their position in the top 30% rankings.
arXiv Detail & Related papers (2024-12-02T16:55:03Z)
- Using Bibliometrics to Detect Unconventional Authorship Practices and Examine Their Impact on Global Research Metrics, 2019-2023
Between 2019 and 2023, sixteen universities increased their research output by over fifteen times the global average.
This study detected patterns suggesting a reliance on unconventional authorship practices, such as gift, honorary, and sold authorship, to inflate publication metrics.
The study underscores the need for reforms by universities, policymakers, funding agencies, ranking agencies, accreditation bodies, scholarly publishers, and researchers to maintain academic integrity and ensure the reliability of global ranking systems.
arXiv Detail & Related papers (2024-07-07T22:20:34Z)
- ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models
ResearchAgent is a large language model-powered research idea writing agent.
It generates problems, methods, and experiment designs while iteratively refining them based on scientific literature.
We experimentally validate our ResearchAgent on scientific publications across multiple disciplines.
arXiv Detail & Related papers (2024-04-11T13:36:29Z)
- Understanding Fine-grained Distortions in Reports of Scientific Findings
Distorted science communication harms individuals and society as it can lead to unhealthy behavior change and decrease trust in scientific institutions.
Given the rapidly increasing volume of science communication in recent years, a fine-grained understanding of how findings from scientific publications are reported to the general public is crucial.
arXiv Detail & Related papers (2024-02-19T19:00:01Z)
- On the Detection of Reviewer-Author Collusion Rings From Paper Bidding
Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solve this problem would be to detect the colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
arXiv Detail & Related papers (2024-02-12T18:12:09Z)
- Fusion of the Power from Citations: Enhance your Influence by Integrating Information from References
This study formulates a prediction problem: identifying whether a paper will increase its authors' scholarly influence.
Using this framework, scholars can assess whether their papers are likely to improve their future influence.
arXiv Detail & Related papers (2023-10-27T19:51:44Z)
- Estimating the Causal Effect of Early ArXiving on Paper Acceptance
We estimate the effect of arXiving a paper before the reviewing period (early arXiving) on its acceptance to the conference.
Our results suggest that early arXiving may have a small effect on a paper's chances of acceptance.
arXiv Detail & Related papers (2023-06-24T07:45:38Z)
- How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions?
Authors overestimate the acceptance probability of their papers roughly three-fold.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z)
- Geodesics, Non-linearities and the Archive of Novelty Search
We show that a key effect of the archive is that it counterbalances the exploration biases that result from the use of inadequate behavior metrics.
Our observations seem to hint that attributing a more active role to the archive in sampling can be beneficial.
arXiv Detail & Related papers (2022-05-06T12:03:40Z)
- Yes-Yes-Yes: Donation-based Peer Reviewing Data Collection for ACL Rolling Review and Beyond
We present an in-depth discussion of peer reviewing data, outline the ethical and legal desiderata for peer reviewing data collection, and propose the first continuous, donation-based data collection workflow.
We report on the ongoing implementation of this workflow at the ACL Rolling Review and deliver the first insights obtained with the newly collected data.
arXiv Detail & Related papers (2022-01-27T11:02:43Z)
- Dynamics of Cross-Platform Attention to Retracted Papers
Retracted papers circulate widely on social media, digital news and other websites before their official retraction.
We quantify the amount and type of attention 3,851 retracted papers received over time in different online platforms.
arXiv Detail & Related papers (2021-10-15T01:40:20Z)
- A Measure of Research Taste
We present a citation-based measure that rewards both productivity and taste.
The presented measure, CAP, balances the impact of publications and their quantity.
We analyze the characteristics of CAP for highly-cited researchers in biology, computer science, economics, and physics.
arXiv Detail & Related papers (2021-05-17T18:01:47Z)
- Emergence of Structural Inequalities in Scientific Citation Networks
We identify two types of structural inequalities in scientific citations.
First, female authors, who represent a minority of researchers, receive less recognition for their work relative to male authors.
Second, authors affiliated with top-ranked institutions, who are also a minority, receive substantially more recognition compared to other authors.
arXiv Detail & Related papers (2021-03-19T17:53:08Z)
- Early Indicators of Scientific Impact: Predicting Citations with Altmetrics
We use altmetrics to predict the short-term and long-term citations that a scholarly publication could receive.
We build various classification and regression models and evaluate their performance, finding neural networks and ensemble models to perform best for these tasks.
arXiv Detail & Related papers (2020-12-25T16:25:07Z)
- Fighting Copycat Agents in Behavioral Cloning from Observation Histories
Imitation learning trains policies to map from input observations to the actions that an expert would choose.
We propose an adversarial approach to learning a feature representation that removes excess information about the nuisance correlate of previous expert actions.
arXiv Detail & Related papers (2020-10-28T10:52:10Z)
- ArXiving Before Submission Helps Everyone
We analyze the pros and cons of arXiving papers.
We see no reason why anyone but the authors should decide whether to arXiv a paper.
arXiv Detail & Related papers (2020-10-11T22:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.