Characterising authors on the extent of their paper acceptance: A case
study of the Journal of High Energy Physics
- URL: http://arxiv.org/abs/2006.06928v1
- Date: Fri, 12 Jun 2020 03:26:25 GMT
- Authors: Rima Hazra and Aryan and Hardik Aggarwal and Matteo Marsili and
Animesh Mukherjee
- Abstract summary: We investigate the profiles and peer-review text of authors whose papers are almost always accepted at a venue.
Authors with a high acceptance rate are likely to have a high number of citations, a high $h$-index, a larger number of collaborators, etc.
- Score: 4.402336973466853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: New researchers are usually very curious about the recipe that could
accelerate the chances of their paper being accepted at a reputed forum
(journal/conference). In search of such a recipe, we investigate the profiles
and peer-review text of authors whose papers are almost always accepted at a
venue (the Journal of High Energy Physics in our current work). We find that
authors with a high acceptance rate are likely to have a high number of
citations, a high $h$-index, a larger number of collaborators, etc. We notice
that they receive relatively lengthy and positive reviews for their papers. In
addition, we construct three networks -- co-reviewer, co-citation, and
collaboration -- and study network-centric features and intra- and
inter-category edge interactions. We find that authors with a high acceptance
rate are more `central' in these networks; the volume of intra- and
inter-category interactions also differs drastically for authors with a high
acceptance rate compared to the other authors. Finally, using the above set of
features, we train standard machine learning models (random forest, XGBoost)
and obtain very high class-wise precision and recall. In a follow-up
discussion, we also describe how, apart from author characteristics, the
peer-review system itself might play a role in widening the distinction among
the different categories, which could lead to potential discrimination and
unfairness and calls for further investigation by the system administrators.
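The pipeline the abstract describes (build an author network, extract network-centric features such as centrality, then train a standard classifier and report class-wise precision and recall) can be sketched as follows. This is a minimal illustration, not the paper's actual code: the graph is random, the "high acceptance rate" label is synthetic, and the choice of degree centrality and PageRank as features is an assumption made only for the example.

```python
# Hypothetical sketch of the abstract's pipeline; all data below is synthetic.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy stand-in for one of the paper's networks (e.g. collaboration):
# 200 authors as nodes, random edges.
G = nx.gnp_random_graph(200, 0.05, seed=0)

# Network-centric features per author (assumed choices for illustration).
degree = nx.degree_centrality(G)
pagerank = nx.pagerank(G)
X = np.array([[degree[n], pagerank[n]] for n in G.nodes])

# Synthetic binary label standing in for "high acceptance rate",
# deliberately correlated with centrality so the example is learnable.
y = (X[:, 0] + rng.normal(0, 0.01, len(X)) > np.median(X[:, 0])).astype(int)

# Train a standard model and report class-wise precision/recall,
# as the abstract describes.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```

The same skeleton extends naturally to the paper's other networks (co-reviewer, co-citation) by adding their centrality scores as extra feature columns, and to XGBoost by swapping in a different estimator.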
Related papers
- RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance [0.8089605035945486]
We propose RelevAI-Reviewer, an automatic system that conceptualizes the task of survey paper review as a classification problem.
We introduce a novel dataset comprised of 25,164 instances. Each instance contains one prompt and four candidate papers, each varying in relevance to the prompt.
We develop a machine learning (ML) model capable of determining the relevance of each paper and identifying the most pertinent one.
arXiv Detail & Related papers (2024-06-13T06:42:32Z) - On the Detection of Reviewer-Author Collusion Rings From Paper Bidding [71.43634536456844]
Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solving this problem would be to detect the colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
arXiv Detail & Related papers (2024-02-12T18:12:09Z) - Chain-of-Factors Paper-Reviewer Matching [32.86512592730291]
We propose a unified model for paper-reviewer matching that jointly considers semantic, topic, and citation factors.
We demonstrate the effectiveness of our proposed Chain-of-Factors model in comparison with state-of-the-art paper-reviewer matching methods and scientific pre-trained language models.
arXiv Detail & Related papers (2023-10-23T01:29:18Z) - The Semantic Reader Project: Augmenting Scholarly Documents through
AI-Powered Interactive Reading Interfaces [54.2590226904332]
We describe the Semantic Reader Project, an effort across multiple institutions to explore the automatic creation of dynamic reading interfaces for research papers.
Ten prototype interfaces have been developed, and studies with more than 300 participants and real-world users have shown improved reading experiences.
We structure this paper around challenges scholars and the public face when reading research papers.
arXiv Detail & Related papers (2023-03-25T02:47:09Z) - How do Authors' Perceptions of their Papers Compare with Co-authors'
Perceptions and Peer-review Decisions? [87.00095008723181]
Authors have roughly a three-fold overestimate of the acceptance probability of their papers.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z) - Cracking Double-Blind Review: Authorship Attribution with Deep Learning [43.483063713471935]
We propose a transformer-based, neural-network architecture to attribute an anonymous manuscript to an author.
We leverage all research papers publicly available on arXiv amounting to over 2 million manuscripts.
Our method achieves an unprecedented authorship attribution accuracy, where up to 73% of papers are attributed correctly.
arXiv Detail & Related papers (2022-11-14T15:50:24Z) - Tag-Aware Document Representation for Research Paper Recommendation [68.8204255655161]
We propose a hybrid approach that leverages deep semantic representation of research papers based on social tags assigned by users.
The proposed model is effective in recommending research papers even when the rating data is very sparse.
arXiv Detail & Related papers (2022-09-08T09:13:07Z) - LG4AV: Combining Language Models and Graph Neural Networks for Author
Verification [0.11421942894219898]
We present our novel approach LG4AV which combines language models and graph neural networks for authorship verification.
By directly feeding the available texts in a pre-trained transformer architecture, our model does not need any hand-crafted stylometric features.
Our model can benefit from relations between authors that are meaningful with respect to the verification process.
arXiv Detail & Related papers (2021-09-03T12:45:28Z) - Bridger: Toward Bursting Scientific Filter Bubbles and Boosting
Innovation via Novel Author Discovery [22.839876884227536]
Bridger is a system for facilitating discovery of scholars and their work.
We construct a faceted representation of authors using information extracted from their papers and inferred personas.
We develop an approach that locates commonalities and contrasts between scientists.
arXiv Detail & Related papers (2021-08-12T11:24:23Z) - Author Clustering and Topic Estimation for Short Texts [69.54017251622211]
We propose a novel model that expands on the Latent Dirichlet Allocation by modeling strong dependence among the words in the same document.
We also simultaneously cluster users, removing the need for post-hoc cluster estimation.
Our method performs as well as -- or better -- than traditional approaches to problems arising in short text.
arXiv Detail & Related papers (2021-06-15T20:55:55Z) - How to Train Your Agent to Read and Write [52.24605794920856]
Reading and writing research papers is one of the most essential skills that a qualified researcher should master.
It would be fascinating if we could train an intelligent agent to help people read and summarize papers, and perhaps even discover and exploit the potential knowledge clues to write novel papers.
We propose a Deep ReAder-Writer (DRAW) network, which consists of a Reader that can extract knowledge graphs (KGs) from input paragraphs and discover potential knowledge, a graph-to-text Writer that generates a novel paragraph, and a
arXiv Detail & Related papers (2021-01-04T12:22:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.