Automatic Reviewers Assignment to a Research Paper Based on Allied References and Publications Weight
- URL: http://arxiv.org/abs/2506.21331v1
- Date: Thu, 26 Jun 2025 14:44:06 GMT
- Title: Automatic Reviewers Assignment to a Research Paper Based on Allied References and Publications Weight
- Authors: Tamim Al Mahmud, B M Mainul Hossain, Dilshad Ara
- Abstract summary: We propose and implement a program that uses a new strategy to automatically select the best reviewers for a research paper. First, we collect the references and count the authors who have at least one paper among the references. Next, we search for top researchers in the specific topic and count the h-index, i10-index, and citations of the first n authors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Every day, a vast stream of research documents is submitted to conferences, anthologies, journals, newsletters, annual reports, daily papers, and various periodicals. Many such publications use independent external specialists to review submissions. This process is called peer review, and the reviewers are called referees. However, it is not always possible to pick the best referee for a review. Moreover, new research fields are emerging in every sector, and the number of research papers is increasing dramatically. To review all these papers, every journal assigns a small team of referees who may not be experts in all areas. For example, a research paper in communication technology should be reviewed by an expert from the same field. Thus, efficiently selecting the best reviewer or referee for a research paper is a major challenge. In this research, we propose and implement a program that uses a new strategy to automatically select the best reviewers for a research paper. Every research paper contains references at the end, usually from the same area. First, we collect the references and count the authors who have at least one paper among the references. Then, we automatically browse the web to extract research topic keywords. Next, we search for top researchers in the specific topic and count the h-index, i10-index, and citations of the first n authors. Afterward, we rank the top n authors by a score and automatically browse their homepages to retrieve email addresses. We also check their co-authors and colleagues online and discard them from the list. The remaining top n authors, generally professors, are likely the best referees for reviewing the research paper.
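The selection pipeline described in the abstract (count reference authors, score candidates by h-index, i10-index, and citations, then discard conflicting co-authors) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the input shapes, and the particular score weights are assumptions, since the abstract does not fix the exact scoring formula.

```python
from collections import Counter

def rank_candidate_reviewers(reference_authors, metrics, conflicts, n=5):
    """Rank candidate reviewers for a submission.

    reference_authors: author names gathered from the submission's
        reference list (one entry per referenced paper an author wrote).
    metrics: dict mapping name -> (h_index, i10_index, citations),
        as would be scraped from a scholarly profile.
    conflicts: set of names to exclude (co-authors and colleagues
        of the submitting authors).
    n: number of top-ranked reviewers to return.
    """
    # Step 1: count how many referenced papers each author contributed to.
    ref_counts = Counter(reference_authors)

    # Step 2: combine reference presence with bibliometric indicators.
    # The weights below are illustrative placeholders only.
    scores = {}
    for name, count in ref_counts.items():
        h, i10, cites = metrics.get(name, (0, 0, 0))
        scores[name] = count * (h + 0.5 * i10) + 0.001 * cites

    # Step 3: drop conflicted candidates, then keep the top n by score.
    ranked = sorted(
        (name for name in scores if name not in conflicts),
        key=lambda name: scores[name],
        reverse=True,
    )
    return ranked[:n]
```

For example, an author who appears twice in the references with a high h-index would outrank a singly-cited author with modest metrics, and a candidate flagged as a co-author would be removed regardless of score.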
Related papers
- PRISM: Fine-Grained Paper-to-Paper Retrieval with Multi-Aspect-Aware Query Optimization [61.783280234747394]
PRISM is a document-to-document retrieval method that introduces multiple, fine-grained representations for both the query and candidate papers.
We present SciFullBench, a novel benchmark in which the complete and segmented context of full papers for both queries and candidates is available.
Experiments show that PRISM improves performance by an average of 4.3% over existing retrieval baselines.
arXiv Detail & Related papers (2025-07-14T08:41:53Z) - Rs4rs: Semantically Find Recent Publications from Top Recommendation System-Related Venues [0.2812395851874055]
Rs4rs is a web application designed to perform semantic search on recent papers from top conferences and journals related to Recommender Systems.
Rs4rs addresses these issues by providing a user-friendly platform where researchers can input their topic of interest and receive a list of recent, relevant papers from top Recommender Systems venues.
arXiv Detail & Related papers (2024-09-09T12:53:06Z) - What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [55.33653554387953]
Pattern Analysis and Machine Intelligence (PAMI) has led to numerous literature reviews aimed at collecting fragmented information.
This paper presents a thorough analysis of these literature reviews within the PAMI field.
We address three core research questions: (1) What are the prevalent structural and statistical characteristics of PAMI literature reviews? (2) What strategies can researchers employ to efficiently navigate the growing corpus of reviews? (3) What are the advantages and limitations of AI-generated reviews compared to human-authored ones?
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - Tag-Aware Document Representation for Research Paper Recommendation [68.8204255655161]
We propose a hybrid approach that leverages deep semantic representation of research papers based on social tags assigned by users.
The proposed model is effective in recommending research papers even when the rating data is very sparse.
arXiv Detail & Related papers (2022-09-08T09:13:07Z) - Tell Me How to Survey: Literature Review Made Simple with Automatic Reading Path Generation [16.07200776251764]
Gleaning papers worth reading from the massive literature, whether for a quick survey or to keep up with the latest advances on a specific research topic, has become a challenging task.
Existing academic search engines such as Google Scholar return relevant papers by individually calculating the relevance between each paper and query.
We introduce Reading Path Generation (RPG) which aims at automatically producing a path of papers to read for a given query.
arXiv Detail & Related papers (2021-10-12T20:58:46Z) - Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z) - Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z) - Automatic generation of reviews of scientific papers [1.1999555634662633]
We present a method for the automatic generation of a review paper corresponding to a user-defined query.
The first part identifies key papers in the area by their bibliometric parameters, such as a graph of co-citations.
The second stage uses a BERT based architecture that we train on existing reviews for extractive summarization of these key papers.
arXiv Detail & Related papers (2020-10-08T17:47:07Z) - Understanding Peer Review of Software Engineering Papers [5.744593856232663]
We aim to understand how reviewers, including those who have won awards for reviewing, perform their reviews of software engineering papers.
The most important features of papers that result in positive reviews are clear and supported validation, an interesting problem, and novelty.
Authors should make the contribution of the work very clear in their paper.
arXiv Detail & Related papers (2020-09-02T17:31:45Z) - From Standard Summarization to New Tasks and Beyond: Summarization with Manifold Information [77.89755281215079]
Text summarization is the research area aiming at creating a short and condensed version of the original document.
In real-world applications, most of the data is not in a plain text format.
This paper surveys these new summarization tasks and approaches in real-world applications.
arXiv Detail & Related papers (2020-05-10T14:59:36Z) - A Correspondence Analysis Framework for Author-Conference Recommendations [2.1055643409860743]
We use Correspondence Analysis (CA) to derive appropriate relationships between the entities in question, such as conferences and papers.
Our models show promising results when compared with existing methods such as content-based filtering, collaborative filtering and hybrid filtering.
arXiv Detail & Related papers (2020-01-08T18:52:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.