"Why Should I Review This Paper?" Unifying Semantic, Topic, and Citation
Factors for Paper-Reviewer Matching
- URL: http://arxiv.org/abs/2310.14483v1
- Date: Mon, 23 Oct 2023 01:29:18 GMT
- Title: "Why Should I Review This Paper?" Unifying Semantic, Topic, and Citation
Factors for Paper-Reviewer Matching
- Authors: Yu Zhang, Yanzhen Shen, Xiusi Chen, Bowen Jin, Jiawei Han
- Abstract summary: We propose a unified model for paper-reviewer matching that jointly captures semantic, topic, and citation factors.
Experiments on four datasets consistently validate our proposed UniPR model in comparison with state-of-the-art paper-reviewer matching methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As many academic conferences are overwhelmed by a rapidly increasing number
of paper submissions, automatically finding appropriate reviewers for each
submission becomes a more urgent need than ever. Various factors have been
considered by previous attempts at this task to measure the expertise relevance
between a paper and a reviewer, including whether the paper is semantically
close to, shares topics with, and cites previous papers of the reviewer.
However, most previous studies take only one of these factors into account,
leading to an incomplete evaluation of paper-reviewer relevance.
To bridge this gap, in this paper, we propose a unified model for
paper-reviewer matching that jointly captures semantic, topic, and citation
factors. In the unified model, a contextualized language model backbone is
shared by all factors to learn common knowledge, while instruction tuning is
introduced to characterize the uniqueness of each factor by producing
factor-aware paper embeddings. Experiments on four datasets (one of which is
newly contributed by us) across different fields, including machine learning,
computer vision, information retrieval, and data mining, consistently validate
the effectiveness of our proposed UniPR model in comparison with
state-of-the-art paper-reviewer matching methods and scientific pre-trained
language models.
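The architecture described in the abstract, a shared encoder that produces factor-aware paper embeddings via per-factor instructions, with the per-factor similarities then combined into one relevance score, can be sketched as follows. This is a toy illustration only: the hashed bag-of-words encoder, the instruction wording, and all function names are assumptions for exposition, not the actual UniPR implementation (which uses a contextualized language model backbone with instruction tuning).

```python
# Toy sketch of the unified paper-reviewer matching idea: one shared encoder,
# a per-factor instruction prefix producing factor-aware embeddings, and a
# relevance score aggregating the three factors. The hash-based encoder below
# is a stand-in for the paper's contextualized language model.
import hashlib
import math

FACTORS = ["semantic", "topic", "citation"]
DIM = 64

def embed(text: str, factor: str) -> list[float]:
    """Shared encoder + instruction prefix -> a factor-aware embedding."""
    instruction = f"Represent the paper for {factor} matching: "
    vec = [0.0] * DIM
    for token in (instruction + text).lower().split():
        # Bucket each token into a fixed-size vector via a stable hash.
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already L2-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

def relevance(paper: str, reviewer_papers: list[str]) -> float:
    """Average, over factors, of the best match among the reviewer's papers."""
    scores = []
    for factor in FACTORS:
        p = embed(paper, factor)
        best = max(cosine(p, embed(r, factor)) for r in reviewer_papers)
        scores.append(best)
    return sum(scores) / len(FACTORS)
```

A reviewer whose publication list contains the submission itself scores 1.0, and unrelated publication lists score strictly lower, which is the qualitative behavior any of the three factors should exhibit.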
Related papers
- RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance (arXiv, 2024-06-13)
We propose RelevAI-Reviewer, an automatic system that conceptualizes the task of survey paper review as a classification problem.
We introduce a novel dataset comprising 25,164 instances; each instance contains one prompt and four candidate papers, each varying in relevance to the prompt.
We develop a machine learning (ML) model capable of determining the relevance of each paper and identifying the most pertinent one.
- Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions (arXiv, 2024-06-09)
Large Language Models (LLMs) have demonstrated wide-ranging applications across various fields.
We reformulate the peer-review process as a multi-turn, long-context dialogue, incorporating distinct roles for authors, reviewers, and decision makers.
We construct a comprehensive dataset containing over 26,841 papers with 92,017 reviews collected from multiple sources.
- Explaining Relationships Among Research Papers (arXiv, 2024-02-20)
We propose a feature-based, LLM-prompting approach to generate richer citation texts.
We find a strong correlation between human preference and an integrative writing style, suggesting that humans prefer high-level, abstract citations.
- CausalCite: A Causal Formulation of Paper Citations (arXiv, 2023-11-05)
CausalCite is a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers.
It is based on a novel causal inference method, TextMatch, which adapts the traditional matching framework to high-dimensional text embeddings.
We demonstrate the effectiveness of CausalCite on various criteria, such as its high correlation with paper impact as reported by scientific experts.
- Fusion of the Power from Citations: Enhance your Influence by Integrating Information from References (arXiv, 2023-10-27)
This study formulates a prediction problem: whether a given paper can increase its authors' scholarly influence.
By applying the framework in this work, scholars can identify whether their papers will improve their influence in the future.
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach (arXiv, 2022-11-07)
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities across multiple protected attributes of interest, including author gender, geography, and author and institutional prestige.
- Tag-Aware Document Representation for Research Paper Recommendation (arXiv, 2022-09-08)
We propose a hybrid approach that leverages deep semantic representations of research papers based on social tags assigned by users.
The proposed model is effective in recommending research papers even when the rating data is very sparse.
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review (arXiv, 2022-04-22)
Peer review is a key component of the publishing process in most fields of science.
While existing NLP studies focus on the analysis of individual texts, editorial assistance often requires modeling the interactions between pairs of texts.
- What's New? Summarizing Contributions in Scientific Literature (arXiv, 2020-11-06)
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for a paper's contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol that reports the relevance, novelty, and disentanglement of generated outputs.
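The matching idea summarized in the CausalCite entry above, estimating a treated paper's effect by comparing its outcome with the outcomes of its most textually similar "control" papers, can be sketched as follows. The embeddings, outcomes, and function names are illustrative toys, not the paper's actual TextMatch implementation.

```python
# Hedged sketch of matching over text embeddings: rank candidate control
# papers by cosine similarity to the treated paper, take the k nearest, and
# report the difference between the treated outcome and their mean outcome.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def matched_effect(treated_emb: list[float], treated_outcome: float,
                   controls: list[tuple[list[float], float]], k: int = 2) -> float:
    """Outcome difference against the k nearest controls by text similarity.

    controls: list of (embedding, outcome) pairs for comparable papers.
    """
    ranked = sorted(controls, key=lambda c: cosine(treated_emb, c[0]), reverse=True)
    nearest = ranked[:k]
    counterfactual = sum(outcome for _, outcome in nearest) / len(nearest)
    return treated_outcome - counterfactual
```

The key design choice the entry highlights is that "similar" is defined in a high-dimensional text-embedding space rather than over hand-picked covariates, so dissimilar papers (here, the third control) are excluded from the counterfactual estimate.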
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.