Generating Summaries for Scientific Paper Review
- URL: http://arxiv.org/abs/2109.14059v1
- Date: Tue, 28 Sep 2021 21:43:53 GMT
- Title: Generating Summaries for Scientific Paper Review
- Authors: Ana Sabina Uban, Cornelia Caragea
- Abstract summary: The increase in submissions to top venues in machine learning and NLP has placed an excessive burden on reviewers.
An automatic system for assisting with the reviewing process could help ameliorate the problem.
In this paper, we explore automatic review summary generation for scientific papers.
- Score: 29.12631698162247
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The review process is essential to ensure the quality of publications.
Recently, the increase in submissions to top venues in machine learning and
NLP has placed an excessive burden on reviewers, raising concerns that this
overload may also affect the quality of the reviews. An automatic system for
assisting with the reviewing process could help ameliorate the problem. In this
paper, we explore automatic review summary generation for scientific papers. We
posit that neural language models have the potential to be valuable candidates
for this task. In order to test this hypothesis, we release a new dataset of
scientific papers and their reviews, collected from papers published in the
NeurIPS conference from 2013 to 2020. We evaluate state-of-the-art neural
summarization models, present initial results on the feasibility of automatic
review summary generation, and propose directions for the future.
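As a rough illustration of the kind of system evaluated in the paper, the sketch below runs an off-the-shelf abstractive summarizer over a paper's text to produce a review-style summary. It assumes the Hugging Face transformers library and a generic pretrained checkpoint (facebook/bart-large-cnn); the model, input, and generation settings are illustrative assumptions, not the paper's own pipeline or dataset.

```python
# Hedged sketch: produce a first-pass "review summary" of a paper with a
# generic pretrained abstractive summarizer. Checkpoint and hyperparameters
# are illustrative assumptions, not the models evaluated in the paper.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

paper_text = (
    "The review process is essential to ensure the quality of publications. "
    "We explore automatic review summary generation for scientific papers "
    "using a new dataset of NeurIPS papers and their reviews (2013-2020)."
)

# BART accepts roughly 1024 tokens, so a full paper would need to be
# truncated or summarized section by section before this call.
result = summarizer(paper_text, max_length=60, min_length=20, truncation=True)
print(result[0]["summary_text"])
```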
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings (a minimal sketch of this calibration appears after this list).
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
- RelevAI-Reviewer: A Benchmark on AI Reviewers for Survey Paper Relevance [0.8089605035945486]
We propose RelevAI-Reviewer, an automatic system that conceptualizes the task of survey paper review as a classification problem.
We introduce a novel dataset comprising 25,164 instances. Each instance contains one prompt and four candidate papers, each varying in relevance to the prompt.
We develop a machine learning (ML) model capable of determining the relevance of each paper and identifying the most pertinent one.
arXiv Detail & Related papers (2024-06-13T06:42:32Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Automatic Analysis of Substantiation in Scientific Peer Reviews [24.422667012858298]
SubstanReview consists of 550 reviews from NLP conferences annotated by domain experts.
On the basis of this dataset, we train an argument mining system to automatically analyze the level of substantiation in peer reviews.
We also perform data analysis on the SubstanReview dataset to obtain meaningful insights on peer reviewing quality in NLP conferences over recent years.
arXiv Detail & Related papers (2023-11-20T17:47:37Z)
- No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning [25.70062566419791]
We show that this automation can be manipulated using adversarial learning.
We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers.
arXiv Detail & Related papers (2023-03-25T11:34:27Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast it as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Can We Automate Scientific Reviewing? [89.50052670307434]
We discuss the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers.
We collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews.
Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews.
arXiv Detail & Related papers (2021-01-30T07:16:53Z)
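The Isotonic Mechanism referenced in the ICML 2023 entry above can be read as a least-squares projection of the raw review scores onto the set of score vectors consistent with the author-provided ranking, i.e., an isotonic regression along that ranking. The sketch below uses scikit-learn and made-up numbers; it is a simplified illustration, not the actual ICML 2023 experiment pipeline.

```python
# Hedged sketch of score calibration in the style of the Isotonic Mechanism:
# project raw review scores onto scores that respect the author's ranking.
# All numbers are made up for illustration.
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw_scores = np.array([5.5, 6.8, 4.0, 6.0])  # mean reviewer score per paper
author_ranking = [1, 0, 3, 2]                # author's order, best paper first

# Re-order scores so index 0 is the author's top-ranked paper, then require
# the calibrated scores to be non-increasing along that order.
ordered = raw_scores[author_ranking]
iso = IsotonicRegression(increasing=False)
calibrated_ordered = iso.fit_transform(np.arange(len(ordered)), ordered)

# Map the calibrated scores back to the original paper indices.
calibrated = np.empty_like(raw_scores)
calibrated[author_ranking] = calibrated_ordered
print(calibrated)  # e.g. [5.75 6.8  4.0  5.75]: ties where scores conflicted
```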
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.