COMPARE: A Taxonomy and Dataset of Comparison Discussions in Peer Reviews
- URL: http://arxiv.org/abs/2108.04366v1
- Date: Mon, 9 Aug 2021 21:24:28 GMT
- Title: COMPARE: A Taxonomy and Dataset of Comparison Discussions in Peer Reviews
- Authors: Shruti Singh, Mayank Singh and Pawan Goyal
- Abstract summary: We present a dataset of comparison discussions in peer reviews of research papers in the domain of experimental deep learning.
We build a taxonomy of categories in comparison discussions and present a detailed annotation scheme to analyze this.
Overall, we annotate 117 reviews covering 1,800 sentences and report a maximum F1 Score of 0.49.
- Score: 9.838034994804124
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Comparing research papers is a conventional method to demonstrate progress in
experimental research. We present COMPARE, a taxonomy and a dataset of
comparison discussions in peer reviews of research papers in the domain of
experimental deep learning. From a thorough observation of a large set of
review sentences, we build a taxonomy of categories in comparison discussions
and present a detailed annotation scheme to analyze this. Overall, we annotate
117 reviews covering 1,800 sentences. We experiment with various methods to
identify comparison sentences in peer reviews and report a maximum F1 Score of
0.49. We also pretrain two language models specifically on ML, NLP, and CV
paper abstracts and reviews to learn informative representations of peer
reviews. The annotated dataset and the pretrained models are available at
https://github.com/shruti-singh/COMPARE .
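The core task the abstract describes, identifying which review sentences make comparisons and scoring the result with F1, can be illustrated with a toy lexical-cue baseline. This is a minimal sketch for illustration only: the cue list, example sentences, and labels below are invented and are not drawn from the COMPARE dataset or the paper's methods.

```python
# Toy baseline for comparison-sentence identification in peer reviews.
# A sentence is flagged as a comparison if it contains a lexical cue.
COMPARISON_CUES = ("compared to", "outperforms", "baseline",
                   "state-of-the-art", "better than")

def is_comparison(sentence: str) -> bool:
    """Flag a review sentence as a comparison via simple cue matching."""
    s = sentence.lower()
    return any(cue in s for cue in COMPARISON_CUES)

def f1_score(gold, pred):
    """Binary F1: harmonic mean of precision and recall."""
    tp = sum(g and p for g, p in zip(gold, pred))
    fp = sum((not g) and p for g, p in zip(gold, pred))
    fn = sum(g and (not p) for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Invented example review sentences with invented gold labels.
sentences = [
    "The proposed model outperforms the baseline by 2 points.",
    "The paper is well written and easy to follow.",
    "Results should be compared to prior state-of-the-art work.",
    "The related work section is missing recent comparisons to GANs.",
]
gold = [True, False, True, True]
pred = [is_comparison(s) for s in sentences]
print(round(f1_score(gold, pred), 2))  # the last comparison lacks a cue, so recall suffers
```

Such cue-based baselines are typically easy to beat with learned classifiers, which is consistent with the paper's use of pretrained language models for review representations.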
Related papers
- GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews [25.291384842659397]
We introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews.
Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews.
arXiv Detail & Related papers (2024-06-11T15:27:01Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - A Bibliographic View on Constrained Clustering [4.705291741591329]
This paper presents general trends of the constrained clustering area and its sub-topics.
We list available software and analyse the experimental sections of our reference collection.
Among the topics we reviewed, application studies have been the most abundant recently.
arXiv Detail & Related papers (2022-09-22T16:11:47Z)
- Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design [76.40919326501512]
We consider the question: how should reviewers be divided between phases or conditions in order to maximize total assignment similarity?
We empirically show that across several datasets pertaining to real conference data, dividing reviewers between phases/conditions uniformly at random allows an assignment that is nearly as good as the oracle optimal assignment.
arXiv Detail & Related papers (2021-08-13T19:29:41Z)
- ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining [61.82562838486632]
We crowdsource four new datasets on diverse online conversation forms of news comments, discussion forums, community question answering forums, and email threads.
We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.
arXiv Detail & Related papers (2021-06-01T22:17:13Z)
- Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation [81.55533657694016]
We propose a Hierarchical bi-directional self-attention Network framework (HabNet) for paper review rating prediction and recommendation.
Specifically, we leverage the hierarchical structure of the paper reviews with three levels of encoders: a sentence encoder (level one), an intra-review encoder (level two), and an inter-review encoder (level three).
We are able to identify useful predictors to make the final acceptance decision, as well as to help discover the inconsistency between numerical review ratings and text sentiment conveyed by reviewers.
arXiv Detail & Related papers (2020-11-02T08:07:50Z)
- A Disentangled Adversarial Neural Topic Model for Separating Opinions from Plots in User Reviews [35.802290746473524]
We propose a neural topic model combined with adversarial training to disentangle opinion topics from plot and neutral ones.
We conduct an experimental assessment introducing a new collection of movie and book reviews paired with their plots.
The model shows improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models.
arXiv Detail & Related papers (2020-10-22T02:15:13Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.