Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review
- URL: http://arxiv.org/abs/2204.10805v1
- Date: Fri, 22 Apr 2022 16:39:38 GMT
- Title: Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review
- Authors: Ilia Kuznetsov, Jan Buchmann, Max Eichler, Iryna Gurevych
- Abstract summary: Peer review is a key component of the publishing process in most fields of science.
While existing NLP studies focus on the analysis of individual texts, editorial assistance often requires modeling interactions between pairs of texts.
- Score: 52.359007622096684
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Peer review is a key component of the publishing process in most fields of
science. The increasing submission rates put a strain on reviewing quality and
efficiency, motivating the development of applications to support the reviewing
and editorial work. While existing NLP studies focus on the analysis of
individual texts, editorial assistance often requires modeling interactions
between pairs of texts -- yet general frameworks and datasets to support this
scenario are missing. Relationships between texts are the core object of the
intertextuality theory -- a family of approaches in literary studies not yet
operationalized in NLP. Inspired by prior theoretical work, we propose the
first intertextual model of text-based collaboration, which encompasses three
major phenomena that make up a full iteration of the review-revise-and-resubmit
cycle: pragmatic tagging, linking and long-document version alignment. While
peer review is used across the fields of science and publication formats,
existing datasets solely focus on conference-style review in computer science.
Addressing this, we instantiate our proposed model in the first annotated
multi-domain corpus in journal-style post-publication open peer review, and
provide detailed insights into the practical aspects of intertextual
annotation. Our resource is a major step towards multi-domain, fine-grained
applications of NLP in editorial support for peer review, and our intertextual
framework paves the path for general-purpose modeling of text-based
collaboration.
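The abstract names three phenomena that make up one review-revise-and-resubmit iteration: pragmatic tagging of review sentences, linking review statements to paper spans, and aligning spans across document versions. A minimal sketch of how these three annotation layers could be represented as data structures is shown below; all class and field names are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical sketch of the three intertextual annotation layers.
# Names and structures are illustrative, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class ReviewSentence:
    text: str
    pragmatic_tag: str  # e.g. "request", "criticism", "praise"

@dataclass
class Link:
    review_sentence_idx: int   # index into the review's sentence list
    paper_span: tuple          # (start, end) character offsets in the draft

@dataclass
class VersionAlignment:
    old_span: tuple  # span in the submitted draft
    new_span: tuple  # corresponding span in the revised draft

@dataclass
class ReviewRevisionCycle:
    """One full review-revise-and-resubmit iteration."""
    review: list = field(default_factory=list)      # [ReviewSentence]
    links: list = field(default_factory=list)       # [Link]
    alignments: list = field(default_factory=list)  # [VersionAlignment]

cycle = ReviewRevisionCycle(
    review=[ReviewSentence("Please clarify Section 3.", "request")],
    links=[Link(0, (120, 180))],
    alignments=[VersionAlignment((120, 180), (130, 210))],
)
```

Under this sketch, a revision can be traced end to end: a tagged review sentence links to a span in the old draft, and the alignment maps that span to its rewritten form in the new draft.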
Related papers
- Re3: A Holistic Framework and Dataset for Modeling Collaborative Document Revision [62.12545440385489]
We introduce Re3, a framework for joint analysis of collaborative document revision.
We present Re3-Sci, a large corpus of aligned scientific paper revisions manually labeled according to their action and intent.
We use the new data to provide first empirical insights into collaborative document revision in the academic domain.
arXiv Detail & Related papers (2024-05-31T21:19:09Z)
- Chain-of-Factors Paper-Reviewer Matching [32.86512592730291]
We propose a unified model for paper-reviewer matching that jointly considers semantic, topic, and citation factors.
We demonstrate the effectiveness of our proposed Chain-of-Factors model in comparison with state-of-the-art paper-reviewer matching methods and scientific pre-trained language models.
arXiv Detail & Related papers (2023-10-23T01:29:18Z)
- NLPeer: A Unified Resource for the Computational Study of Peer Review [58.71736531356398]
We introduce NLPeer -- the first ethically sourced multidomain corpus of more than 5k papers and 11k review reports from five different venues.
We augment previous peer review datasets to include parsed and structured paper representations, rich metadata and versioning information.
Our work paves the path towards systematic, multi-faceted, evidence-based study of peer review in NLP and beyond.
arXiv Detail & Related papers (2022-11-12T12:29:38Z)
- An Inclusive Notion of Text [69.36678873492373]
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
arXiv Detail & Related papers (2022-11-10T14:26:43Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Target-aware Abstractive Related Work Generation with Contrastive Learning [48.02845973891943]
The related work section is an important component of a scientific paper, which highlights the contribution of the target paper in the context of the reference papers.
Most of the existing related work section generation methods rely on extracting off-the-shelf sentences.
We propose an abstractive target-aware related work generator (TAG), which can generate related work sections consisting of new sentences.
arXiv Detail & Related papers (2022-05-26T13:20:51Z)
- What's New? Summarizing Contributions in Scientific Literature [85.95906677964815]
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol which reports the relevance, novelty, and disentanglement of generated outputs.
arXiv Detail & Related papers (2020-11-06T02:23:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.