Argument Mining Driven Analysis of Peer-Reviews
- URL: http://arxiv.org/abs/2012.07743v1
- Date: Thu, 10 Dec 2020 16:06:21 GMT
- Title: Argument Mining Driven Analysis of Peer-Reviews
- Authors: Michael Fromm, Evgeniy Faerman, Max Berrendorf, Siddharth Bhargava,
Ruoxia Qi, Yao Zhang, Lukas Dennert, Sophia Selle, Yang Mao, Thomas Seidl
- Abstract summary: We propose an Argument Mining based approach for the assistance of editors, meta-reviewers, and reviewers.
One of our findings is that arguments used in the peer-review process differ from arguments in other domains, making the transfer of pre-trained models difficult.
We provide the community with a new peer-review dataset from different computer science conferences with annotated arguments.
- Score: 4.552676857046446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Peer reviewing is a central process in modern research and essential for
ensuring high quality and reliability of published work. At the same time, it
is a time-consuming process, and increasing interest in emerging fields often
results in a high review workload, especially for senior researchers in this
area. How to cope with this problem is an open question, and it is actively
discussed across all major conferences. In this work, we propose an Argument
Mining based approach for the assistance of editors, meta-reviewers, and
reviewers. We demonstrate that the decision process in the field of scientific
publications is driven by arguments and that automatic argument identification
is helpful in various use cases. One of our findings is that arguments used in
the peer-review process differ from arguments in other domains, making the transfer
of pre-trained models difficult. Therefore, we provide the community with a new
peer-review dataset from different computer science conferences with annotated
arguments. In our extensive empirical evaluation, we show that Argument Mining
can be used to efficiently extract the most relevant parts from reviews, which
are paramount for the publication decision. The process remains interpretable
since the extracted arguments can be highlighted in a review without detaching
them from their context.
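To make the highlighting use-case concrete, below is a minimal sketch of sentence-level argument identification in a peer review. The model name (bert-base-uncased) is a placeholder standing in for a classifier fine-tuned on annotated review arguments, and the label convention (class 1 = argument) is an assumption; the paper's actual models and training data may differ.

```python
# Minimal sketch: sentence-level argument identification in a peer review.
# Assumptions: a binary sequence classifier fine-tuned on annotated review
# arguments (stood in for here by "bert-base-uncased", whose untuned
# classification head yields near-random scores) and class 1 = "argument".
import re

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder; the paper names no public checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()


def highlight_arguments(review: str, threshold: float = 0.5) -> str:
    """Mark argumentative sentences in place so they keep their context."""
    sentences = re.split(r"(?<=[.!?])\s+", review.strip())
    marked = []
    for sentence in sentences:
        inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        p_argument = torch.softmax(logits, dim=-1)[0, 1].item()  # assumed: class 1 = argument
        marked.append(f">>{sentence}<<" if p_argument >= threshold else sentence)
    return " ".join(marked)


review = (
    "The paper is well written. However, the evaluation lacks strong baselines, "
    "so the claimed improvements are hard to assess."
)
print(highlight_arguments(review))
```

Because sentences are only marked up in place, the extracted arguments stay highlighted within their surrounding review text, matching the interpretability property the abstract describes.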
Related papers
- GLIMPSE: Pragmatically Informative Multi-Document Summarization for Scholarly Reviews [25.291384842659397]
We introduce GLIMPSE, a summarization method designed to offer a concise yet comprehensive overview of scholarly reviews.
Unlike traditional consensus-based methods, GLIMPSE extracts both common and unique opinions from the reviews.
arXiv Detail & Related papers (2024-06-11T15:27:01Z) - What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, opening up new research directions in formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z) - Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation [55.00687185394986]
We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences.
Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria, such as depth of discussion and identification of consensus and controversy in the specific domain, and (2) combining task decomposition with iterative self-refinement shows strong potential for improving opinion summarization.
arXiv Detail & Related papers (2023-05-24T02:33:35Z) - Submission-Aware Reviewer Profiling for Reviewer Recommender System [26.382772998002523]
We propose an approach that learns, from each abstract published by a potential reviewer, both the topics studied and the explicit context in which the reviewer studied them.
Our experiments show a significant, consistent improvement in precision when compared with the existing methods.
The new approach has been deployed successfully at top-tier conferences in the last two years.
arXiv Detail & Related papers (2022-11-08T12:18:02Z) - Investigating Fairness Disparities in Peer Review: A Language Model
Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive
Summarization with Argument Mining [61.82562838486632]
We crowdsource four new datasets on diverse online conversation forms of news comments, discussion forums, community question answering forums, and email threads.
We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.
arXiv Detail & Related papers (2021-06-01T22:17:13Z) - A Large Scale Randomized Controlled Trial on Herding in Peer-Review
Discussions [33.261698377782075]
We aim to understand whether reviewers and more senior decision makers are disproportionately influenced by the first argument presented in a discussion.
Specifically, we design and execute a randomized controlled trial with the goal of testing for the conditional causal effect of the discussion initiator's opinion on the outcome of a paper.
arXiv Detail & Related papers (2020-11-30T18:23:07Z) - Quantitative Argument Summarization and Beyond: Cross-Domain Key Point
Analysis [17.875273745811775]
We develop a method for automatic extraction of key points, which enables fully automatic analysis.
We demonstrate that the applicability of key point analysis goes well beyond argumentation data.
An additional contribution is an in-depth evaluation of argument-to-key point matching models.
arXiv Detail & Related papers (2020-10-11T23:01:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.