My part is bigger than yours -- assessment within a group of peers
- URL: http://arxiv.org/abs/2407.01843v2
- Date: Mon, 16 Sep 2024 14:53:54 GMT
- Title: My part is bigger than yours -- assessment within a group of peers
- Authors: Konrad Kułakowski, Jacek Szybowski
- Abstract summary: A project (e.g., writing a collaborative research paper) is often a group effort. At the end, each contributor identifies their contribution, often verbally.
This leads to the question of what (percentage) share in the creation of the paper is due to each individual author.
In this paper, we present simple models that allow aggregation of the experts' views, linking the priority of each expert's preference directly to the assessments made by the other experts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A project (e.g., writing a collaborative research paper) is often a group effort. At the end, each contributor identifies their contribution, often verbally. The reward, however, is very frequently financial. This leads to the question of what (percentage) share in the creation of the paper is due to each individual author. Different authors may have various opinions on the matter; even worse, their opinions may have different relevance. In this paper, we present simple models that allow aggregation of the experts' views, linking the priority of each expert's preference directly to the assessments made by the other experts. In this approach, the more significant the contribution of a given expert, the greater the importance of their opinion. The presented method can be considered an attempt to find consensus among peers involved in the same project. Hence, its applications may go beyond the proposed study example of writing a scientific paper.
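The abstract does not spell out the models, but the stated principle that the more significant an expert's contribution, the greater the weight of their opinion suggests a self-weighted (fixed-point) aggregation of the reported contribution shares. The Python sketch below is only an illustration under that assumption; the matrix `A`, the function name `aggregate_shares`, and the power-iteration scheme are illustrative choices, not the paper's actual formulation.

```python
# Illustrative sketch only: one possible reading of "the more significant the
# contribution of a given expert, the greater the importance of their opinion".
# The matrix A and the fixed-point iteration are assumptions for demonstration,
# not the models proposed in the paper.
import numpy as np

def aggregate_shares(A: np.ndarray, tol: float = 1e-10, max_iter: int = 1000) -> np.ndarray:
    """Find consensus shares w satisfying w = A.T @ w.

    A[i, j] is the contribution share of author j as reported by expert i;
    every row of A sums to 1.  At the fixed point, each expert's influence on
    the result equals the share the group assigns to that expert.
    """
    n = A.shape[0]
    assert np.allclose(A.sum(axis=1), 1.0), "each expert's shares must sum to 1"
    w = np.full(n, 1.0 / n)              # start from equal expert weights
    for _ in range(max_iter):
        w_next = A.T @ w                 # re-weight opinions by current shares
        w_next /= w_next.sum()           # keep the shares normalized
        if np.linalg.norm(w_next - w, 1) < tol:
            break
        w = w_next
    return w_next

# Example: three co-authors, each reporting their view of the percentage split.
A = np.array([
    [0.50, 0.30, 0.20],   # author 1's assessment of authors 1, 2, 3
    [0.40, 0.40, 0.20],   # author 2's assessment
    [0.30, 0.30, 0.40],   # author 3's assessment
])
print(aggregate_shares(A))  # about [0.417, 0.333, 0.250]
```

Under this reading, the consensus shares are the stationary distribution of the row-stochastic assessment matrix, so an author's influence on the outcome grows exactly as fast as the group's estimate of that author's contribution.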
Related papers
- Good Idea or Not, Representation of LLM Could Tell [86.36317971482755]
We focus on idea assessment, which aims to leverage the knowledge of large language models to assess the merit of scientific ideas.
We release a benchmark dataset from nearly four thousand manuscript papers with full texts, meticulously designed to train and evaluate the performance of different approaches to this task.
Our findings suggest that the representations of large language models hold more potential in quantifying the value of ideas than their generative outputs.
arXiv Detail & Related papers (2024-09-07T02:07:22Z)
- What Can Natural Language Processing Do for Peer Review? [173.8912784451817]
In modern science, peer review is widely used, yet it is hard, time-consuming, and prone to error.
Since the artifacts involved in peer review are largely text-based, Natural Language Processing has great potential to improve reviewing.
We detail each step of the process from manuscript submission to camera-ready revision, and discuss the associated challenges and opportunities for NLP assistance.
arXiv Detail & Related papers (2024-05-10T16:06:43Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Time to Stop and Think: What kind of research do we want to do? [1.74048653626208]
In this paper, we focus on the field of metaheuristic optimization, since it is our main field of work.
Our main goal is to sow the seed of sincere critical assessment of our work, sparking a reflection process both at the individual and the community level.
All the statements included in this document are personal views and opinions, which can be shared by others or not.
arXiv Detail & Related papers (2024-02-13T08:53:57Z)
- Fusion of the Power from Citations: Enhance your Influence by Integrating Information from References [3.607567777043649]
This study formulates the prediction problem of identifying whether a paper can increase scholars' influence.
By applying the framework in this work, scholars can identify whether their papers can improve their influence in the future.
arXiv Detail & Related papers (2023-10-27T19:51:44Z)
- Scientific Opinion Summarization: Paper Meta-review Generation Dataset, Methods, and Evaluation [55.00687185394986]
We propose the task of scientific opinion summarization, where research paper reviews are synthesized into meta-reviews.
We introduce the ORSUM dataset covering 15,062 paper meta-reviews and 57,536 paper reviews from 47 conferences.
Our experiments show that (1) human-written summaries do not always satisfy all necessary criteria, such as depth of discussion and identification of consensus and controversy for the specific domain, and (2) the combination of task decomposition and iterative self-refinement shows strong potential for improving the generated summaries.
arXiv Detail & Related papers (2023-05-24T02:33:35Z)
- How do Authors' Perceptions of their Papers Compare with Co-authors' Perceptions and Peer-review Decisions? [87.00095008723181]
Authors have roughly a three-fold overestimate of the acceptance probability of their papers.
Female authors exhibit a marginally higher (statistically significant) miscalibration than male authors.
At least 30% of respondents of both accepted and rejected papers said that their perception of their own paper improved after the review process.
arXiv Detail & Related papers (2022-11-22T15:59:30Z)
- What Factors Should Paper-Reviewer Assignments Rely On? Community Perspectives on Issues and Ideals in Conference Peer-Review [20.8704278772718]
We present the results of the first survey of the NLP community on paper-reviewer assignment.
We identify common issues and perspectives on what factors should be considered by paper-reviewer matching systems.
arXiv Detail & Related papers (2022-05-02T16:07:02Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the decision process as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Characterising authors on the extent of their paper acceptance: A case study of the Journal of High Energy Physics [4.402336973466853]
We investigate the profile and peer review text of authors whose papers almost always get accepted at a venue.
Authors with a high acceptance rate are likely to have a high number of citations, a high $h$-index, and a larger number of collaborators.
arXiv Detail & Related papers (2020-06-12T03:26:25Z)
- Aspect-based Sentiment Analysis of Scientific Reviews [12.472629584751509]
We show that the distribution of aspect-based sentiments obtained from a review is significantly different for accepted and rejected papers.
As a second objective, we quantify the extent of disagreement among the reviewers refereeing a paper.
We also investigate the extent of disagreement between the reviewers and the chair, and find that inter-reviewer disagreement may be linked to disagreement with the chair.
arXiv Detail & Related papers (2020-06-05T07:06:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.