Do open citations give insights on the qualitative peer-review
evaluation in research assessments? An analysis of the Italian National
Scientific Qualification
- URL: http://arxiv.org/abs/2103.07942v2
- Date: Sun, 23 Oct 2022 07:56:38 GMT
- Title: Do open citations give insights on the qualitative peer-review
evaluation in research assessments? An analysis of the Italian National
Scientific Qualification
- Authors: Federica Bologna, Angelo Di Iorio, Silvio Peroni, Francesco Poggi
- Abstract summary: The Italian National Scientific Qualification (NSQ) aims at deciding whether a scholar can apply to professorial academic positions.
It makes use of bibliometrics followed by a peer-review process of candidates' CVs.
We explore whether citation-based metrics, calculated only considering open bibliographic and citation data, can support the human peer-review of non-citation-based disciplines (NDs).
- Score: 1.911678487931003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past, several works have investigated ways for combining quantitative
and qualitative methods in research assessment exercises. Indeed, the Italian
National Scientific Qualification (NSQ), i.e. the national assessment exercise
which aims at deciding whether a scholar can apply to professorial academic
positions as Associate Professor and Full Professor, adopts a quantitative and
qualitative evaluation process: it makes use of bibliometrics followed by a
peer-review process of candidates' CVs. The NSQ divides academic disciplines
into two categories, i.e. citation-based disciplines (CDs) and
non-citation-based disciplines (NDs), a division that affects the metrics used
for assessing the candidates of that discipline in the first part of the
process, which is based on bibliometrics. In this work, we aim at exploring
whether citation-based metrics, calculated only considering open bibliographic
and citation data, can support the human peer-review of NDs and yield insights
on how it is conducted. To understand if and what citation-based (and,
possibly, other) metrics provide relevant information, we created a series of
machine learning models to replicate the decisions of the NSQ committees. As
one of the main outcomes of our study, we noticed that the strength of the
citational relationship between the candidate and the commission in charge of
assessing his/her CV seems to play a role in the peer-review phase of the NSQ
of NDs.
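As described above, the study builds machine learning models that take citation-based metrics computed from open bibliographic and citation data and try to reproduce the qualified/not-qualified decisions of the NSQ committees, then examines which features carry the predictive signal. Below is a minimal Python sketch of such a pipeline; the file name, feature names, and choice of classifier are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal sketch, NOT the authors' code: train a classifier on open
# citation-based features to replicate NSQ committee decisions.
# The CSV file, feature names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-candidate table built from open bibliographic/citation data
# (e.g. OpenCitations), one row per NSQ application.
df = pd.read_csv("nsq_candidates.csv")  # hypothetical file
features = [
    "n_publications",             # publications in the evaluation window
    "n_citations_open",           # citations found in open citation data
    "h_index_open",               # h-index computed from open citations
    "citations_from_commission",  # citations received from commission members
    "citations_to_commission",    # citations given to commission members
]
X, y = df[features], df["qualified"]  # y: committee decision (1 = qualified)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Fit on all data and inspect which features carry the signal, e.g. whether
# the candidate-commission citation ties matter, as the paper observes for NDs.
clf.fit(X, y)
for name, importance in sorted(zip(features, clf.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```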
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings (a minimal sketch of this calibration appears after this list).
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
- Why do you cite? An investigation on citation intents and decision-making classification processes [1.7812428873698407]
This study emphasizes the importance of trustfully classifying citation intents.
We present a study utilizing advanced Ensemble Strategies for Citation Intent Classification (CIC).
One of our models sets a new state-of-the-art (SOTA) with an 89.46% Macro-F1 score on the SciCite benchmark.
arXiv Detail & Related papers (2024-07-18T09:29:33Z)
- Is Reference Necessary in the Evaluation of NLG Systems? When and Where? [58.52957222172377]
We show that reference-free metrics exhibit a higher correlation with human judgment and greater sensitivity to deficiencies in language quality.
Our study can provide insight into the appropriate application of automatic metrics and the impact of metric choice on evaluation performance.
arXiv Detail & Related papers (2024-03-21T10:31:11Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- Artificial intelligence technologies to support research assessment: A review [10.203602318836444]
This literature review identifies indicators that associate with higher impact or higher quality research from article text.
It includes studies that used machine learning techniques to predict citation counts or quality scores for journal articles or conference papers.
arXiv Detail & Related papers (2022-12-11T06:58:39Z)
- NLPeer: A Unified Resource for the Computational Study of Peer Review [58.71736531356398]
We introduce NLPeer -- the first ethically sourced multidomain corpus of more than 5k papers and 11k review reports from five different venues.
We augment previous peer review datasets to include parsed and structured paper representations, rich metadata and versioning information.
Our work paves the path towards systematic, multi-faceted, evidence-based study of peer review in NLP and beyond.
arXiv Detail & Related papers (2022-11-12T12:29:38Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast the final decision process as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Academics evaluating academics: a methodology to inform the review process on top of open citations [1.911678487931003]
We explore whether citation-based metrics, calculated only considering open citation data, provide data that can yield insights on how human peer-review of research assessment exercises is conducted.
We propose to use a series of machine learning models to replicate the decisions of the committees of the research assessment exercises.
arXiv Detail & Related papers (2021-06-10T13:09:15Z)
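The ICML 2023 entry above refers to the Isotonic Mechanism, which calibrates raw review scores so that they respect the ranking an author provides for their own submissions; in general terms, it is an isotonic (monotone) least-squares regression of the scores onto the claimed ranking. A minimal sketch of that calibration step follows; the scores and ranking are made-up example values, not data from the ICML experiment.

```python
# A minimal sketch of the Isotonic Mechanism's calibration step (assumed
# from its general definition, not the ICML 2023 analysis code): raw review
# scores are projected, in least squares, onto scores that are monotone in
# the author's own ranking of their submissions.
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw_scores = np.array([5.5, 6.0, 4.0])  # mean review score per paper (made-up)
author_rank = np.array([1, 2, 3])       # 1 = the paper the author ranks highest

# Decreasing isotonic regression: calibrated scores stay as close as possible
# to the raw ones while never increasing as the author-provided rank worsens.
iso = IsotonicRegression(increasing=False)
calibrated = iso.fit_transform(author_rank, raw_scores)
print(calibrated)  # [5.75, 5.75, 4.0]: the rank violation is averaged out
```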
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.