Academics evaluating academics: a methodology to inform the review
process on top of open citations
- URL: http://arxiv.org/abs/2106.05725v1
- Date: Thu, 10 Jun 2021 13:09:15 GMT
- Title: Academics evaluating academics: a methodology to inform the review
process on top of open citations
- Authors: Federica Bologna, Angelo Di Iorio, Silvio Peroni, Francesco Poggi
- Abstract summary: We explore whether citation-based metrics, calculated only considering open bibliographic and citation data, can yield insights on how human peer-review of research assessment exercises is conducted.
We propose to use a series of machine learning models to replicate the decisions of the committees of the research assessment exercises.
- Score: 1.911678487931003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past, several works have investigated ways for combining quantitative
and qualitative methods in research assessment exercises. In this work, we aim
to introduce a methodology to explore whether citation-based metrics,
calculated only considering open bibliographic and citation data, can yield
insights on how human peer-review of research assessment exercises is
conducted. To understand if and what metrics provide relevant information, we
propose to use a series of machine learning models to replicate the decisions
of the committees of the research assessment exercises.
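As a concrete illustration of the proposed setup, a minimal sketch is shown below: it trains an off-the-shelf classifier on synthetic citation-style features and checks how well committee decisions can be replicated. The feature names, toy data, and model choice are assumptions for illustration, not the configuration used in the paper.

```python
# Minimal sketch: train a classifier to replicate committee decisions from
# citation-based metrics computed only from open data. The features, the
# toy data, and the choice of model are hypothetical; the paper proposes
# the general methodology, not this specific configuration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400  # toy candidates
X = np.column_stack([
    rng.integers(0, 40, n),    # h-index from open citation data
    rng.integers(0, 3000, n),  # total open citation count
    rng.integers(1, 120, n),   # number of publications
])
# Toy stand-in for the committee's qualified / not-qualified decision.
y = (X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 5, n) > 25).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="f1")
print(f"Mean F1 across folds: {scores.mean():.3f}")
# If open metrics alone predict the committee outcome well, they likely
# carry information that overlaps with the qualitative peer review.
```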
Related papers
- BEExAI: Benchmark to Evaluate Explainable AI [0.9176056742068812]
We propose BEExAI, a benchmark tool that allows large-scale comparison of different post-hoc XAI methods.
We argue that the need for a reliable way of measuring the quality and correctness of explanations is becoming critical.
arXiv Detail & Related papers (2024-07-29T11:21:17Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews (a generic field-normalization sketch follows this entry).
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
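The entry above mentions field-normalized indicators. A minimal sketch of the generic idea, dividing each paper's citation count by the mean count for its field and publication year, is below; the indicators actually proposed in that paper may be defined differently.

```python
# Illustrative sketch of a field-normalized citation indicator: each
# paper's citation count divided by the mean for papers in the same
# field and year (the generic "mean normalized citation score" idea).
import pandas as pd

papers = pd.DataFrame({
    "field": ["ML", "ML", "IR", "IR"],
    "year":  [2020, 2020, 2020, 2020],
    "cites": [10, 30, 2, 6],
})
baseline = papers.groupby(["field", "year"])["cites"].transform("mean")
papers["normalized"] = papers["cites"] / baseline
print(papers)  # 1.0 means exactly the field-year average
```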
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations (see the sketch after this entry).
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
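A minimal sketch of the linear-probe recipe described above: bound the mutual information between a target label Y and fixed representations Z via I(Y; Z) >= H(Y) - CE, where CE is the probe's held-out cross-entropy in nats. The data here is synthetic, and the paper's exact estimator may differ.

```python
# Sketch: lower-bound I(Y; Z) with a linear probe on frozen features,
# using I(Y; Z) >= H(Y) - CE(probe), with CE measured in nats.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
Z = rng.normal(size=(2000, 64))  # stand-in for learned representations
y = (Z[:, 0] + 0.5 * rng.normal(size=2000) > 0).astype(int)  # toy target

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)

ce = log_loss(y_te, probe.predict_proba(Z_te))    # cross-entropy in nats
p = y_te.mean()
h_y = -(p * np.log(p) + (1 - p) * np.log(1 - p))  # marginal entropy of Y
print(f"MI lower bound: {max(h_y - ce, 0.0):.3f} nats")
```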
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, author prestige, and institutional prestige (a toy disparity check follows this entry).
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
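As a toy illustration of the kind of disparity analysis described above, the sketch below compares acceptance rates across a hypothetical protected attribute; the column names and data are invented, and the paper's LM-enhanced analysis is far more involved.

```python
# Sketch: first-pass disparity check on review outcomes, comparing
# acceptance rates across groups of a (hypothetical) protected attribute.
import pandas as pd

reviews = pd.DataFrame({
    "author_group": ["A", "A", "B", "B", "B", "A"],
    "accepted":     [1, 0, 0, 0, 1, 1],
})
rates = reviews.groupby("author_group")["accepted"].mean()
print(rates)
print(f"Acceptance-rate gap: {rates.max() - rates.min():.2f}")
```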
- A Review on Method Entities in the Academic Literature: Extraction, Evaluation, and Application [15.217159196570108]
In scientific research, methods are an indispensable means of solving scientific problems and a critical object of study in their own right.
Entities in the academic literature that name these methods are called method entities.
The evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery.
arXiv Detail & Related papers (2022-09-08T10:12:21Z)
- Tag-Aware Document Representation for Research Paper Recommendation [68.8204255655161]
We propose a hybrid approach that leverages deep semantic representation of research papers based on social tags assigned by users (see the sketch after this entry).
The proposed model is effective in recommending research papers even when the rating data is very sparse.
arXiv Detail & Related papers (2022-09-08T09:13:07Z)
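A shallow stand-in for the tag-aware idea above: concatenate a TF-IDF vector of each paper's text with a TF-IDF vector of its social tags, then rank papers by cosine similarity. The paper uses deep semantic representations; this sketch only illustrates mixing content with tags.

```python
# Sketch: hybrid tag-aware representation via concatenated TF-IDF
# vectors of abstracts and user-assigned tags (toy data).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = ["open citations for research assessment",
             "deep learning for event extraction"]
tags = ["bibliometrics peer-review", "nlp information-extraction"]

text_vec = TfidfVectorizer().fit_transform(abstracts).toarray()
tag_vec = TfidfVectorizer().fit_transform(tags).toarray()
hybrid = np.hstack([text_vec, tag_vec])  # one row per paper

print(cosine_similarity(hybrid))  # pairwise similarity for recommendation
```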
- On the role of benchmarking data sets and simulations in method comparison studies [0.0]
This paper investigates differences and similarities between simulation studies and benchmarking studies.
We borrow ideas from different contexts such as mixed methods research and Clinical Scenario Evaluation.
arXiv Detail & Related papers (2022-08-02T13:47:53Z)
- Deep Learning Schema-based Event Extraction: Literature Review and Current Trends [60.29289298349322]
Deep learning-based event extraction has become a research hotspot.
This paper fills the gap by reviewing the state-of-the-art approaches, focusing on deep learning-based models.
arXiv Detail & Related papers (2021-07-05T16:32:45Z)
- Do open citations give insights on the qualitative peer-review evaluation in research assessments? An analysis of the Italian National Scientific Qualification [1.911678487931003]
The Italian National Scientific Qualification (NSQ) aims at deciding whether a scholar can apply to professorial academic positions.
It makes use of bibliometrics followed by a peer-review process of candidates' CVs.
We explore whether citation-based metrics, calculated only considering open bibliographic and citation data, can support the human peer-review of non-citation-based disciplines (NDs).
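As a small grounding example, one of the simplest citation-based metrics computable from open citation counts alone is the h-index; a sketch with invented per-paper counts follows (in practice the counts could come from an open source such as OpenCitations).

```python
# Sketch: h-index from open per-paper citation counts (hypothetical data).
def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([12, 9, 7, 3, 1]))  # -> 3
```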
arXiv Detail & Related papers (2021-03-14T14:44:45Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.