Academics evaluating academics: a methodology to inform the review
process on top of open citations
- URL: http://arxiv.org/abs/2106.05725v1
- Date: Thu, 10 Jun 2021 13:09:15 GMT
- Authors: Federica Bologna, Angelo Di Iorio, Silvio Peroni, Francesco Poggi
- Abstract summary: We explore whether citation-based metrics, calculated considering only open citation data, can yield insights into how the human peer review of research assessment exercises is conducted.
We propose to use a series of machine learning models to replicate the decisions of the committees of the research assessment exercises.
- Score: 1.911678487931003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past, several works have investigated ways of combining quantitative
and qualitative methods in research assessment exercises. In this work, we aim
to introduce a methodology for exploring whether citation-based metrics,
calculated considering only open bibliographic and citation data, can yield
insights into how the human peer review of research assessment exercises is
conducted. To understand whether and which metrics provide relevant information,
we propose to use a series of machine learning models to replicate the decisions
of the committees of the research assessment exercises.
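The proposed approach can be sketched as follows. This is an illustrative, hedged example only: the paper does not publish code, and the feature names, data, and model choice here are hypothetical stand-ins for citation-based metrics computed from open data and for the committee decisions they are meant to replicate.

```python
# Sketch: train a classifier on open citation-based metrics to replicate
# committee decisions (pass/fail). All data below is synthetic and the
# feature set is illustrative, not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-candidate metrics derivable from open citation data.
citations = rng.poisson(50, n)   # total citation count
h_index = rng.poisson(10, n)     # h-index
top_paper = rng.poisson(20, n)   # citations of the best-cited paper
X = np.column_stack([citations, h_index, top_paper])
# Synthetic committee outcome, loosely correlated with the metrics.
y = (citations + 3 * h_index + rng.normal(0, 20, n) > 80).astype(int)

# Cross-validated accuracy indicates how well the open metrics alone
# can reproduce the committee's decisions.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)
print(round(scores.mean(), 2))
```

In this framing, a high cross-validated agreement would suggest the open metrics carry information about the peer-review outcome, while a low one would suggest the committees rely on signals the metrics do not capture.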
Related papers
- Time Series Embedding Methods for Classification Tasks: A Review [2.8084422332394428]
We present a comprehensive review and evaluation of time series embedding methods for effective representations in machine learning and deep learning models.
We introduce a taxonomy of embedding techniques, categorizing them based on their theoretical foundations and application contexts.
Our experimental results demonstrate that the performance of embedding methods varies significantly depending on the dataset and classification algorithm used.
arXiv Detail & Related papers (2025-01-23T05:24:45Z)
- Benchmark for Evaluation and Analysis of Citation Recommendation Models [0.0]
We develop a benchmark specifically designed to analyze and compare citation recommendation models.
This benchmark will evaluate the performance of models on different features of the citation context.
This will enable meaningful comparisons and help identify promising approaches for further research and development in the field.
arXiv Detail & Related papers (2024-12-10T18:01:33Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look into existing self-supervised methods of speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs)
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protected attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- A Review on Method Entities in the Academic Literature: Extraction, Evaluation, and Application [15.217159196570108]
In scientific research, the method is an indispensable means to solve scientific problems and a critical research object.
Key entities in the academic literature that denote the names of methods are called method entities.
The evolution of method entities can reveal the development of a discipline and facilitate knowledge discovery.
arXiv Detail & Related papers (2022-09-08T10:12:21Z)
- Tag-Aware Document Representation for Research Paper Recommendation [68.8204255655161]
We propose a hybrid approach that leverages deep semantic representation of research papers based on social tags assigned by users.
The proposed model is effective in recommending research papers even when the rating data is very sparse.
arXiv Detail & Related papers (2022-09-08T09:13:07Z)
- On the role of benchmarking data sets and simulations in method comparison studies [0.0]
This paper investigates differences and similarities between simulation studies and benchmarking studies.
We borrow ideas from different contexts such as mixed methods research and Clinical Scenario Evaluation.
arXiv Detail & Related papers (2022-08-02T13:47:53Z)
- Deep Learning Schema-based Event Extraction: Literature Review and Current Trends [60.29289298349322]
Event extraction technology based on deep learning has become a research hotspot.
This paper fills the gap by reviewing the state-of-the-art approaches, focusing on deep learning-based models.
arXiv Detail & Related papers (2021-07-05T16:32:45Z)
- Do open citations give insights on the qualitative peer-review evaluation in research assessments? An analysis of the Italian National Scientific Qualification [1.911678487931003]
The Italian National Scientific Qualification (NSQ) aims at deciding whether a scholar can apply to professorial academic positions.
It makes use of bibliometrics followed by a peer-review process of candidates' CVs.
We explore whether citation-based metrics, calculated considering only open bibliographic and citation data, can support the human peer review of the NSQ.
arXiv Detail & Related papers (2021-03-14T14:44:45Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
- A Survey on Causal Inference [64.45536158710014]
Causal inference is a critical research topic across many domains, such as statistics, computer science, education, public policy and economics.
Various causal effect estimation methods for observational data have sprung up.
arXiv Detail & Related papers (2020-02-05T21:35:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.