On predicting research grants productivity
- URL: http://arxiv.org/abs/2106.10700v1
- Date: Sun, 20 Jun 2021 14:21:46 GMT
- Title: On predicting research grants productivity
- Authors: Jorge A. V. Tohalino and Diego R. Amancio
- Abstract summary: We analyzed whether bibliometric features are able to predict the success of research grants.
Research subject and publication history play a role in predicting productivity.
While the best results outperformed text-based attributes, the evaluated features were not highly discriminative.
- Score: 0.38073142980732994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding the reasons associated with successful proposals is of
paramount importance to improve evaluation processes. In this context, we
analyzed whether bibliometric features are able to predict the success of
research grants. We extracted features aiming at characterizing the academic
history of Brazilian researchers, including research topics, affiliations,
number of publications and visibility. The extracted features were then used to
predict grants productivity via machine learning in three major research areas,
namely Medicine, Dentistry and Veterinary Medicine. We found that research
subject and publication history play a role in predicting productivity. In
addition, institution-based features turned out to be relevant when combined
with other features. While the best results outperformed text-based attributes,
the evaluated features were not highly discriminative. Our findings indicate
that predicting grants success, at least with the considered set of
bibliometric features, is not a trivial task.
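As a rough illustration of the pipeline the abstract describes (bibliometric features fed to a machine learning model that predicts grant productivity), the sketch below uses scikit-learn; the file name, feature columns, and productivity labels are assumptions for illustration, not the authors' data or code.

```python
# A minimal sketch, assuming a per-researcher table of bibliometric features;
# this is not the authors' implementation.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Hypothetical input: one row per funded researcher, with academic-history features.
df = pd.read_csv("researchers.csv")  # assumed file and columns
categorical = ["research_topic", "institution"]         # research subject, affiliation
numeric = ["n_publications", "n_citations", "h_index"]  # publication history, visibility

X = df[categorical + numeric]
y = df["productivity_class"]  # e.g., low vs. high productivity after the grant

preprocess = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
    remainder="passthrough",  # keep numeric features unchanged
)
model = Pipeline([("prep", preprocess), ("rf", RandomForestClassifier(random_state=0))])

# Cross-validated accuracy as a rough measure of how discriminative the features are.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Mean accuracy: {scores.mean():.3f}")
```

Comparing such scores across feature subsets (topic-based, institution-based, publication-based) is the kind of analysis the abstract reports, where no subset turned out to be highly discriminative.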
Related papers
- Does the Order of Fine-tuning Matter and Why? [11.975836356680855]
We study the effect of fine-tuning multiple intermediate tasks and their ordering on target task performance.
Experimental results show that task ordering affects target task performance, with up to 6% performance gain and up to 4% performance loss.
arXiv Detail & Related papers (2024-10-03T19:07:14Z)
- A Dataset for Evaluating LLM-based Evaluation Functions for Research Question Extraction Task [6.757249766769395]
This dataset consists of machine learning papers, research questions (RQs) extracted from those papers by GPT-4, and human evaluations of the extracted RQs.
Using this dataset, we systematically compared recently proposed LLM-based evaluation functions for summarization.
We found that none of the functions showed sufficiently high correlations with human evaluations.
arXiv Detail & Related papers (2024-09-10T21:54:46Z)
- What Makes a Good Story and How Can We Measure It? A Comprehensive Survey of Story Evaluation [57.550045763103334]
Evaluating a story can be more challenging than other generation evaluation tasks.
We first summarize existing storytelling tasks, including text-to-text, visual-to-text, and text-to-visual.
We propose a taxonomy to organize evaluation metrics that have been developed or can be adopted for story evaluation.
arXiv Detail & Related papers (2024-08-26T20:35:42Z)
- Efficient Systematic Reviews: Literature Filtering with Transformers & Transfer Learning [0.0]
The growing body of literature comes with an increasing burden in terms of the time required to identify the important research articles for a given topic.
We develop a method for building a general-purpose filtering system that matches a research question, posed as a natural language description of the required content.
Our results demonstrate that transformer models, pre-trained on biomedical literature and then fine-tuned for the specific task, offer a promising solution to this problem.
arXiv Detail & Related papers (2024-05-30T02:55:49Z)
- ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models [56.08917291606421]
ResearchAgent is a large language model-powered research idea writing agent.
It generates problems, methods, and experiment designs while iteratively refining them based on scientific literature.
We experimentally validate our ResearchAgent on scientific publications across multiple disciplines.
arXiv Detail & Related papers (2024-04-11T13:36:29Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Artificial Text Detection via Examining the Topology of Attention Maps [58.46367297712477]
We propose three novel types of interpretable topological features for this task based on Topological Data Analysis (TDA); a minimal sketch of attention-graph features appears after this list.
We empirically show that the features derived from the BERT model outperform count- and neural-based baselines by up to 10% on three common datasets.
The probing analysis of the features reveals their sensitivity to surface and syntactic properties.
arXiv Detail & Related papers (2021-09-10T12:13:45Z)
- CitationIE: Leveraging the Citation Graph for Scientific Information Extraction [89.33938657493765]
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z)
- Predicting the Reproducibility of Social and Behavioral Science Papers Using Supervised Learning Models [21.69933721765681]
We propose a framework that extracts five types of features from scholarly work that can be used to support assessments of published research claims.
We analyze pairwise correlations between individual features and their importance for predicting a set of human-assessed ground truth labels.
arXiv Detail & Related papers (2021-04-08T00:45:20Z)
- Knowledge Graphs in Manufacturing and Production: A Systematic Literature Review [0.0]
Knowledge graphs in manufacturing and production aim to make production lines more efficient and flexible with higher quality output.
This makes knowledge graphs attractive for companies to reach Industry 4.0 goals.
Existing research in the field is quite preliminary, and more research effort on analyzing how knowledge graphs can be applied is needed.
arXiv Detail & Related papers (2020-12-16T16:15:28Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
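For the attention-topology entry above (Artificial Text Detection via Examining the Topology of Attention Maps), the following sketch shows the general idea of turning BERT attention maps into simple graph-based features; it is a simplified stand-in rather than the paper's TDA pipeline, and the model name, threshold, and chosen graph statistics are assumptions.

```python
# A minimal sketch, not the paper's method: threshold each BERT attention head's
# map, build a graph of strong token-to-token links, and record graph statistics.
import networkx as nx
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

def attention_graph_features(text: str, threshold: float = 0.1) -> list:
    """Return connected-component and edge counts per attention head (assumed features)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    features = []
    for layer_attention in outputs.attentions:     # one attention tensor per layer
        for head in layer_attention[0]:            # (seq_len, seq_len) map per head
            adjacency = (head > threshold).int().numpy()
            graph = nx.from_numpy_array(adjacency)  # edges where attention is strong
            features.append(nx.number_connected_components(graph))
            features.append(graph.number_of_edges())
    return features

# Such per-text feature vectors could then feed a standard classifier
# (human-written vs. machine-generated), as in the detection setup above.
print(len(attention_graph_features("This sentence was written by a person.")))
```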
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.