Artificial intelligence technologies to support research assessment: A review
- URL: http://arxiv.org/abs/2212.06574v1
- Date: Sun, 11 Dec 2022 06:58:39 GMT
- Title: Artificial intelligence technologies to support research assessment: A review
- Authors: Kayvan Kousha, Mike Thelwall
- Abstract summary: This literature review identifies indicators that associate with higher impact or higher quality research from article text.
It includes studies that used machine learning techniques to predict citation counts or quality scores for journal articles or conference papers.
- Score: 10.203602318836444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This literature review identifies indicators that associate with higher
impact or higher quality research from article text (e.g., titles, abstracts,
lengths, cited references and readability) or metadata (e.g., the number of
authors, international or domestic collaborations, journal impact factors and
authors' h-index). This includes studies that used machine learning techniques
to predict citation counts or quality scores for journal articles or conference
papers. The literature review also includes evidence about the strength of
association between bibliometric indicators and quality score rankings from
previous UK Research Assessment Exercises (RAEs) and REFs in different subjects
and years and similar evidence from other countries (e.g., Australia and
Italy). In support of this, the document also surveys studies that used public
datasets of citations, social media indicators or open review texts (e.g.,
Dimensions, OpenCitations, Altmetric.com and Publons) to help predict the
scholarly impact of articles. The results of this part of the literature review
were used to inform the experiments using machine learning to predict REF
journal article quality scores, as reported in the AI experiments report for
this project. The literature review also covers technology to automate
editorial processes, to provide quality control for papers and reviewers'
suggestions, to match reviewers with articles, and to automatically categorise
journal articles into fields. Bias and transparency in technology assisted
assessment are also discussed.
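As a rough illustration of the kind of prediction pipeline this review surveys, the sketch below trains a gradient-boosted regressor to predict a quality score from article metadata. The feature set, synthetic data, and model choice are illustrative assumptions, not the setup used in the project's own experiments.

```python
# Hypothetical sketch: predicting an article quality score from metadata
# features of the kind surveyed in this review (author counts, collaboration,
# journal impact factor, author h-index, abstract readability). All data here
# are synthetic placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000  # number of (synthetic) articles

# Toy metadata matrix, one row per article:
# [n_authors, intl_collab_flag, journal_impact_factor, max_h_index, readability]
X = np.column_stack([
    rng.integers(1, 15, n),    # number of authors
    rng.integers(0, 2, n),     # international collaboration flag
    rng.gamma(2.0, 2.0, n),    # journal impact factor
    rng.integers(1, 60, n),    # highest author h-index
    rng.normal(50, 10, n),     # readability score of the abstract
])
# Synthetic quality score loosely driven by two of the features plus noise.
y = 0.4 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```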
Related papers
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
arXiv Detail & Related papers (2024-08-24T01:51:23Z)
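As context for the entry above, a minimal sketch of the Isotonic Mechanism's calibration step follows: raw review scores are projected, by isotonic regression, onto the order implied by the author's ranking. The scores below are made up, rank 1 is assumed to denote the author's best paper, and this is not the ICML experiment's reference implementation.

```python
# Calibrate raw review scores so they are consistent (nonincreasing) with the
# author's own ranking of their papers (rank 1 = best). Illustrative only.
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw_scores = np.array([5.2, 6.8, 4.9, 6.1])  # reviewer score per paper
author_rank = np.array([2, 1, 4, 3])         # author's ranking (1 = best)

order = np.argsort(author_rank)              # papers from best to worst
iso = IsotonicRegression(increasing=False)   # enforce nonincreasing scores
calibrated = np.empty_like(raw_scores)
calibrated[order] = iso.fit_transform(np.arange(len(order)), raw_scores[order])
print(calibrated)  # least-squares scores consistent with the author's ranking
```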
- SyROCCo: Enhancing Systematic Reviews using Machine Learning [6.805429133535976]
This paper explores the use of machine learning techniques to help navigate the systematic review process.
The application of ML techniques to subsequent stages of a review, such as data extraction and evidence mapping, is in its infancy.
arXiv Detail & Related papers (2024-06-24T11:04:43Z)
- CASIMIR: A Corpus of Scientific Articles enhanced with Multiple Author-Integrated Revisions [7.503795054002406]
We propose an original textual resource on the revision step of the writing process of scientific articles.
This new dataset, called CASIMIR, contains the multiple revised versions of 15,646 scientific articles from OpenReview, along with their peer reviews.
arXiv Detail & Related papers (2024-03-01T03:07:32Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
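As background on the term "field-normalized", a common bibliometric baseline (not necessarily one of the indicators proposed in the paper above) divides an article's citation count by the mean citation count of articles in the same field and year:

```python
# Mean-normalized citation score (MNCS-style) as a generic illustration of
# field normalization; fields, years, and counts are invented examples.
import pandas as pd

papers = pd.DataFrame({
    "field": ["CV", "CV", "NLP", "NLP", "NLP"],
    "year": [2022, 2022, 2022, 2022, 2022],
    "citations": [40, 10, 90, 30, 0],
})
baseline = papers.groupby(["field", "year"])["citations"].transform("mean")
papers["normalized"] = papers["citations"] / baseline  # 1.0 = field average
print(papers)
```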
- Towards a Quality Indicator for Research Data publications and Research Software publications -- A vision from the Helmholtz Association [0.24848203755267903]
There is not yet an established process to assess and evaluate the quality of research data and research software publications.
The Task Group Quality Indicators for Data and Software Publications is currently developing a quality indicator for research data and research software publications.
arXiv Detail & Related papers (2024-01-16T20:00:27Z)
- Application of Transformers based methods in Electronic Medical Records: A Systematic Literature Review [77.34726150561087]
This work presents a systematic literature review of state-of-the-art advances using transformer-based methods on electronic medical records (EMRs) in different NLP tasks.
arXiv Detail & Related papers (2023-04-05T22:19:42Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Tag-Aware Document Representation for Research Paper Recommendation [68.8204255655161]
We propose a hybrid approach that leverages deep semantic representation of research papers based on social tags assigned by users.
The proposed model is effective in recommending research papers even when the rating data is very sparse.
arXiv Detail & Related papers (2022-09-08T09:13:07Z)
- Ranking Scientific Papers Using Preference Learning [48.78161994501516]
We cast final decision-making as a paper ranking problem based on peer review texts and reviewer scores.
We introduce a novel, multi-faceted generic evaluation framework for making final decisions based on peer reviews.
arXiv Detail & Related papers (2021-09-02T19:41:47Z)
- Do open citations give insights on the qualitative peer-review evaluation in research assessments? An analysis of the Italian National Scientific Qualification [1.911678487931003]
The Italian National Scientific Qualification (NSQ) aims at deciding whether a scholar can apply to professorial academic positions.
It makes use of bibliometrics followed by a peer-review process of candidates' CVs.
We explore whether citation-based metrics, calculated considering only open bibliographic and citation data, can support the human peer-review of non-citation-based disciplines (NDs).
arXiv Detail & Related papers (2021-03-14T14:44:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.