The Values and Limits of Altmetrics
- URL: http://arxiv.org/abs/2112.09565v1
- Date: Fri, 17 Dec 2021 15:21:35 GMT
- Title: The Values and Limits of Altmetrics
- Authors: Grischa Fraumann
- Abstract summary: Some stakeholders in higher education have championed altmetrics as a new way to understand research impact.
This chapter explores the values and limits of altmetrics, including their role in evaluating, promoting, and disseminating research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Altmetrics are tools for measuring the impact of research beyond scientific
communities. In general, they measure online mentions of scholarly outputs,
such as on online social networks, blogs, and news sites. Some stakeholders in
higher education have championed altmetrics as a new way to understand research
impact and as an alternative or supplement to bibliometrics. In contrast,
others have criticized altmetrics as ill-conceived and limited in their use.
use. This chapter explores the values and limits of altmetrics, including their
role in evaluating, promoting, and disseminating research.
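As a concrete illustration of how such online mentions are collected in practice, below is a minimal sketch that queries Altmetric's public v1 API for one paper. The endpoint and response field names follow Altmetric's public documentation but may change, and the example DOI is arbitrary.

```python
# Minimal sketch: fetch online-mention counts for one DOI from Altmetric's
# public v1 API. Field names follow Altmetric's documented response format
# and may change; treat them as assumptions to verify.
import requests

def fetch_altmetrics(doi: str) -> dict:
    """Return mention counts for a DOI, or an empty dict if it is untracked."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # Altmetric has no record for this DOI
        return {}
    resp.raise_for_status()
    data = resp.json()
    return {
        "news": data.get("cited_by_msm_count", 0),     # mainstream media
        "blogs": data.get("cited_by_feeds_count", 0),  # blog posts
        "tweets": data.get("cited_by_tweeters_count", 0),
        "altmetric_score": data.get("score", 0.0),
    }

print(fetch_altmetrics("10.1038/nature12373"))  # arbitrary example DOI
```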
Related papers
- Evaluating Generative AI Systems is a Social Science Measurement Challenge [78.35388859345056]
We present a framework for measuring concepts related to the capabilities, impacts, opportunities, and risks of GenAI systems.
The framework distinguishes between four levels: the background concept, the systematized concept, the measurement instrument(s), and the instance-level measurements themselves.
arXiv Detail & Related papers (2024-11-17T02:35:30Z)
- Measuring publication relatedness using controlled vocabularies [0.0]
Controlled vocabularies provide a promising basis for measuring relatedness.
However, there is no comprehensive and direct test of their accuracy and suitability for different types of research questions.
This paper reviews existing measures, develops a new measure, and benchmarks the measures using TREC Genomics data as a ground truth of topics.
arXiv Detail & Related papers (2024-08-27T12:41:37Z)
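The entry above benchmarks several relatedness measures but does not reproduce them; as a simple baseline illustrating how a controlled vocabulary can ground relatedness, the sketch below scores two papers by the Jaccard overlap of their assigned descriptor sets. The MeSH-style terms are invented for illustration.

```python
# Baseline sketch: publication relatedness as Jaccard overlap of the
# controlled-vocabulary terms (e.g., MeSH descriptors) assigned to two papers.
# This is an illustrative baseline, not the new measure the paper develops.

def jaccard_relatedness(terms_a: set[str], terms_b: set[str]) -> float:
    """Fraction of vocabulary terms shared by the two publications."""
    if not terms_a or not terms_b:
        return 0.0
    return len(terms_a & terms_b) / len(terms_a | terms_b)

paper_1 = {"Genomics", "Gene Expression Profiling", "Humans"}
paper_2 = {"Genomics", "Sequence Analysis, DNA", "Humans"}
print(jaccard_relatedness(paper_1, paper_2))  # 0.5
```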
- Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In! [80.3129093617928]
Annually, at the Conference of Machine Translation (WMT), the Metrics Shared Task organizers conduct the meta-evaluation of Machine Translation (MT) metrics.
This work highlights two issues with the meta-evaluation framework currently employed in WMT, and assesses their impact on the metrics rankings.
We introduce the concept of sentinel metrics, which are designed explicitly to scrutinize the meta-evaluation process's accuracy, robustness, and fairness.
arXiv Detail & Related papers (2024-08-25T13:29:34Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [58.6354685593418]
This paper proposes several article-level, field-normalized, and large language model-empowered bibliometric indicators to evaluate reviews.
The newly emerging AI-generated literature reviews are also appraised.
This work offers insights into the current challenges of literature reviews and envisions future directions for their development.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
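The paper's own indicators are not reproduced here; the sketch below only illustrates what field normalization means in general: each article's citation count is divided by the mean citation count of articles in the same field and year, making scores comparable across fields. The input schema is assumed for illustration.

```python
# Illustrative sketch of field normalization: divide each article's citations
# by the mean citations of its (field, year) group so that scores are
# comparable across fields with different citation cultures.
from collections import defaultdict

def field_normalized_scores(articles: list[dict]) -> list[float]:
    """articles: [{"field": str, "year": int, "citations": int}, ...]"""
    groups = defaultdict(list)
    for art in articles:
        groups[(art["field"], art["year"])].append(art["citations"])
    means = {key: sum(vals) / len(vals) for key, vals in groups.items()}
    return [
        art["citations"] / means[(art["field"], art["year"])]
        if means[(art["field"], art["year"])] > 0 else 0.0
        for art in articles
    ]

papers = [
    {"field": "biology", "year": 2020, "citations": 30},
    {"field": "biology", "year": 2020, "citations": 10},
    {"field": "math", "year": 2020, "citations": 4},
    {"field": "math", "year": 2020, "citations": 2},
]
print(field_normalized_scores(papers))  # [1.5, 0.5, 1.33..., 0.66...]
```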
- An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produces highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
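A rough sketch of the redundancy analysis described above: if two evaluation metrics rank the same methods in nearly the same order, their rank correlation is high and one of them may be redundant. The metric names and scores below are invented for illustration.

```python
# Sketch of a metric-redundancy check: compute pairwise Spearman rank
# correlations between the scores different metrics assign to the same
# set of explanation methods. High correlation hints at redundancy.
from scipy.stats import spearmanr

# rows: metrics; columns: scores for the same four XAI methods (invented)
metric_scores = {
    "faithfulness": [0.9, 0.7, 0.4, 0.2],
    "localization": [0.8, 0.75, 0.5, 0.1],
    "stability": [0.1, 0.9, 0.3, 0.6],
}

names = list(metric_scores)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        rho, _ = spearmanr(metric_scores[a], metric_scores[b])
        print(f"{a} vs {b}: Spearman rho = {rho:.2f}")
```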
- Artificial intelligence technologies to support research assessment: A review [10.203602318836444]
This literature review identifies indicators that associate with higher impact or higher quality research from article text.
It includes studies that used machine learning techniques to predict citation counts or quality scores for journal articles or conference papers.
arXiv Detail & Related papers (2022-12-11T06:58:39Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- A Measure of Research Taste [91.3755431537592]
We present a citation-based measure that rewards both productivity and taste.
The presented measure, CAP, balances the impact of publications and their quantity.
We analyze the characteristics of CAP for highly-cited researchers in biology, computer science, economics, and physics.
arXiv Detail & Related papers (2021-05-17T18:01:47Z)
- Quantitative Evaluations on Saliency Methods: An Experimental Study [6.290238942982972]
We briefly summarize the status quo of the metrics, including faithfulness, localization, false-positives, sensitivity check, and stability.
We conclude that among all the methods we compare, no single explanation method dominates others in all metrics.
arXiv Detail & Related papers (2020-12-31T14:13:30Z)
- Early Indicators of Scientific Impact: Predicting Citations with Altmetrics [0.0]
We use altmetrics to predict the short-term and long-term citations that a scholarly publication could receive.
We build various classification and regression models and evaluate their performance, finding neural networks and ensemble models to perform best for these tasks.
arXiv Detail & Related papers (2020-12-25T16:25:07Z)
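A minimal sketch of the prediction setup described above, with synthetic data standing in for real early altmetric counts and later citation counts, and a random forest standing in for the paper's ensemble models.

```python
# Sketch: regress later citation counts on early altmetric counts.
# All data here is synthetic; a random forest stands in for the
# ensemble models the paper found to perform well.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.poisson(lam=5, size=(500, 4)).astype(float)  # tweets, news, blogs, ...
y = X @ np.array([2.0, 1.0, 0.5, 0.25]) + rng.normal(0, 2, 500)  # citations

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out papers:", round(model.score(X_test, y_test), 3))
```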
- How do academic topics shift across altmetric sources? A case study of the research area of Big Data [2.208242292882514]
Author keywords from publications and terms from online events are extracted as the main topics of the publications and of the online discussions of their audiences tracked by Altmetric.
Results show there are substantial differences between the two sets of topics around Big Data scientific research.
Blogs and News show a strong similarity in the terms commonly used, while Policy documents and Wikipedia articles differ most in how they consider and interpret Big Data-related research.
arXiv Detail & Related papers (2020-03-23T19:37:36Z)
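A toy sketch of the comparison described above: the terms drawn from each altmetric source are turned into bag-of-words vectors, and pairwise cosine similarity shows which sources discuss the research in similar terms. All terms below are invented for illustration, and the paper's actual method may differ.

```python
# Sketch: compare the vocabulary used by different altmetric sources via
# bag-of-words vectors and cosine similarity. The term lists are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_terms = {
    "blogs": "big data analytics machine learning privacy",
    "news": "big data analytics privacy business",
    "policy": "data governance regulation ethics",
    "wikipedia": "data set volume velocity variety",
}

matrix = CountVectorizer().fit_transform(source_terms.values())
sims = cosine_similarity(matrix)
names = list(source_terms)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: cosine = {sims[i, j]:.2f}")
```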