A Measure of Research Taste
- URL: http://arxiv.org/abs/2105.08089v1
- Date: Mon, 17 May 2021 18:01:47 GMT
- Title: A Measure of Research Taste
- Authors: Vladlen Koltun and David Hafner
- Abstract summary: We present a citation-based measure that rewards both productivity and taste.
The presented measure, CAP, balances the impact of publications and their quantity.
We analyze the characteristics of CAP for highly-cited researchers in biology, computer science, economics, and physics.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Researchers are often evaluated by citation-based metrics. Such metrics can
inform hiring, promotion, and funding decisions. Concerns have been expressed
that popular citation-based metrics incentivize researchers to maximize the
production of publications. Such incentives may not be optimal for scientific
progress. Here we present a citation-based measure that rewards both
productivity and taste: the researcher's ability to focus on impactful
contributions. The presented measure, CAP, balances the impact of publications
and their quantity, thus incentivizing researchers to consider whether a
publication is a useful addition to the literature. CAP is simple,
interpretable, and parameter-free. We analyze the characteristics of CAP for
highly-cited researchers in biology, computer science, economics, and physics,
using a corpus of millions of publications and hundreds of millions of
citations with yearly temporal granularity. CAP produces qualitatively
plausible outcomes and has a number of advantages over prior metrics. Results
can be explored at https://cap-measure.org/
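The abstract states the idea behind CAP (a parameter-free count that balances the impact of publications against their quantity) but does not reproduce its formal definition. The sketch below is a simplified, assumed reading of that idea, not the paper's exact formulation: it counts the publications whose citations in the evaluation year are at least as large as the researcher's total publication count, omitting the year offsets and citation windows the paper specifies. The function and variable names are illustrative only.
```python
from typing import List

def cap_sketch(citations_in_year: List[int]) -> int:
    """Simplified CAP-style count for one researcher in one evaluation year.

    citations_in_year[i] is the number of citations publication i received
    in the evaluation year. A publication counts toward the score only if
    its citations that year are at least the researcher's total number of
    publications, so adding low-impact papers raises the bar for every paper.
    """
    n_pubs = len(citations_in_year)  # total publication count sets the threshold
    return sum(1 for c in citations_in_year if c >= n_pubs)

# Illustrative comparison: the same five well-cited papers score lower once
# ten barely cited papers are added, unlike a pure productivity metric.
focused = [40, 35, 20, 12, 8]        # 5 papers, all well cited
diluted = focused + [1] * 10         # plus 10 barely cited papers
print(cap_sketch(focused))  # 5 -> every paper clears the threshold of 5
print(cap_sketch(diluted))  # 3 -> threshold rises to 15; only three papers clear it
```
Under this simplified reading, a new publication is a net positive only if its impact justifies the higher threshold it imposes, which mirrors the incentive the abstract describes.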
Related papers
- Predicting Award Winning Research Papers at Publication Time [5.318302558039615]
We predict the likelihood that a research paper will win an award, relying only on information available at publication time.
Our experiments use the ArnetMiner citation graph, with ground truth on award-winning papers obtained from a collection of best-paper awards from 32 computer science conferences.
Remarkably, the high recall and low false-negative rate show that the model performs very well at identifying papers that will not win an award.
arXiv Detail & Related papers (2024-06-18T12:09:10Z) - Mapping the Increasing Use of LLMs in Scientific Papers [99.67983375899719]
We conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals.
Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers.
arXiv Detail & Related papers (2024-04-01T17:45:15Z) - Position: AI/ML Influencers Have a Place in the Academic Process [82.2069685579588]
We investigate the role of social media influencers in enhancing the visibility of machine learning research.
We have compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
arXiv Detail & Related papers (2024-01-24T20:05:49Z) - Analyzing the Impact of Companies on AI Research Based on Publications [1.450405446885067]
We compare academic- and company-authored AI publications published in the last decade.
We find that the citation count an individual publication receives is significantly higher when it is (co-)authored by a company.
arXiv Detail & Related papers (2023-10-31T13:27:04Z) - Citation Trajectory Prediction via Publication Influence Representation
Using Temporal Knowledge Graph [52.07771598974385]
Existing approaches mainly rely on mining temporal and graph data from academic articles.
Our framework is composed of three modules: difference-preserved graph embedding, fine-grained influence representation, and learning-based trajectory calculation.
Experiments are conducted on both the APS academic dataset and our contributed AIPatent dataset.
arXiv Detail & Related papers (2022-10-02T07:43:26Z) - CitationIE: Leveraging the Citation Graph for Scientific Information
Extraction [89.33938657493765]
We use the citation graph of referential links between citing and cited papers.
We observe a sizable improvement in end-to-end information extraction over the state-of-the-art.
arXiv Detail & Related papers (2021-06-03T03:00:12Z) - A Graph Convolutional Neural Network based Framework for Estimating
Future Citations Count of Research Articles [0.03937354192623676]
We propose a Graph Convolutional Network (GCN) based framework for estimating future citation counts of research publications over both short-term (1-year) and long-term (5-year and 10-year) horizons.
We test our approach on the AMiner dataset, specifically on research articles from the computer science domain, comprising more than 0.8 million articles.
arXiv Detail & Related papers (2021-04-11T07:20:53Z) - Enhancing Scientific Papers Summarization with Citation Graph [78.65955304229863]
We redefine the task of scientific papers summarization by utilizing their citation graph.
We construct a novel scientific papers summarization dataset Semantic Scholar Network (SSN) which contains 141K research papers in different domains.
Our model achieves performance competitive with pretrained models.
arXiv Detail & Related papers (2021-04-07T11:13:35Z) - Early Indicators of Scientific Impact: Predicting Citations with
Altmetrics [0.0]
We use altmetrics to predict the short-term and long-term citations that a scholarly publication could receive.
We build various classification and regression models and evaluate their performance, finding neural networks and ensemble models to perform best for these tasks.
arXiv Detail & Related papers (2020-12-25T16:25:07Z) - Utilizing Citation Network Structure to Predict Citation Counts: A Deep
Learning Approach [0.0]
This paper proposes an end-to-end deep learning network, DeepCCP, which incorporates the effect of information cascades to address the citation count prediction problem.
According to experiments on 6 real data sets, DeepCCP is superior to the state-of-the-art methods in terms of the accuracy of citation count prediction.
arXiv Detail & Related papers (2020-09-06T05:27:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.