Examining Citations of Natural Language Processing Literature
- URL: http://arxiv.org/abs/2005.00912v1
- Date: Sat, 2 May 2020 20:01:59 GMT
- Title: Examining Citations of Natural Language Processing Literature
- Authors: Saif M. Mohammad
- Abstract summary: We show that only about 56% of the papers in AA are cited ten or more times.
CL Journal has the most cited papers, but its citation dominance has lessened in recent years.
Papers on sentiment classification, anaphora resolution, and entity recognition have the highest median citations.
- Score: 31.87319293259599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We extracted information from the ACL Anthology (AA) and Google Scholar (GS)
to examine trends in citations of NLP papers. We explore questions such as: how
well cited are papers of different types (journal articles, conference papers,
demo papers, etc.)? How well cited are papers from different areas within
NLP? Notably, we show that only about 56% of the papers in AA are cited
ten or more times. CL Journal has the most cited papers, but its citation
dominance has lessened in recent years. On average, long papers get almost
three times as many citations as short papers; and papers on sentiment
classification, anaphora resolution, and entity recognition have the highest
median citations. The analyses presented here, and the associated dataset of
NLP papers mapped to citations, have a number of uses including: understanding
how the field is growing and quantifying the impact of different types of
papers.
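The abstract's headline statistics (the share of papers cited ten or more times, and median citations per group) are simple aggregates over per-paper citation counts. A minimal sketch of how such figures could be computed, assuming a hypothetical list of citation counts (the paper's actual dataset maps ACL Anthology papers to Google Scholar citations):

```python
from statistics import median

def citation_stats(citation_counts, threshold=10):
    """Summarize a list of per-paper citation counts.

    Returns the fraction of papers cited at least `threshold` times
    and the median citation count.
    """
    if not citation_counts:
        raise ValueError("citation_counts must be non-empty")
    frac = sum(1 for c in citation_counts if c >= threshold) / len(citation_counts)
    return frac, median(citation_counts)

# Hypothetical counts for illustration only; not data from the paper.
counts = [0, 3, 12, 45, 7, 10, 2, 150, 9, 11]
frac_ten_plus, med = citation_stats(counts)
print(f"cited >= 10 times: {frac_ten_plus:.0%}, median citations: {med}")
# prints "cited >= 10 times: 50%, median citations: 9.5"
```

The same helper applied per category (long vs. short papers, or per sub-field) would yield the comparative medians the abstract reports.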
Related papers
- Position: AI/ML Influencers Have a Place in the Academic Process [82.2069685579588]
We investigate the role of social media influencers in enhancing the visibility of machine learning research.
We have compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
arXiv Detail & Related papers (2024-01-24T20:05:49Z)
- CausalCite: A Causal Formulation of Paper Citations [80.82622421055734]
CausalCite is a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers.
It is based on a novel causal inference method, TextMatch, which adapts the traditional matching framework to high-dimensional text embeddings.
We demonstrate the effectiveness of CausalCite on various criteria, such as high correlation with paper impact as reported by scientific experts.
arXiv Detail & Related papers (2023-11-05T23:09:39Z)
- NLLG Quarterly arXiv Report 06/23: What are the most influential current AI Papers? [15.830129136642755]
The objective is to offer a quick guide to the most relevant and widely discussed research, aiding both newcomers and established researchers in staying abreast of current trends.
We observe the dominance of papers related to Large Language Models (LLMs) and specifically ChatGPT during the first half of 2023.
NLP related papers are the most influential (around 60% of top papers) even though there are twice as many ML related papers in our data.
arXiv Detail & Related papers (2023-07-31T11:53:52Z)
- Forgotten Knowledge: Examining the Citational Amnesia in NLP [63.13508571014673]
We ask: how far back in time do we tend to go to cite papers? How has that changed over time, and what factors correlate with this citational attention/amnesia?
We show that around 62% of cited papers are from the immediate five years prior to publication, whereas only about 17% are more than ten years old.
We show that the median age and age diversity of cited papers were steadily increasing from 1990 to 2014, but since then, the trend has reversed, and current NLP papers have an all-time low temporal citation diversity.
arXiv Detail & Related papers (2023-05-29T18:30:34Z)
- Geographic Citation Gaps in NLP Research [63.13508571014673]
This work asks a series of questions on the relationship between geographical location and publication success.
We first created a dataset of 70,000 papers from the ACL Anthology, extracted their meta-information, and generated their citation network.
We show that not only are there substantial geographical disparities in paper acceptance and citation but also that these disparities persist even when controlling for a number of variables such as venue of publication and sub-field of NLP.
arXiv Detail & Related papers (2022-10-26T02:25:23Z)
- Cross-Lingual Citations in English Papers: A Large-Scale Analysis of Prevalence, Usage, and Impact [0.0]
We present an analysis of cross-lingual citations based on over one million English papers.
Among our findings are an increasing rate of citations to publications written in Chinese.
To facilitate further research, we make our collected data and source code publicly available.
arXiv Detail & Related papers (2021-11-07T15:34:02Z)
- Enhancing Scientific Papers Summarization with Citation Graph [78.65955304229863]
We redefine the task of scientific papers summarization by utilizing their citation graph.
We construct a novel scientific papers summarization dataset Semantic Scholar Network (SSN) which contains 141K research papers in different domains.
Our model can achieve competitive performance when compared with the pretrained models.
arXiv Detail & Related papers (2021-04-07T11:13:35Z)
- Utilizing Citation Network Structure to Predict Citation Counts: A Deep Learning Approach [0.0]
This paper proposes an end-to-end deep learning network, DeepCCP, which combines the effect of information cascade and looks at the citation counts prediction problem.
According to experiments on 6 real data sets, DeepCCP is superior to the state-of-the-art methods in terms of the accuracy of citation count prediction.
arXiv Detail & Related papers (2020-09-06T05:27:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.