NLLG Quarterly arXiv Report 09/24: What are the most influential current AI Papers?
- URL: http://arxiv.org/abs/2412.12121v1
- Date: Mon, 02 Dec 2024 22:10:38 GMT
- Title: NLLG Quarterly arXiv Report 09/24: What are the most influential current AI Papers?
- Authors: Christoph Leiter, Jonas Belouadi, Yanran Chen, Ran Zhang, Daniil Larionov, Aida Kostikova, Steffen Eger
- Abstract summary: The NLLG arXiv reports assist in navigating the rapidly evolving landscape of NLP and AI research across cs.CL, cs.CV, cs.AI, and cs.LG categories.
This fourth installment captures a transformative period in AI history - from January 1, 2023, following ChatGPT's debut, through September 30, 2024.
Our analysis reveals substantial new developments in the field - with 45% of the top 40 most-cited papers being new entries since our last report.
- Score: 21.68589129842815
- License:
- Abstract: The NLLG (Natural Language Learning & Generation) arXiv reports assist in navigating the rapidly evolving landscape of NLP and AI research across the cs.CL, cs.CV, cs.AI, and cs.LG categories. This fourth installment captures a transformative period in AI history - from January 1, 2023, following ChatGPT's debut, through September 30, 2024. Our analysis reveals substantial new developments in the field - 45% of the top 40 most-cited papers are new entries since our last report eight months ago - and offers insights into emerging trends and major breakthroughs, such as novel multimodal architectures, including diffusion and state space models. Natural Language Processing (NLP; cs.CL) remains the dominant main category in our top-40 list, but its dominance is declining in favor of computer vision (cs.CV) and general machine learning (cs.LG). This report also presents novel findings on the integration of generative AI in academic writing, documenting its increasing adoption since 2022 while revealing an intriguing pattern: top-cited papers show notably fewer markers of AI-generated content than random samples. Furthermore, we track the evolution of AI-associated language, identifying declining trends in previously common indicators such as "delve".
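The abstract's tracking of AI-associated vocabulary (e.g. the declining use of "delve") can be illustrated with a simple per-year frequency count. The sketch below is a hypothetical illustration only, not the report's actual pipeline: the marker list, the `records` input format, and the `marker_rate_by_year` helper are assumptions made for demonstration.

```python
# Hypothetical sketch (not the report's pipeline): estimate, per year, the share of
# abstracts that contain at least one AI-associated marker word such as "delve".
import re
from collections import defaultdict

MARKERS = {"delve", "intricate", "showcase", "underscore"}  # illustrative list only

def marker_rate_by_year(records):
    """records: iterable of (year, abstract_text) pairs; returns {year: fraction_with_marker}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for year, text in records:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        totals[year] += 1
        if tokens & MARKERS:
            hits[year] += 1
    return {year: hits[year] / totals[year] for year in sorted(totals)}

# Toy usage with two made-up abstracts:
demo = [(2021, "We study dependency parsing."),
        (2023, "We delve into intricate attention patterns.")]
print(marker_rate_by_year(demo))  # {2021: 0.0, 2023: 1.0}
```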
Related papers
- Neuro-Symbolic AI in 2024: A Systematic Review [0.29260385019352086]
The review followed the PRISMA methodology, utilizing databases such as IEEE Xplore, Google Scholar, arXiv, ACM, and SpringerLink.
From an initial pool of 1,428 papers, 167 met the inclusion criteria and were analyzed in detail.
The majority of research efforts are concentrated in the areas of learning and inference, logic and reasoning, and knowledge representation.
arXiv Detail & Related papers (2025-01-09T18:48:35Z)
- What fifty-one years of Linguistics and Artificial Intelligence research tell us about their correlation: A scientometric review [0.0]
This study provides a thorough scientometric analysis of this correlation, synthesizing the intellectual production during 51 years, from 1974 to 2024.
It involves 5750 Web of Science-indexed articles published in 2124 journals, written by 20835 authors.
Results indicate that in the 1980s and 1990s, linguistics and AI research was not robust, characterized by unstable publication over time.
It has, however, witnessed a remarkable increase in publications since then, reaching 1478 articles in 2023 and 546 articles in the January-March 2024 timespan.
arXiv Detail & Related papers (2024-11-29T17:12:06Z)
- Artificial Intelligence Index Report 2024 [15.531650534547945]
The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI).
The AI Index is recognized globally as one of the most credible and authoritative sources for data and insights on AI.
This year's edition surpasses all previous ones in size, scale, and scope, reflecting the growing significance that AI is coming to hold in all of our lives.
arXiv Detail & Related papers (2024-05-29T20:59:57Z)
- Mapping the Increasing Use of LLMs in Scientific Papers [99.67983375899719]
We conduct the first systematic, large-scale analysis of 950,965 papers published between January 2020 and February 2024 on the arXiv and bioRxiv preprint servers and in Nature portfolio journals.
Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers.
arXiv Detail & Related papers (2024-04-01T17:45:15Z)
- Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews [51.453135368388686]
We present an approach for estimating the fraction of text in a large corpus that is likely to have been substantially modified or produced by a large language model (LLM).
Our maximum likelihood model leverages expert-written and AI-generated reference texts to accurately and efficiently examine real-world LLM use at the corpus level.
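As a rough illustration of the corpus-level maximum likelihood idea described above (and not the paper's actual estimator), one can grid-search the mixture weight alpha that best explains per-document log-likelihoods under an expert-written reference model and an AI-generated reference model. The function name, the grid search, and the toy numbers below are assumptions for demonstration.

```python
# Hypothetical sketch: choose the mixture weight alpha in [0, 1] that maximizes
#   sum_i log((1 - alpha) * P_expert(doc_i) + alpha * P_ai(doc_i)),
# given per-document log-likelihoods under the two reference models.
import numpy as np

def estimate_ai_fraction(ll_expert, ll_ai, grid_size=1001):
    ll_expert = np.asarray(ll_expert, dtype=float)
    ll_ai = np.asarray(ll_ai, dtype=float)
    alphas = np.linspace(1e-6, 1 - 1e-6, grid_size)  # avoid log(0) at the endpoints
    # (grid_size, n_docs) matrix of per-document mixture log-likelihoods
    mix_ll = np.logaddexp(np.log1p(-alphas)[:, None] + ll_expert[None, :],
                          np.log(alphas)[:, None] + ll_ai[None, :])
    return float(alphas[np.argmax(mix_ll.sum(axis=1))])  # alpha with highest corpus log-likelihood

# Toy usage: three documents better explained by the expert model, two by the AI model.
print(estimate_ai_fraction(ll_expert=[-10.0, -9.5, -11.0, -30.0, -28.0],
                           ll_ai=[-25.0, -26.0, -24.0, -9.0, -10.0]))  # approx. 0.4
```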
arXiv Detail & Related papers (2024-03-11T21:51:39Z)
- NLLG Quarterly arXiv Report 09/23: What are the most influential current AI Papers? [21.68589129842815]
The US dominates among both top-40 and top-9k papers, followed by China.
Europe clearly lags behind and is hardly represented in the top-40 most cited papers.
US industry is largely overrepresented in the top-40 most influential papers.
arXiv Detail & Related papers (2023-12-09T21:42:20Z)
- NLLG Quarterly arXiv Report 06/23: What are the most influential current AI Papers? [15.830129136642755]
The objective is to offer a quick guide to the most relevant and widely discussed research, aiding both newcomers and established researchers in staying abreast of current trends.
We observe the dominance of papers related to Large Language Models (LLMs) and specifically ChatGPT during the first half of 2023.
NLP-related papers are the most influential (around 60% of the top papers), even though there are twice as many ML-related papers in our data.
arXiv Detail & Related papers (2023-07-31T11:53:52Z)
- Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z)
- A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT [63.58711128819828]
ChatGPT and other Generative AI (GAI) techniques belong to the category of Artificial Intelligence Generated Content (AIGC).
The goal of AIGC is to make the content creation process more efficient and accessible, allowing for the production of high-quality content at a faster pace.
arXiv Detail & Related papers (2023-03-07T20:36:13Z)
- State-of-the-art generalisation research in NLP: A taxonomy and review [87.1541712509283]
We present a taxonomy for characterising and understanding generalisation research in NLP.
Our taxonomy is based on an extensive literature review of generalisation research.
We use our taxonomy to classify over 400 papers that test generalisation.
arXiv Detail & Related papers (2022-10-06T16:53:33Z)
- STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits [60.37683428887577]
We present a novel network called STEP, to classify perceived human emotion from gaits.
We use hundreds of annotated real-world gait videos and augment them with thousands of annotated synthetic gaits.
STEP learns affective features and achieves a classification accuracy of 89% on E-Gait, which is 14-30% more accurate than prior methods.
arXiv Detail & Related papers (2019-10-28T18:43:48Z)