Machine Learning vs. Deep Learning in 5G Networks -- A Comparison of
Scientific Impact
- URL: http://arxiv.org/abs/2210.07327v1
- Date: Thu, 13 Oct 2022 19:54:17 GMT
- Title: Machine Learning vs. Deep Learning in 5G Networks -- A Comparison of
Scientific Impact
- Authors: Ilker Turker, Serhat Orkun Tan
- Abstract summary: Machine learning (ML) and deep learning (DL) techniques are used in 5G networks.
Our study aims to uncover the differences in scientific impact between these two techniques by means of statistical bibliometrics.
The Web of Science (WoS) database hosts 2,245 papers for ML-related and 1,407 papers for DL-related studies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The introduction of fifth-generation (5G) wireless network technology has met
the crucial high-capacity and high-speed needs of new-generation mobile
applications. Recent advances in Artificial Intelligence (AI) have also empowered 5G
cellular networks along two main streams: machine learning (ML) and deep
learning (DL) techniques. Our study aims to uncover the differences in
scientific impact between these two techniques by means of statistical
bibliometrics. The analysis covers citation performance with respect to
indexing type, funding availability, and journal versus conference
publishing, together with the distribution of these metrics over the years, to
evaluate popularity trends in detail. The Web of Science (WoS)
database hosts 2,245 papers for ML-related and 1,407 papers for DL-related studies. DL
studies, starting at a 9% share in 2013, reached a 45% share in 2022 among
all DL- and ML-related studies. Results on scientific impact indicate
that DL studies receive a slightly higher average normalized citation count (2.256)
than ML studies (2.118) in 5G, while SCI-Expanded-indexed papers on both sides
show similar citation performance (3.165 and 3.162, respectively).
ML-related studies indexed in ESCI show twice the citation performance of
their DL counterparts. Conference papers in the DL domain and journal papers in the ML domain
attract more scientific interest than their counterparts, with minor
differences. The highest citation performance for ML studies occurs in
2014, while the DL peak is observed in 2017. We conclude that
both publication and citation rates for DL-related papers tend to increase and
to outperform ML-based studies in the 5G domain in terms of citation metrics.
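The comparison above hinges on average normalized citation counts, where each paper's citations are scaled against a baseline for papers of the same age before group averages are taken. A minimal sketch of that idea, using invented numbers and a simple per-year baseline (WoS's actual normalization is field- and year-specific and more involved):

```python
# Sketch of an average normalized citation comparison, the metric the
# abstract reports (2.256 for DL vs. 2.118 for ML). All figures below
# are made up for illustration; the per-year mean stands in for the
# more elaborate WoS baseline.
from collections import defaultdict


def normalized_citation_averages(papers):
    """papers: list of dicts with 'group', 'year', and 'citations'.

    Each paper's citation count is divided by the mean citations of
    all papers published the same year, then averaged per group.
    """
    by_year = defaultdict(list)
    for p in papers:
        by_year[p["year"]].append(p["citations"])
    year_mean = {y: sum(c) / len(c) for y, c in by_year.items()}

    totals = defaultdict(float)
    counts = defaultdict(int)
    for p in papers:
        baseline = year_mean[p["year"]]
        totals[p["group"]] += p["citations"] / baseline if baseline else 0.0
        counts[p["group"]] += 1
    return {g: totals[g] / counts[g] for g in totals}


sample = [
    {"group": "ML", "year": 2014, "citations": 10},
    {"group": "DL", "year": 2014, "citations": 30},
    {"group": "ML", "year": 2015, "citations": 20},
    {"group": "DL", "year": 2015, "citations": 20},
]
print(normalized_citation_averages(sample))  # → {'ML': 0.75, 'DL': 1.25}
```

A score above 1.0 means a group is cited more than an average paper of the same vintage, which is why normalized values such as 2.256 and 2.118 indicate well-above-average impact for both techniques.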
Related papers
- A Comprehensive Survey of Scientific Large Language Models and Their Applications in Scientific Discovery [68.48094108571432]
Large language models (LLMs) have revolutionized the way text and other modalities of data are handled.
We aim to provide a more holistic view of the research landscape by unveiling cross-field and cross-modal connections between scientific LLMs.
arXiv Detail & Related papers (2024-06-16T08:03:24Z) - MASSW: A New Dataset and Benchmark Tasks for AI-Assisted Scientific Workflows [58.56005277371235]
We introduce MASSW, a comprehensive text dataset on Multi-Aspect Summarization of Scientific Workflows.
MASSW includes more than 152,000 peer-reviewed publications from 17 leading computer science conferences spanning the past 50 years.
We demonstrate the utility of MASSW through multiple novel machine-learning tasks that can be benchmarked using this new dataset.
arXiv Detail & Related papers (2024-06-10T15:19:09Z) - MSciNLI: A Diverse Benchmark for Scientific Natural Language Inference [65.37685198688538]
This paper presents MSciNLI, a dataset containing 132,320 sentence pairs extracted from five new scientific domains.
We establish strong baselines on MSciNLI by fine-tuning Pre-trained Language Models (PLMs) and prompting Large Language Models (LLMs).
We show that domain shift degrades the performance of scientific NLI models which demonstrates the diverse characteristics of different domains in our dataset.
arXiv Detail & Related papers (2024-04-11T18:12:12Z) - Mapping the Increasing Use of LLMs in Scientific Papers [99.67983375899719]
We conduct the first systematic, large-scale analysis across 950,965 papers published between January 2020 and February 2024 on the arXiv, bioRxiv, and Nature portfolio journals.
Our findings reveal a steady increase in LLM usage, with the largest and fastest growth observed in Computer Science papers.
arXiv Detail & Related papers (2024-04-01T17:45:15Z) - Position: AI/ML Influencers Have a Place in the Academic Process [82.2069685579588]
We investigate the role of social media influencers in enhancing the visibility of machine learning research.
We have compiled a comprehensive dataset of over 8,000 papers, spanning tweets from December 2018 to October 2023.
Our statistical and causal inference analysis reveals a significant increase in citations for papers endorsed by these influencers.
arXiv Detail & Related papers (2024-01-24T20:05:49Z) - On the Readiness of Scientific Data for a Fair and Transparent Use in
Machine Learning [1.961305559606562]
We analyze how scientific data documentation meets the needs of the machine learning community and regulatory bodies for its use in ML technologies.
We examine a sample of 4041 data papers of different domains, assessing their completeness and coverage of the requested dimensions.
We propose a set of recommendation guidelines for data creators and scientific data publishers to increase their data's preparedness for its transparent and fairer use in ML technologies.
arXiv Detail & Related papers (2024-01-18T12:11:27Z) - Scientific Impact of Graph-Based Approaches in Deep Learning Studies --
A Bibliometric Comparison [0.0]
It is outlined that deep learning-based studies gained momentum after 2013, and the share of graph-based approaches among all deep learning studies increased linearly from 1% to 4% over the following 10 years.
Despite similar performance in recent years, graph-based studies show twice the citation performance of traditional approaches as they age.
arXiv Detail & Related papers (2022-10-13T20:23:43Z) - Deep Graph Learning for Anomalous Citation Detection [55.81334139806342]
We propose a novel deep graph learning model, namely GLAD (Graph Learning for Anomaly Detection), to identify anomalies in citation networks.
Within the GLAD framework, we propose an algorithm called CPU (Citation PUrpose) to discover the purpose of citation based on citation texts.
arXiv Detail & Related papers (2022-02-23T09:05:28Z) - Deep Learning in Science [0.0]
This paper provides insights on the diffusion and impact of Deep Learning in science.
We use a Natural Language Processing (NLP) approach on the arXiv.org publication corpus.
Our findings suggest that DL does not (yet?) work as an autopilot to navigate complex knowledge landscapes and overthrow their structure.
arXiv Detail & Related papers (2020-09-03T10:41:29Z) - Synergy between Machine/Deep Learning and Software Engineering: How Far
Are We? [35.606916133846966]
Since 2009, the deep learning revolution has stimulated the synergy between Machine Learning (ML)/Deep Learning (DL) and Software Engineering (SE).
We conducted a 10-year Systematic Literature Review on 906 ML/DL-related SE papers published between 2009 and 2018.
arXiv Detail & Related papers (2020-08-12T18:19:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.