The Empowerment of Science of Science by Large Language Models: New Tools and Methods
- URL: http://arxiv.org/abs/2511.15370v1
- Date: Wed, 19 Nov 2025 11:57:22 GMT
- Title: The Empowerment of Science of Science by Large Language Models: New Tools and Methods
- Authors: Guoqiang Liang, Jingqian Gong, Mengxuan Li, Gege Lin, Shuo Zhang
- Abstract summary: Large language models (LLMs) have exhibited exceptional capabilities in natural language understanding and generation, image recognition, and multimodal tasks. This manuscript conducts a comprehensive review of the core technologies that support LLMs from a user standpoint, including prompt engineering, knowledge-enhanced retrieval-augmented generation, fine-tuning, pretraining, and tool learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have exhibited exceptional capabilities in natural language understanding and generation, image recognition, and multimodal tasks, charting a course towards AGI and emerging as a central issue in the global technological race. This manuscript conducts a comprehensive review of the core technologies that support LLMs from a user standpoint, including prompt engineering, knowledge-enhanced retrieval-augmented generation, fine-tuning, pretraining, and tool learning. Additionally, it traces the historical development of the Science of Science (SciSci) and presents a forward-looking perspective on the potential applications of LLMs within the scientometric domain. Furthermore, it discusses the prospect of an AI-agent-based model for scientific evaluation, and presents new methods for research-front detection and knowledge-graph construction with LLMs.
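To make the knowledge-graph-construction idea mentioned in the abstract concrete, here is a minimal, hypothetical sketch of the prompt-engineering pattern it implies: a few-shot prompt asking an LLM to emit (subject, relation, object) triples from a paper abstract, plus a parser for the expected output format. The prompt wording, the triple format, and the parser are illustrative assumptions, not the paper's actual method; the LLM call itself is left as a stub.

```python
import re

def build_triple_prompt(abstract: str) -> str:
    """Few-shot prompt asking an LLM to emit knowledge-graph triples.

    The format shown here is an assumed convention, not the paper's.
    """
    return (
        "Extract knowledge-graph triples from the abstract below.\n"
        "Format each triple as: (subject | relation | object)\n\n"
        "Example abstract: 'BERT improves question answering.'\n"
        "Example output: (BERT | improves | question answering)\n\n"
        f"Abstract: {abstract}\nOutput:"
    )

def parse_triples(llm_output: str) -> list[tuple[str, str, str]]:
    """Parse '(a | b | c)' lines from the model's raw text response."""
    pattern = re.compile(r"\(([^|)]+)\|([^|)]+)\|([^|)]+)\)")
    return [tuple(part.strip() for part in m.groups())
            for m in pattern.finditer(llm_output)]
```

In practice the prompt would be sent to a hosted model and `parse_triples` applied to its completion; the parsed triples can then be loaded into any graph store.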
Related papers
- LLM-Supported Formal Knowledge Representation for Enhancing Control Engineering Content with an Interactive Semantic Layer
This paper briefly describes an LLM-supported method for semi-automated generation of formal knowledge representations. We demonstrate how language models can assist in transforming natural-language descriptions and mathematical definitions into a formalized knowledge graph.
arXiv Detail & Related papers (2025-11-04T17:36:57Z)
- Continual Learning for Generative AI: From LLMs to MLLMs and Beyond
We present a comprehensive survey of continual learning methods for mainstream generative AI models. We categorize these approaches into three paradigms: architecture-based, regularization-based, and replay-based. We analyze continual learning setups for different generative models, including training objectives, benchmarks, and core backbones.
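Of the three paradigms this survey names, the replay-based one is the easiest to illustrate: a bounded buffer retains a sample of past-task data, which is interleaved with the current task's batches to mitigate forgetting. The reservoir-sampling buffer below is a generic sketch of that idea, not code from the survey.

```python
import random

class ReplayBuffer:
    """Reservoir-style buffer retaining a bounded uniform sample of past data."""

    def __init__(self, capacity: int, seed: int = 0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total items ever offered to the buffer
        self.rng = random.Random(seed)

    def add(self, item):
        """Reservoir sampling: every item seen so far has equal retention odds."""
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(item)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = item

    def sample(self, k: int):
        """Draw old examples to mix into the current task's training batch."""
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))
```

During training on task t, each batch would combine fresh task-t examples with `buffer.sample(k)` draws from earlier tasks.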
arXiv Detail & Related papers (2025-06-16T02:27:25Z)
- A Survey of Spatio-Temporal EEG data Analysis: from Models to Applications
This survey focuses on emerging methods and technologies that are poised to transform our comprehension and interpretation of brain activity.
We delve into self-supervised learning methods that enable the robust representation of brain signals.
We also explore emerging discriminative methods, including graph neural networks (GNN), foundation models, and large language models (LLMs)-based approaches.
The survey provides an extensive overview of these cutting-edge techniques, their current applications, and the profound implications they hold for future research and clinical practice.
arXiv Detail & Related papers (2024-09-26T08:09:15Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
RA-LLMs have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge.
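The retrieval-then-generate pattern this survey describes can be sketched in a few lines: retrieve the documents most similar to the query, then assemble them into the prompt as grounding context. The toy retriever below uses bag-of-words cosine similarity purely for illustration; a real RA-LLM system would use dense neural embeddings and an actual model call, and all function names here are assumptions.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Assemble retrieved passages into a grounded prompt for the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )
```

The resulting prompt is what gets sent to the LLM, so the generation step is conditioned on external, up-to-date knowledge rather than the model's parameters alone.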
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- A critical review of methods and challenges in large language models
This review provides an in-depth analysis of Large Language Models (LLMs). It examines the evolution from Recurrent Neural Networks (RNNs) to Transformer models, and describes state-of-the-art techniques such as in-context learning and various fine-tuning approaches.
arXiv Detail & Related papers (2024-04-18T08:01:20Z)
- Large Language Models for Education: A Survey and Outlook
We systematically review the technological advancements in each perspective, organize related datasets and benchmarks, and identify the risks and challenges associated with deploying LLMs in education.
Our survey aims to provide a comprehensive technological picture for educators, researchers, and policymakers to harness the power of LLMs to revolutionize educational practices and foster a more effective personalized learning environment.
arXiv Detail & Related papers (2024-03-26T21:04:29Z)
- ChatGPT Alternative Solutions: Large Language Models Survey
Large Language Models (LLMs) have ignited a surge in research contributions within this domain.
Recent years have witnessed a dynamic synergy between academia and industry, propelling the field of LLM research to new heights.
This survey furnishes a well-rounded perspective on the current state of generative AI, shedding light on opportunities for further exploration, enhancement, and innovation.
arXiv Detail & Related papers (2024-03-21T15:16:50Z)
- Advancing Graph Representation Learning with Large Language Models: A Comprehensive Survey of Techniques
The integration of Large Language Models (LLMs) with Graph Representation Learning (GRL) marks a significant evolution in analyzing complex data structures.
This collaboration harnesses the sophisticated linguistic capabilities of LLMs to improve the contextual understanding and adaptability of graph models.
Despite a growing body of research dedicated to integrating LLMs into the graph domain, a comprehensive review that deeply analyzes the core components and operations is notably lacking.
arXiv Detail & Related papers (2024-02-04T05:51:14Z)
- Scientific Large Language Models: A Survey on Biological & Chemical Domains
Large Language Models (LLMs) have emerged as a transformative power in enhancing natural language comprehension.
The application of LLMs extends beyond conventional linguistic boundaries, encompassing specialized linguistic systems developed within various scientific disciplines.
As a burgeoning area in the community of AI for Science, scientific LLMs warrant comprehensive exploration.
arXiv Detail & Related papers (2024-01-26T05:33:34Z)
- Large Language Models for Generative Information Extraction: A Survey
Large Language Models (LLMs) have demonstrated remarkable capabilities in text understanding and generation.
We present an extensive overview by categorizing these works in terms of various IE subtasks and techniques.
We empirically analyze the most advanced methods and discover the emerging trend of IE tasks with LLMs.
arXiv Detail & Related papers (2023-12-29T14:25:22Z)
- A Survey of Large Language Models
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To mark the difference in parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.