Citation Structural Diversity: A Novel and Concise Metric Combining Structure and Semantics for Literature Evaluation
- URL: http://arxiv.org/abs/2501.02429v1
- Date: Sun, 05 Jan 2025 03:24:37 GMT
- Title: Citation Structural Diversity: A Novel and Concise Metric Combining Structure and Semantics for Literature Evaluation
- Authors: Mingyue Kong, Yinglong Zhang, Likun Sheng, Kaifeng Hong
- Abstract summary: The study examines the influence of the proposed citation structural diversity model on citation volume and long-term academic impact.
The findings reveal that literature with higher citation structural diversity demonstrates notable advantages in both citation frequency and sustained academic influence.
- Score: 0.562479170374811
- Abstract: As academic research becomes increasingly diverse, traditional literature evaluation methods face significant limitations, particularly in capturing the complexity of academic dissemination and the multidimensional impacts of literature. To address these challenges, this paper introduces a novel literature evaluation model of citation structural diversity, with a focus on assessing its feasibility as an evaluation metric. By refining the citation network and incorporating both citation structural features and semantic information, the study examines the influence of the proposed model of citation structural diversity on citation volume and long-term academic impact. The findings reveal that literature with higher citation structural diversity demonstrates notable advantages in both citation frequency and sustained academic influence. Through data grouping and a decade-long citation trend analysis, the potential application of this model in literature evaluation is further validated. This research offers a fresh perspective on optimizing literature evaluation methods and emphasizes the distinct advantages of citation structural diversity in measuring interdisciplinarity.
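The abstract does not reproduce the model's exact formula, but the core idea of scoring a paper by how structurally diverse its citers are can be illustrated. Below is a minimal sketch, assuming a networkx citation graph, greedy-modularity communities as the structural signal, and Shannon entropy as the diversity score; the paper's network refinement and semantic component are omitted, and all function and variable names are illustrative.

```python
# Illustrative sketch only: the paper's actual model refines the citation
# network and also incorporates semantic information; neither step is
# reproduced here. Diversity is approximated as the Shannon entropy of the
# community memberships of a paper's citers.
import math
import networkx as nx

def structural_diversity(citations: nx.DiGraph, paper: str) -> float:
    """Entropy of community memberships among papers citing `paper`.

    `citations` has an edge u -> v meaning "u cites v". Returns 0.0 when
    the paper has no citers or all citers fall in one community.
    """
    citers = list(citations.predecessors(paper))
    if not citers:
        return 0.0
    # Community detection on the undirected projection of the graph.
    communities = nx.algorithms.community.greedy_modularity_communities(
        citations.to_undirected())
    membership = {n: i for i, comm in enumerate(communities) for n in comm}
    counts = {}
    for c in citers:
        counts[membership[c]] = counts.get(membership[c], 0) + 1
    total = len(citers)
    return -sum((k / total) * math.log(k / total) for k in counts.values())

# Toy graph: p1 draws citations from two clusters, p2 from one.
G = nx.DiGraph([("a", "b"), ("b", "c"), ("x", "y"), ("y", "z"),
                ("a", "p1"), ("x", "p1"), ("a", "p2"), ("b", "p2")])
print(structural_diversity(G, "p1"))  # expected > 0: citers span communities
print(structural_diversity(G, "p2"))  # expected 0.0: citers share one community
```

On this reading, two papers with the same citation count can score very differently: the one cited from several distinct regions of the network gets the higher diversity score.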
Related papers
- Bridging the Evaluation Gap: Leveraging Large Language Models for Topic Model Evaluation [0.0]
This study presents a framework for automated evaluation of dynamically evolving topics in scientific literature using Large Language Models (LLMs).
The proposed approach harnesses LLMs to measure key quality dimensions, such as coherence, repetitiveness, diversity, and topic-document alignment, without heavy reliance on expert annotators or narrow statistical metrics.
arXiv Detail & Related papers (2025-02-11T08:23:56Z)
- Why do you cite? An investigation on citation intents and decision-making classification processes [1.7812428873698407]
This study emphasizes the importance of reliably classifying citation intents.
We present a study utilizing advanced ensemble strategies for Citation Intent Classification (CIC).
One of our models sets a new state-of-the-art (SOTA) with an 89.46% Macro-F1 score on the SciCite benchmark.
arXiv Detail & Related papers (2024-07-18T09:29:33Z)
- Time to Cite: Modeling Citation Networks using the Dynamic Impact Single-Event Embedding Model [0.33123773366516646]
Citation networks are a prominent example of single-event dynamic networks.
We propose a novel likelihood function for the characterization of such single-event networks.
The Dynamic Impact Single-Event Embedding model (DISEE) reconciles static latent distance network embedding approaches with classical dynamic impact assessments.
arXiv Detail & Related papers (2024-02-28T22:59:26Z)
- A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [55.33653554387953]
Pattern Analysis and Machine Intelligence (PAMI) has led to numerous literature reviews aimed at collecting and organizing fragmented information.
This paper presents a thorough analysis of these literature reviews within the PAMI field.
We try to address three core research questions: (1) What are the prevalent structural and statistical characteristics of PAMI literature reviews; (2) What strategies can researchers employ to efficiently navigate the growing corpus of reviews; and (3) What are the advantages and limitations of AI-generated reviews compared to human-authored ones.
arXiv Detail & Related papers (2024-02-20T11:28:50Z)
- A Content-Based Novelty Measure for Scholarly Publications: A Proof of Concept [9.148691357200216]
We introduce an information-theoretic measure of novelty in scholarly publications.
This measure quantifies the degree of 'surprise' perceived by a language model that represents the word distribution of scholarly discourse (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-01-08T03:14:24Z)
- Multi-Dimensional Evaluation of Text Summarization with In-Context Learning [79.02280189976562]
In this paper, we study the efficacy of large language models as multi-dimensional evaluators using in-context learning.
Our experiments show that in-context learning-based evaluators are competitive with learned evaluation frameworks for the task of text summarization.
We then analyze the effects of factors such as the selection and number of in-context examples on performance.
arXiv Detail & Related papers (2023-06-01T23:27:49Z)
- Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis [89.04041100520881]
This research proposes to retrieve textual and visual evidence based on the object, sentence, and whole image.
We develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning between the same and different modalities.
arXiv Detail & Related papers (2023-05-25T15:26:13Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, author prestige, and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts.
However, editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- Enhancing Identification of Structure Function of Academic Articles Using Contextual Information [6.28532577139029]
This paper takes articles of the ACL conference as the corpus to identify the structure function of academic articles.
We employ traditional machine learning models and deep learning models to construct classifiers based on various feature inputs.
Inspired by (2), this paper introduces contextual information into the deep learning models and achieves significant results.
arXiv Detail & Related papers (2021-11-28T11:21:21Z)
- A Survey on Text Classification: From Shallow to Deep Learning [83.47804123133719]
The last decade has seen a surge of research in this area due to the unprecedented success of deep learning.
This paper fills the gap by reviewing the state-of-the-art approaches from 1961 to 2021.
We create a taxonomy for text classification according to the text involved and the models used for feature extraction and classification.
arXiv Detail & Related papers (2020-08-02T00:09:03Z)
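As a companion to the "Content-Based Novelty Measure" entry above, here is a minimal sketch of the surprisal idea, assuming a pretrained causal language model (GPT-2 via Hugging Face transformers) as a stand-in for that paper's purpose-built model of scholarly discourse; the function name and choice of model are illustrative, not the authors' method.

```python
# Sketch of novelty-as-surprise: score a text by the mean surprisal
# (next-token cross-entropy, in nats) its tokens receive under a
# language model. GPT-2 is a stand-in; the cited paper builds its own
# model of the word distribution of scholarly discourse.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_surprisal(text: str, model_name: str = "gpt2") -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels == input_ids, the returned loss is the mean
        # next-token cross-entropy, i.e. average surprisal per token.
        loss = model(input_ids, labels=input_ids).loss
    return loss.item()

# Higher scores suggest wording that is less expected under the model.
print(mean_surprisal("We introduce an information-theoretic measure of novelty."))
```

On this reading, a publication whose wording departs most from the model's learned distribution of prior literature would receive the highest novelty score.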
This list is automatically generated from the titles and abstracts of the papers on this site.