SECite: Analyzing and Summarizing Citations in Software Engineering Literature
- URL: http://arxiv.org/abs/2601.07939v1
- Date: Mon, 12 Jan 2026 19:10:01 GMT
- Title: SECite: Analyzing and Summarizing Citations in Software Engineering Literature
- Authors: Shireesh Reddy Pyreddy, Khaja Valli Pathan, Hasan Masum, Tarannum Shaila Zaman
- Abstract summary: SECite is a novel approach for evaluating scholarly impact through sentiment analysis of citation contexts. We develop a semi-automated pipeline to extract citations referencing nine research papers. We apply advanced natural language processing (NLP) techniques with unsupervised machine learning to classify these citation statements as positive or negative.
- Score: 0.13999481573773073
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identifying the strengths and limitations of a research paper is a core component of any literature review. However, traditional summaries reflect only the authors' self-presented perspective. Analyzing how other researchers discuss and cite the paper can offer a deeper, more practical understanding of its contributions and shortcomings. In this research, we introduce SECite, a novel approach for evaluating scholarly impact through sentiment analysis of citation contexts. We develop a semi-automated pipeline to extract citations referencing nine research papers and apply advanced natural language processing (NLP) techniques with unsupervised machine learning to classify these citation statements as positive or negative. Beyond sentiment classification, we use generative AI to produce sentiment-specific summaries that capture the strengths and limitations of each target paper, derived both from clustered citation groups and from the full text. Our findings reveal meaningful patterns in how the academic community perceives these works, highlighting areas of alignment and divergence between external citation feedback and the authors' own presentation. By integrating citation sentiment analysis with LLM-based summarization, this study provides a comprehensive framework for assessing scholarly contributions.
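As a rough illustration of the classification step described in the abstract, the sketch below labels a citation context as positive or negative using hypothetical seed lexicons. This is a simplified stand-in: the seed word lists and function name are assumptions, and SECite itself applies NLP techniques with unsupervised machine learning rather than a fixed lexicon.

```python
# Minimal sketch: label a citation context by polarity, using small
# hypothetical seed lexicons (not SECite's actual method).
POSITIVE_SEEDS = {"effective", "outperforms", "robust", "novel", "improves"}
NEGATIVE_SEEDS = {"fails", "limited", "inaccurate", "lacks", "overhead"}

def label_citation(context: str) -> str:
    # Normalize tokens and strip common punctuation before matching.
    tokens = {t.strip(".,;:()").lower() for t in context.split()}
    score = len(tokens & POSITIVE_SEEDS) - len(tokens & NEGATIVE_SEEDS)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A pipeline like the one the paper describes would replace the seed lexicons with learned representations (e.g., clustered sentence embeddings), but the input/output contract, citation context in, sentiment label out, is the same.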
Related papers
- OpenNovelty: An LLM-powered Agentic System for Verifiable Scholarly Novelty Assessment [63.662126457336534]
OpenNovelty is an agentic system for transparent, evidence-based novelty analysis. It grounds all assessments in retrieved real papers, ensuring verifiable judgments. OpenNovelty aims to empower the research community with a scalable tool that promotes fair, consistent, and evidence-backed peer review.
arXiv Detail & Related papers (2026-01-04T15:48:51Z) - SemanticCite: Citation Verification with AI-Powered Full-Text Analysis and Evidence-Based Reasoning [0.0]
We introduce SemanticCite, an AI-powered system that verifies citation accuracy through full-text source analysis. Our approach combines multiple retrieval methods with a four-class classification system that captures nuanced claim-source relationships. We contribute a comprehensive dataset of over 1,000 citations with detailed alignments, functional classifications, semantic annotations, and bibliometric metadata.
arXiv Detail & Related papers (2025-11-20T10:05:21Z) - Trends and Challenges in Authorship Analysis: A Review of ML, DL, and LLM Approaches [1.8686807993563161]
Authorship analysis plays an important role in diverse domains, including forensic linguistics, academia, cybersecurity, and digital content authentication. This paper presents a systematic literature review on two key sub-tasks of authorship analysis: Author Attribution and Author Verification.
arXiv Detail & Related papers (2025-05-21T12:06:08Z) - In-depth Research Impact Summarization through Fine-Grained Temporal Citation Analysis [52.42612945266194]
We propose a new task: generating nuanced, expressive, and time-aware impact summaries. We show that these summaries capture both praise (confirmation citations) and critique (correction citations) through the evolution of fine-grained citation intents.
arXiv Detail & Related papers (2025-05-20T19:11:06Z) - CiteFusion: An Ensemble Framework for Citation Intent Classification Harnessing Dual-Model Binary Couples and SHAP Analyses [1.6695303704829412]
CiteFusion addresses the multi-class Citation Intent Classification task on two benchmark datasets: SciCite and ACL-ARC. Experiments show that CiteFusion achieves state-of-the-art performance, with Macro-F1 scores of 89.60% on SciCite and 76.24% on ACL-ARC. We release a web-based application that classifies citation intents leveraging the CiteFusion models developed on SciCite.
arXiv Detail & Related papers (2024-07-18T09:29:33Z) - A Literature Review of Literature Reviews in Pattern Analysis and Machine Intelligence [51.26815896167173]
We present a comprehensive tertiary analysis of PAMI reviews along three complementary dimensions. Our analyses reveal distinctive organizational patterns as well as persistent gaps in current review practices. Finally, our evaluation of state-of-the-art AI-generated reviews indicates encouraging advances in coherence and organization.
arXiv Detail & Related papers (2024-02-20T11:28:50Z) - Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review [52.359007622096684]
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, whereas editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z) - What's New? Summarizing Contributions in Scientific Literature [85.95906677964815]
We introduce a new task of disentangled paper summarization, which seeks to generate separate summaries for the paper contributions and the context of the work.
We extend the S2ORC corpus of academic articles by adding disentangled "contribution" and "context" reference labels.
We propose a comprehensive automatic evaluation protocol which reports the relevance, novelty, and disentanglement of generated outputs.
arXiv Detail & Related papers (2020-11-06T02:23:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.