LLMTaxo: Leveraging Large Language Models for Constructing Taxonomy of Factual Claims from Social Media
- URL: http://arxiv.org/abs/2504.12325v1
- Date: Fri, 11 Apr 2025 18:43:16 GMT
- Title: LLMTaxo: Leveraging Large Language Models for Constructing Taxonomy of Factual Claims from Social Media
- Authors: Haiqi Zhang, Zhengyuan Zhu, Zeyu Zhang, Chengkai Li
- Abstract summary: This paper introduces LLMTaxo, a novel framework for the automated construction of a taxonomy of factual claims from social media. We implement this framework with different models across three distinct datasets and introduce specially designed taxonomy evaluation metrics. Evaluations from both human evaluators and GPT-4 indicate that LLMTaxo effectively categorizes factual claims from social media.
- Score: 4.808928577524327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the vast expansion of content on social media platforms, analyzing and comprehending online discourse has become increasingly complex. This paper introduces LLMTaxo, a novel framework that leverages large language models for the automated construction of a taxonomy of factual claims from social media by generating topics at multiple levels of granularity. This approach helps stakeholders navigate the social media landscape more effectively. We implement the framework with different models across three distinct datasets and introduce specially designed taxonomy evaluation metrics for a comprehensive assessment. Evaluations from both human evaluators and GPT-4 indicate that LLMTaxo effectively categorizes factual claims from social media, and that certain models perform better on specific datasets.
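The abstract describes a concrete pipeline: an LLM generates topic labels at multiple levels of granularity, and each factual claim is then mapped into the resulting taxonomy. Below is a minimal illustrative sketch of such a pipeline, assuming the OpenAI v1.x Python client; the prompts, granularity levels, and helper names are hypothetical and do not reproduce the authors' implementation.

```python
"""Illustrative sketch of an LLMTaxo-style pipeline (not the authors' code).

Assumes the `openai` v1.x client and an OPENAI_API_KEY environment variable;
prompts, granularity levels, and helper names are hypothetical.
"""
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder; the paper compares several models


def ask(prompt: str) -> str:
    """Single-turn chat completion returning the raw text response."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


def generate_topics(claims: list[str], level: str, parent: str | None = None) -> list[str]:
    """Ask the LLM for topic labels at one granularity level.

    `level` is e.g. "high-level" or "fine-grained"; `parent` optionally
    restricts fine-grained topics to one high-level topic.
    """
    scope = f' under the topic "{parent}"' if parent else ""
    prompt = (
        "Here are factual claims from social media:\n"
        + "\n".join(f"- {c}" for c in claims)
        + f"\n\nList up to 5 {level} topics{scope} that cover these claims, one per line."
    )
    return [line.strip("- ").strip() for line in ask(prompt).splitlines() if line.strip()]


def assign(claim: str, topics: list[str]) -> str:
    """Map a single claim to the best-fitting topic label."""
    prompt = (
        f'Claim: "{claim}"\nCandidate topics: {", ".join(topics)}\n'
        "Answer with the single topic that best fits the claim."
    )
    return ask(prompt)


if __name__ == "__main__":
    claims = [  # toy examples, not from the paper's datasets
        "Vaccine X was approved without any clinical trials.",
        "City Y banned gas stoves in all new buildings.",
    ]
    coarse = generate_topics(claims, "high-level")  # top level of the taxonomy
    taxonomy = {t: generate_topics(claims, "fine-grained", parent=t) for t in coarse}
    labels = {c: assign(c, coarse) for c in claims}  # coarse-level assignment per claim
    print(taxonomy, labels, sep="\n")
```

The paper goes further than this sketch, evaluating the resulting taxonomies with purpose-built metrics scored by human evaluators and by GPT-4.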
Related papers
- Leveraging Large Language Models for Automated Definition Extraction with TaxoMatic: A Case Study on Media Bias [6.581006293385288]
TaxoMatic is a framework that automates definition extraction from academic literature. Evaluated on a dataset of 2,398 manually rated articles, Claude-3-sonnet achieved the best results in both relevance classification and definition extraction.
arXiv Detail & Related papers (2025-04-01T01:47:16Z)
- Graph Foundation Models for Recommendation: A Comprehensive Survey [55.70529188101446]
Graph neural networks (GNNs) model the relational structure of user-item interactions, while large language models (LLMs) are designed to process and comprehend natural language, making both approaches highly effective and widely adopted.
Recent research has focused on graph foundation models (GFMs).
GFMs integrate the strengths of GNNs and LLMs to model complex recommender system problems more efficiently by leveraging the graph-based structure of user-item relationships alongside textual understanding.
arXiv Detail & Related papers (2025-02-12T12:13:51Z)
- Bridging the Evaluation Gap: Leveraging Large Language Models for Topic Model Evaluation [0.0]
This study presents a framework for the automated evaluation of dynamically evolving topics in scientific literature using Large Language Models (LLMs). The proposed approach harnesses LLMs to measure key quality dimensions, such as coherence, repetitiveness, diversity, and topic-document alignment, without heavy reliance on expert annotators or narrow statistical metrics. An illustrative sketch of this kind of LLM-based scoring appears after this list.
arXiv Detail & Related papers (2025-02-11T08:23:56Z)
- Personalized Multimodal Large Language Models: A Survey [127.9521218125761]
Multimodal Large Language Models (MLLMs) have become increasingly important due to their state-of-the-art performance and ability to integrate multiple data modalities. This paper presents a comprehensive survey on personalized multimodal large language models, focusing on their architecture, training methods, and applications.
arXiv Detail & Related papers (2024-12-03T03:59:03Z)
- A Survey of Small Language Models [104.80308007044634]
Small Language Models (SLMs) have become increasingly important due to their efficiency and their ability to perform various language tasks with minimal computational resources.
We present a comprehensive survey on SLMs, focusing on their architectures, training techniques, and model compression techniques.
arXiv Detail & Related papers (2024-10-25T23:52:28Z)
- Towards Realistic Synthetic User-Generated Content: A Scaffolding Approach to Generating Online Discussions [17.96479268328824]
We investigate the feasibility of creating realistic, large-scale synthetic datasets of user-generated content.
We propose a multi-step generation process, predicated on the idea of creating compact representations of discussion threads.
arXiv Detail & Related papers (2024-08-15T18:43:50Z)
- Analysis of Multidomain Abstractive Summarization Using Salience Allocation [2.6880540371111445]
Season is a model designed to enhance summarization by leveraging salience allocation techniques.
This paper employs various evaluation metrics such as ROUGE, METEOR, BERTScore, and MoverScore to evaluate the performance of these models fine-tuned for generating abstractive summaries.
arXiv Detail & Related papers (2024-02-19T08:52:12Z)
- Contextualization Distillation from Large Language Model for Knowledge Graph Completion [51.126166442122546]
We introduce the Contextualization Distillation strategy, a plug-in-and-play approach compatible with both discriminative and generative KGC frameworks.
Our method begins by instructing large language models to transform compact, structural triplets into context-rich segments.
Comprehensive evaluations across diverse datasets and KGC techniques highlight the efficacy and adaptability of our approach.
arXiv Detail & Related papers (2024-01-28T08:56:49Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies, two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- ConvoSumm: Conversation Summarization Benchmark and Improved Abstractive Summarization with Argument Mining [61.82562838486632]
We crowdsource four new datasets covering diverse online conversation forms: news comments, discussion forums, community question answering forums, and email threads.
We benchmark state-of-the-art models on our datasets and analyze characteristics associated with the data.
arXiv Detail & Related papers (2021-06-01T22:17:13Z)
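As a companion to the pipeline sketch above, and echoing both LLMTaxo's GPT-4-based evaluation and the LLM-based topic scoring described in the "Bridging the Evaluation Gap" entry, here is a minimal illustrative sketch of LLM-as-judge scoring of a topic against its assigned documents. It again assumes the OpenAI v1.x Python client; the 1-5 scale, prompt wording, and function name are hypothetical and come from neither paper.

```python
"""Illustrative LLM-as-judge scoring of a topic (not taken from either paper).

Assumes the `openai` v1.x client and an OPENAI_API_KEY environment variable;
the 1-5 rating scale and prompt wording are hypothetical.
"""
import re

from openai import OpenAI

client = OpenAI()


def judge_topic(topic: str, documents: list[str], model: str = "gpt-4") -> dict[str, int]:
    """Ask the model to rate topic coherence and topic-document alignment (1-5)."""
    prompt = (
        f'Topic label: "{topic}"\n'
        "Documents assigned to this topic:\n"
        + "\n".join(f"- {d}" for d in documents)
        + "\n\nOn a 1-5 scale, rate (a) how coherent the topic label is and "
        "(b) how well the documents align with it. "
        "Answer exactly in the form: coherence=<n>, alignment=<n>"
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    text = resp.choices[0].message.content
    scores = dict(re.findall(r"(coherence|alignment)=(\d)", text))
    return {key: int(value) for key, value in scores.items()}


if __name__ == "__main__":
    print(judge_topic(
        "vaccine approval process",  # toy topic and document, for illustration only
        ["Vaccine X was approved without any clinical trials."],
    ))
```

Averaging such scores across topics yields a rough automated quality signal; LLMTaxo itself pairs GPT-4 judgments of this kind with human evaluators.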