Multi-Layer Ranking with Large Language Models for News Source Recommendation
- URL: http://arxiv.org/abs/2406.11745v1
- Date: Mon, 17 Jun 2024 17:02:34 GMT
- Title: Multi-Layer Ranking with Large Language Models for News Source Recommendation
- Authors: Wenjia Zhang, Lin Gui, Rob Procter, Yulan He
- Abstract summary: We build a novel dataset, called NewsQuote, consisting of 23,571 quote-speaker pairs sourced from a collection of news articles.
We formulate the recommendation task as the retrieval of experts based on their likelihood of being associated with a given query.
Our results show that employing an in-context learning-based LLM ranker and a multi-layer ranking-based filter significantly improves both the predictive quality and the behavioural quality of the recommender system.
- Score: 20.069181633869093
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To seek reliable information sources for news events, we introduce a novel task of expert recommendation, which aims to identify trustworthy sources based on their previously quoted statements. To achieve this, we build a novel dataset, called NewsQuote, consisting of 23,571 quote-speaker pairs sourced from a collection of news articles. We formulate the recommendation task as the retrieval of experts based on their likelihood of being associated with a given query. We also propose a multi-layer ranking framework employing Large Language Models to improve the recommendation performance. Our results show that employing an in-context learning-based LLM ranker and a multi-layer ranking-based filter significantly improves both the predictive quality and the behavioural quality of the recommender system.
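No reference implementation accompanies this listing, so the sketch below shows one way such a multi-layer pipeline could be wired together: a lexical first-stage retriever over each expert's quoted statements, an in-context LLM re-ranking layer, and a rank-consensus filter. Every function name, prompt, and threshold here is an illustrative assumption, not the authors' code.

```python
from typing import Callable

def retrieve_candidates(query: str, expert_quotes: dict[str, list[str]],
                        top_k: int = 20) -> list[str]:
    """Layer 1 (assumed): rank experts by naive term overlap between the
    query and their previously quoted statements."""
    q_terms = set(query.lower().split())
    def overlap(quotes: list[str]) -> int:
        return sum(len(q_terms & set(q.lower().split())) for q in quotes)
    return sorted(expert_quotes, key=lambda e: overlap(expert_quotes[e]),
                  reverse=True)[:top_k]

def llm_rerank(query: str, candidates: list[str],
               expert_quotes: dict[str, list[str]],
               llm: Callable[[str], str]) -> list[str]:
    """Layer 2 (assumed): in-context LLM ranking over candidate experts.
    Assumes each expert has at least one recorded quote."""
    profile = "\n".join(f"[{i}] {e}: {expert_quotes[e][0]}"
                        for i, e in enumerate(candidates))
    prompt = (f"Query: {query}\nExperts and a sample quote for each:\n"
              f"{profile}\nList the bracketed indices of the experts most "
              "likely to be quoted on this query, best first, comma-separated.")
    order = [int(tok.strip(" []")) for tok in llm(prompt).split(",")
             if tok.strip(" []").isdigit()]
    return [candidates[i] for i in order if 0 <= i < len(candidates)]

def consensus_filter(layer_rankings: list[list[str]], keep: int = 5) -> list[str]:
    """Layer 3 (assumed): keep only experts ranked in the top-`keep` of
    every layer, ordered by the final layer."""
    agreed = set.intersection(*(set(r[:keep]) for r in layer_rankings))
    return [e for e in layer_rankings[-1] if e in agreed]
```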
Related papers
- CherryRec: Enhancing News Recommendation Quality via LLM-driven Framework [4.4206696279087]
We propose CherryRec, a framework for news recommendation using Large Language Models (LLMs).
CherryRec ensures the quality of recommendations while accelerating the recommendation process.
We validate the effectiveness of the proposed framework by comparing it with state-of-the-art baseline methods on benchmark datasets.
arXiv Detail & Related papers (2024-06-18T03:33:38Z)
- Enhancing Recommendation Diversity by Re-ranking with Large Language Models [0.27624021966289597]
Large Language Models (LLMs) can be used for diversity re-ranking.
LLMs exhibit improved performance on many natural language processing and recommendation tasks; a minimal re-ranking sketch follows below.
arXiv Detail & Related papers (2024-01-21T14:33:52Z)
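To make the idea above concrete, here is a minimal sketch of LLM-driven diversity re-ranking; the prompt wording and the `llm` callable are assumptions rather than the paper's method.

```python
from typing import Callable

def diversity_rerank(items: list[str], llm: Callable[[str], str]) -> list[str]:
    """Ask an LLM to reorder a recommendation slate so that adjacent items
    cover different topics, without dropping any item."""
    listing = "\n".join(f"{i}: {title}" for i, title in enumerate(items))
    prompt = ("Reorder the items below so that consecutive items cover "
              "different topics. Reply with the indices only, comma-separated, "
              f"using each exactly once.\n{listing}")
    order = [int(tok) for tok in llm(prompt).split(",") if tok.strip().isdigit()]
    return [items[i] for i in order if 0 <= i < len(items)]
```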
- LlamaRec: Two-Stage Recommendation using Large Language Models for Ranking [10.671747198171136]
We propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec).
In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history.
LlamaRec consistently achieves superior performance across datasets in both recommendation quality and efficiency; the two-stage pattern is sketched below.
arXiv Detail & Related papers (2023-10-25T06:23:48Z)
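A rough sketch of that two-stage pattern: a trivial stand-in for the sequential retriever, then a single forward pass in which candidates are ranked by the logits the model assigns to their index letters (a verbalizer-style head). The prompt and the `next_token_logits` interface are assumptions, not the LlamaRec code.

```python
from typing import Callable

LABELS = "ABCDEFGHIJ"  # index letters for up to 10 candidates

def retrieve(history: list[str], catalog: list[str], k: int = 10) -> list[str]:
    """Stage 1 stand-in: a real system would use a small sequential
    recommender; this toy scores items by overlap with recent history."""
    recent = set(" ".join(history[-3:]).lower().split())
    def score(item: str) -> int:
        return len(recent & set(item.lower().split()))
    return sorted(catalog, key=score, reverse=True)[:k]

def llm_rank(history: list[str], candidates: list[str],
             next_token_logits: Callable[[str], dict[str, float]]) -> list[str]:
    """Stage 2: rank candidates by the next-token logit of each candidate's
    index letter, avoiding autoregressive generation."""
    listing = "\n".join(f"({LABELS[i]}) {c}" for i, c in enumerate(candidates))
    prompt = (f"User history: {'; '.join(history)}\nCandidates:\n{listing}\n"
              "The best next item is (")
    logits = next_token_logits(prompt)  # e.g. taken from a causal LM head
    order = sorted(range(len(candidates)),
                   key=lambda i: logits.get(LABELS[i], float("-inf")),
                   reverse=True)
    return [candidates[i] for i in order]
```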
- LLMRec: Benchmarking Large Language Models on Recommendation Task [54.48899723591296]
The application of Large Language Models (LLMs) in the recommendation domain has not been thoroughly investigated.
We benchmark several popular off-the-shelf LLMs on five recommendation tasks, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization.
The benchmark results indicate that LLMs display only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation; an illustrative prompt for one such task follows below.
arXiv Detail & Related papers (2023-08-23T16:32:54Z)
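As an illustration only (the benchmark's actual templates are not reproduced here), a zero-shot rating-prediction prompt might be assembled like this:

```python
def rating_prompt(history: list[tuple[str, int]], target: str) -> str:
    """Build a hypothetical zero-shot rating-prediction prompt."""
    past = "\n".join(f"- {title!r}: {stars}/5" for title, stars in history)
    return (f"A user gave these ratings:\n{past}\n"
            f"Predict the user's rating for {target!r} on a 1-5 scale. "
            "Answer with a single integer.")
```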
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- Zero-Shot Listwise Document Reranking with a Large Language Model [58.64141622176841]
We propose Listwise Reranker with a Large Language Model (LRL), which achieves strong reranking effectiveness without using any task-specific training data.
Experiments on three TREC web search datasets demonstrate that LRL not only outperforms zero-shot pointwise methods when reranking first-stage retrieval results, but can also act as a final-stage reranker; a minimal sketch of the listwise setup follows below.
arXiv Detail & Related papers (2023-05-03T14:45:34Z)
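In that listwise setup, the query and the numbered passages share one prompt and the model emits an ordering such as [2] > [1] > [3]. The exact prompt wording and output parsing below are assumptions.

```python
from typing import Callable

def listwise_rerank(query: str, passages: list[str],
                    llm: Callable[[str], str]) -> list[str]:
    """Rerank passages with a single listwise prompt."""
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (f"Query: {query}\nPassages:\n{numbered}\n"
              "Rank the passages by relevance to the query. Output only the "
              "identifiers in order, e.g. [2] > [1] > [3].")
    order = [int(tok.strip(" []")) - 1
             for tok in llm(prompt).split(">") if tok.strip(" []").isdigit()]
    return [passages[i] for i in order if 0 <= i < len(passages)]
```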
- Two-Stage Neural Contextual Bandits for Personalised News Recommendation [50.3750507789989]
Existing personalised news recommendation methods focus on exploiting user interests and ignore exploration in recommendation.
We build on contextual bandit recommendation strategies, which naturally address the exploitation-exploration trade-off.
We use deep learning representations for users and news, and generalise the neural upper confidence bound (UCB) policies to generalised additive UCB and bilinear UCB; a generic UCB scorer is sketched below.
arXiv Detail & Related papers (2022-06-26T12:07:56Z)
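To make the exploit-explore trade-off concrete, here is a generic LinUCB-style scorer: a point estimate of expected reward plus an uncertainty bonus. It is a standard simplification, not the paper's generalised additive or bilinear UCB variants.

```python
import numpy as np

class LinUCB:
    """Generic linear UCB scorer over item feature vectors."""

    def __init__(self, dim: int, alpha: float = 1.0):
        self.A = np.eye(dim)    # regularised feature covariance
        self.b = np.zeros(dim)  # accumulated reward-weighted features
        self.alpha = alpha      # exploration strength

    def score(self, x: np.ndarray) -> float:
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                       # exploit: reward estimate
        bonus = self.alpha * np.sqrt(x @ A_inv @ x)  # explore: uncertainty
        return float(theta @ x + bonus)

    def update(self, x: np.ndarray, reward: float) -> None:
        self.A += np.outer(x, x)
        self.b += reward * x
```

On each impression one would score every candidate article's feature vector, show the highest-scoring one, and call `update` with the observed click (1.0) or skip (0.0).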
- Quality-aware News Recommendation [92.67156911466397]
Existing news recommendation methods mainly aim to optimize news clicks while ignoring the quality of the news they recommend.
We propose a quality-aware news recommendation method named QualityRec that can effectively improve the quality of recommended news.
arXiv Detail & Related papers (2022-02-28T08:25:58Z)
- Context-Based Quotation Recommendation [60.93257124507105]
We propose a novel context-aware quote recommendation system.
It generates a ranked list of quotable paragraphs and spans of tokens from a given source document.
We conduct experiments on a collection of speech transcripts and associated news articles.
arXiv Detail & Related papers (2020-05-17T17:49:53Z)
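A toy sketch of the ranking step described in that entry, with plain term overlap standing in for the paper's learned quotability model:

```python
def rank_quotable(context: str, paragraphs: list[str], top_k: int = 3) -> list[str]:
    """Score each source paragraph against the news context by
    length-normalised term overlap and return the most quotable ones."""
    ctx = set(context.lower().split())
    def score(paragraph: str) -> float:
        terms = paragraph.lower().split()
        return len(ctx & set(terms)) / len(terms) ** 0.5 if terms else 0.0
    return sorted(paragraphs, key=score, reverse=True)[:top_k]
```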