On the Overlooked Significance of Underutilized Contextual Features in
Recent News Recommendation Models
- URL: http://arxiv.org/abs/2112.14370v1
- Date: Wed, 29 Dec 2021 02:47:56 GMT
- Authors: Sungmin Cho, Hongjun Lim, Keunchan Park, Sungjoo Yoo, Eunhyeok Park
- Abstract summary: We show that articles' contextual features, such as click-through rate, popularity, or freshness, have recently been either neglected or underutilized.
We design a purposefully simple contextual module that boosts previous news recommendation models by a large margin.
- Score: 14.40821643757877
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized news recommendation aims to provide attractive articles for
readers by predicting their likelihood of clicking on a certain article. To
accurately predict this probability, numerous studies have been proposed that
actively utilize content features of articles, such as words, categories, or
entities. However, we observed that the articles' contextual features, such as
CTR (click-through-rate), popularity, or freshness, were either neglected or
underutilized recently. To prove that this is the case, we conducted an
extensive comparison between recent deep-learning models and naive contextual
models that we devised and surprisingly discovered that the latter easily
outperforms the former. Furthermore, our analysis showed that the recent
tendency to apply overly sophisticated deep-learning operations to contextual
features was actually hindering the recommendation performance. From this
knowledge, we design a purposefully simple contextual module that can boost the
previous news recommendation models by a large margin.
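The paper does not reproduce the module here, but a minimal sketch of the idea of a naive contextual model can be illustrated as follows. The specific weights, the log-dampened popularity term, and the exponential freshness decay are illustrative assumptions, not the paper's actual parameterization.

```python
import math

def contextual_score(ctr, click_count, age_hours,
                     w_ctr=1.0, w_pop=0.5, w_fresh=0.5, half_life=24.0):
    """Naive contextual score combining CTR, popularity, and freshness.

    The weights and the freshness half-life are illustrative assumptions,
    not values from the paper.
    """
    popularity = math.log1p(click_count)          # dampen heavy-tailed click counts
    freshness = math.exp(-age_hours / half_life)  # decay with article age
    return w_ctr * ctr + w_pop * popularity + w_fresh * freshness

def rank_articles(articles):
    """Rank candidate articles by contextual score, highest first."""
    return sorted(
        articles,
        key=lambda a: contextual_score(a["ctr"], a["clicks"], a["age_hours"]),
        reverse=True,
    )

candidates = [
    {"id": "stale", "ctr": 0.02, "clicks": 500, "age_hours": 72.0},
    {"id": "fresh", "ctr": 0.05, "clicks": 800, "age_hours": 2.0},
]
print([a["id"] for a in rank_articles(candidates)])  # the fresher, higher-CTR article ranks first
```

Even a hand-weighted combination like this exposes the signal the paper argues is underused; the authors' point is that such features help most when the combining module stays simple.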
Related papers
- On Debiasing Text Embeddings Through Context Injection [0.0]
We conduct a review of 19 embedding models by quantifying their biases and how well they respond to context injection.
We show that higher performing models are more prone to capturing biases, but are also better at incorporating context.
In a retrieval task, we show that biases in embeddings can lead to undesirable outcomes.
arXiv Detail & Related papers (2024-10-14T18:11:53Z) - From Words to Worth: Newborn Article Impact Prediction with LLM [69.41680520058418]
This paper introduces a promising approach, leveraging the capabilities of fine-tuned LLMs to predict the future impact of newborn articles.
A comprehensive dataset has been constructed and released for fine-tuning the LLM, containing over 12,000 entries with corresponding titles, abstracts, and TNCSI_SP.
arXiv Detail & Related papers (2024-08-07T17:52:02Z) - AutoCast++: Enhancing World Event Prediction with Zero-shot Ranking-based Context Retrieval [9.357912396498142]
We introduce AutoCast++, a zero-shot ranking-based context retrieval system.
Our approach first re-ranks articles based on zero-shot question-passage relevance, honing in on semantically pertinent news.
We conduct both the relevance evaluation and article summarization without needing domain-specific training.
arXiv Detail & Related papers (2023-10-03T08:34:44Z) - Attentive Graph-based Text-aware Preference Modeling for Top-N
Recommendation [2.3991565023534083]
We propose a new model named Attentive Graph-based Text-aware Recommendation Model (AGTM)
In this work, we aim to further improve top-N recommendation by effectively modeling both item textual content and high-order connectivity in user-item graph.
arXiv Detail & Related papers (2023-05-22T12:32:06Z) - Fairness-guided Few-shot Prompting for Large Language Models [93.05624064699965]
In-context learning can suffer from high instability due to variations in training examples, example order, and prompt formats.
We introduce a metric to evaluate the predictive bias of a fixed prompt against labels or given attributes.
We propose a novel strategy based on greedy search to identify the near-optimal prompt for improving the performance of in-context learning.
arXiv Detail & Related papers (2023-03-23T12:28:25Z) - Entity Disambiguation with Entity Definitions [50.01142092276296]
Local models have recently attained astounding performances in Entity Disambiguation (ED)
Previous works limited their studies to using, as the textual representation of each candidate, only its Wikipedia title.
In this paper, we address this limitation and investigate to what extent more expressive textual representations can mitigate it.
We report a new state of the art on 2 out of 6 benchmarks we consider and strongly improve the generalization capability over unseen patterns.
arXiv Detail & Related papers (2022-10-11T17:46:28Z) - Aspect-driven User Preference and News Representation Learning for News
Recommendation [9.187076140490902]
News recommender systems usually learn topic-level representations of users and news for recommendation.
We propose a novel Aspect-driven News Recommender System (ANRS) built on aspect-level user preference and news representation learning.
arXiv Detail & Related papers (2021-10-12T07:38:54Z) - Why Do We Click: Visual Impression-aware News Recommendation [108.73539346064386]
This work is inspired by the fact that users make their click decisions mostly based on the visual impression they perceive when browsing news.
We propose to capture such visual impression information with visual-semantic modeling for news recommendation.
In addition, we inspect the impression from a global view and take into account structural information, such as the arrangement of different fields and the spatial positions of different words on the impression.
arXiv Detail & Related papers (2021-09-26T16:58:14Z) - Context-Based Quotation Recommendation [60.93257124507105]
We propose a novel context-aware quote recommendation system.
It generates a ranked list of quotable paragraphs and spans of tokens from a given source document.
We conduct experiments on a collection of speech transcripts and associated news articles.
arXiv Detail & Related papers (2020-05-17T17:49:53Z) - Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.