The Long Tail of Context: Does it Exist and Matter?
- URL: http://arxiv.org/abs/2210.01023v1
- Date: Mon, 3 Oct 2022 15:39:33 GMT
- Title: The Long Tail of Context: Does it Exist and Matter?
- Authors: Konstantin Bauman, Alexey Vasilev, Alexander Tuzhilin
- Abstract summary: Context has been an important topic in recommender systems over the past two decades.
Some recommender systems applications deal with much bigger and broader types of context.
In this paper, we study such ``context-rich'' applications, which deal with a large variety of different types of context.
- Score: 74.05842462244705
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Context has been an important topic in recommender systems over the past two
decades. A standard representational approach to context assumes that
contextual variables and their structures are known in an application. Most of
the prior CARS papers following the representational approach manually selected and
considered only a few crucial contextual variables in an application, such as
time, location, and company of a person. This prior work demonstrated
significant recommendation performance improvements when various CARS-based
methods have been deployed in numerous applications. However, some recommender
systems applications deal with much bigger and broader types of context, and
manually identifying and capturing a few contextual variables is not sufficient
in such cases. In this paper, we study such ``context-rich'' applications
dealing with a large variety of different types of contexts. We demonstrate
that supporting only a few most important contextual variables, although
useful, is not sufficient. In our study, we focus on the application that
recommends various banking products to commercial customers within the context
of dialogues initiated by customer service representatives. In this
application, we managed to identify over two hundred types of contextual
variables. Sorting those variables by their importance forms the Long Tail of
Context (LTC). In this paper, we empirically demonstrate that LTC matters and
using all these contextual variables from the Long Tail leads to significant
improvements in recommendation performance.
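The Long Tail of Context described above can be illustrated with a minimal sketch. This is not the paper's code; the variable names and importance scores below are hypothetical, and the head/tail split is a simple rank-and-cut for illustration only:

```python
# Illustrative sketch (not the authors' implementation): rank contextual
# variables by importance and split them into a small "head" of crucial
# variables and the remaining "Long Tail of Context" (LTC).

def long_tail_split(importances, head_fraction=0.3):
    """Sort contextual variables by importance (descending) and split
    them into a small head and the remaining long tail."""
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    head_size = max(1, int(len(ranked) * head_fraction))
    return ranked[:head_size], ranked[head_size:]

# Hypothetical importance scores for a handful of contextual variables.
scores = {
    "time_of_day": 0.30, "location": 0.25, "company": 0.20,
    "channel": 0.05, "weather": 0.04, "device": 0.03,
    "day_of_week": 0.02, "recent_dialogue_topic": 0.02,
    "account_age": 0.01, "branch_visits": 0.01,
}

head, tail = long_tail_split(scores)
tail_mass = sum(score for _, score in tail)
print([name for name, _ in head])  # the few "most important" variables
print(round(tail_mass, 2))         # collective importance of the tail
```

The point the paper makes empirically is that `tail_mass` — the collective weight of the many individually minor variables — is too large to discard, so recommendation quality improves when the tail is used alongside the head.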
Related papers
- Leveraging Long-Context Large Language Models for Multi-Document Understanding and Summarization in Enterprise Applications [1.1682259692399921]
Long-context Large Language Models (LLMs) can grasp extensive connections, provide cohesive summaries, and adapt to various industry domains.
Case studies show notable enhancements in both efficiency and accuracy.
arXiv Detail & Related papers (2024-09-27T05:29:31Z)
- Why does in-context learning fail sometimes? Evaluating in-context learning on open and closed questions [14.999106867218572]
We measure the performance of in-context learning as a function of task novelty and difficulty for open and closed questions.
We show that counter-intuitively, a context that is more aligned with the topic does not always help more than a less relevant context.
arXiv Detail & Related papers (2024-07-02T07:52:30Z)
- Is It Really Long Context if All You Need Is Retrieval? Towards Genuinely Difficult Long Context NLP [32.19010113355365]
We argue that conflating different tasks by their context length is unproductive.
We propose to unpack the taxonomy of long-context based on the properties that make them more difficult with longer contexts.
We conclude that the most difficult and interesting settings, whose necessary information is very long and highly diffused within the input, are severely under-explored.
arXiv Detail & Related papers (2024-06-29T11:09:47Z)
- Code-Switched Language Identification is Harder Than You Think [69.63439391717691]
Code switching is a common phenomenon in written and spoken communication.
We look at the application of building CS corpora.
We make the task more realistic by scaling it to more languages.
We reformulate the task as a sentence-level multi-label tagging problem to make it more tractable.
arXiv Detail & Related papers (2024-02-02T15:38:47Z)
- CRUD-RAG: A Comprehensive Chinese Benchmark for Retrieval-Augmented Generation of Large Language Models [49.16989035566899]
Retrieval-Augmented Generation (RAG) is a technique that enhances the capabilities of large language models (LLMs) by incorporating external knowledge sources.
This paper constructs a large-scale and more comprehensive benchmark, and evaluates all the components of RAG systems in various RAG application scenarios.
arXiv Detail & Related papers (2024-01-30T14:25:32Z)
- Knowledge-Augmented Large Language Models for Personalized Contextual Query Suggestion [16.563311988191636]
We construct an entity-centric knowledge store for each user based on their search and browsing activities on the web.
This knowledge store is light-weight, since it only produces user-specific aggregate projections of interests and knowledge onto public knowledge graphs.
arXiv Detail & Related papers (2023-11-10T01:18:47Z)
- How Can Context Help? Exploring Joint Retrieval of Passage and Personalized Context [39.334509280777425]
Motivated by the concept of personalized context-aware document-grounded conversational systems, we introduce the task of context-aware passage retrieval.
We propose a novel approach, Personalized Context-Aware Search (PCAS), that effectively harnesses contextual information during passage retrieval.
arXiv Detail & Related papers (2023-08-26T04:49:46Z)
- Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning [66.59455427102152]
We introduce Uncertainty Baselines: high-quality implementations of standard and state-of-the-art deep learning methods on a variety of tasks.
Each baseline is a self-contained experiment pipeline with easily reusable and extendable components.
We provide model checkpoints, experiment outputs as Python notebooks, and leaderboards for comparing results.
arXiv Detail & Related papers (2021-06-07T23:57:32Z)
- Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters [52.725200145600624]
We propose KnowExpert to bypass the retrieval process by injecting prior knowledge into the pre-trained language models with lightweight adapters.
Experimental results show that KnowExpert performs comparably with the retrieval-based baselines.
arXiv Detail & Related papers (2021-05-13T12:33:23Z)
- Larger-Context Tagging: When and Why Does It Work? [55.407651696813396]
We focus on investigating when and why the larger-context training, as a general strategy, can work.
We set up a testbed based on four tagging tasks and thirteen datasets.
arXiv Detail & Related papers (2021-04-09T15:35:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.