Sequential Modelling with Applications to Music Recommendation,
Fact-Checking, and Speed Reading
- URL: http://arxiv.org/abs/2109.06736v1
- Date: Sat, 11 Sep 2021 08:05:48 GMT
- Title: Sequential Modelling with Applications to Music Recommendation,
Fact-Checking, and Speed Reading
- Authors: Christian Hansen
- Abstract summary: This thesis makes methodological contributions and new investigations of sequential modelling for the specific application areas of systems that recommend music tracks to listeners and systems that process text semantics.
- Score: 4.434614653851092
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential modelling entails making sense of sequential data, which naturally
occurs in a wide array of domains. One example is systems that interact with
users, log user actions and behaviour, and make recommendations of items of
potential interest to users on the basis of their previous interactions. In
such cases, the sequential order of user interactions is often indicative of
what the user is interested in next. Similarly, for systems that automatically
infer the semantics of text, capturing the sequential order of words in a
sentence is essential, as even a slight re-ordering could significantly alter
its original meaning. This thesis makes methodological contributions and new
investigations of sequential modelling for the specific application areas of
systems that recommend music tracks to listeners and systems that process text
semantics in order to automatically fact-check claims, or "speed read" text for
efficient further classification. (Rest of abstract omitted due to arXiv
abstract limit)
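To make the setting concrete, the following is a minimal, purely illustrative sketch of next-item prediction from a logged listening history, using a generic GRU recommender rather than any model developed in the thesis; the track ids, vocabulary size, and dimensions are hypothetical.

```python
# Illustrative only: a generic GRU next-track recommender, not the thesis's models.
import torch
import torch.nn as nn

class NextTrackRecommender(nn.Module):
    def __init__(self, num_tracks: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(num_tracks, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, num_tracks)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, seq_len) of track ids, ordered oldest to newest
        h, _ = self.gru(self.embed(history))
        return self.out(h[:, -1])            # scores for every candidate next track

# Toy usage with made-up ids: score candidates for the next track after a short session.
model = NextTrackRecommender(num_tracks=1000)
session = torch.tensor([[12, 7, 431, 88]])   # hypothetical listening history
scores = model(session)
print(scores.topk(5).indices)                # top-5 next-track candidates
```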
Related papers
- CAST: Corpus-Aware Self-similarity Enhanced Topic modelling [16.562349140796115]
We introduce CAST: Corpus-Aware Self-similarity Enhanced Topic modelling, a novel topic modelling method.
We find self-similarity to be an effective metric to prevent functional words from acting as candidate topic words.
Our approach significantly enhances the coherence and diversity of generated topics, as well as the topic model's ability to handle noisy data.
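As a rough illustration of the self-similarity filter described above (the exact formulation in CAST may differ), one can score each candidate word by the mean pairwise cosine similarity of its contextualised embeddings across occurrences and drop low-scoring, function-like words; the threshold and toy data below are assumptions.

```python
# Hedged sketch: self-similarity as mean pairwise cosine similarity of a word's
# contextual embeddings; the threshold and its direction are illustrative assumptions.
import numpy as np

def self_similarity(contextual_embs: np.ndarray) -> float:
    """contextual_embs: (n_occurrences, dim) embeddings of one word in n contexts."""
    normed = contextual_embs / np.linalg.norm(contextual_embs, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(normed)
    return (sims.sum() - n) / (n * (n - 1))   # mean of the off-diagonal cosines

def filter_candidates(word_to_embs: dict, threshold: float = 0.4) -> list:
    """Keep only words whose embeddings stay similar across contexts."""
    return [w for w, e in word_to_embs.items() if self_similarity(e) >= threshold]

# Toy data: a content word with stable vectors vs. a drifting, function-like word.
rng = np.random.default_rng(0)
stable = rng.normal(size=(1, 8)) + 0.05 * rng.normal(size=(20, 8))
drifting = rng.normal(size=(20, 8))
print(filter_candidates({"guitar": stable, "the": drifting}))
```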
arXiv Detail & Related papers (2024-10-19T15:27:11Z)
- Does It Look Sequential? An Analysis of Datasets for Evaluation of Sequential Recommendations [0.8437187555622164]
Sequential recommender systems aim to use the order of interactions in a user's history to predict future interactions.
It is crucial to use datasets that exhibit a sequential structure to evaluate sequential recommenders properly.
We apply several methods based on the random shuffling of the user's sequence of interactions to assess the strength of sequential structure across 15 datasets.
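A minimal sketch of the shuffling idea, not the paper's actual protocol or metrics: score a trivial first-order next-item predictor on users' true histories and on randomly shuffled copies, and read a large accuracy drop as evidence of sequential structure.

```python
# Hedged sketch of a shuffle test for sequential structure; the predictor,
# metric, and data below are illustrative stand-ins, not the paper's setup.
import random
from collections import Counter, defaultdict

def hit_at_1(sequences):
    """Fit a 'most frequent successor' model and test it on each sequence's last item."""
    succ = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq[:-2], seq[1:-1]):      # hold out the final transition
            succ[a][b] += 1
    hits = sum(
        1 for seq in sequences
        if succ[seq[-2]] and succ[seq[-2]].most_common(1)[0][0] == seq[-1]
    )
    return hits / len(sequences)

random.seed(0)
users = [[i % 10, (i + 1) % 10, (i + 2) % 10, (i + 3) % 10] for i in range(200)]
shuffled = [random.sample(seq, len(seq)) for seq in users]
print("original :", hit_at_1(users))      # high when order carries signal
print("shuffled :", hit_at_1(shuffled))   # drops when the order is destroyed
```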
arXiv Detail & Related papers (2024-08-21T21:40:07Z)
- Sequential Recommendation on Temporal Proximities with Contrastive Learning and Self-Attention [3.7182810519704095]
Sequential recommender systems identify user preferences from their past interactions to predict subsequent items optimally.
Recent models often neglect similarities in users' actions that occur implicitly among users during analogous timeframes.
We propose a sequential recommendation model called TemProxRec, which includes contrastive learning and self-attention methods to consider temporal proximities.
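One way to picture the temporal-proximity component, as a hedged sketch rather than TemProxRec's actual architecture: treat interactions that occur close together in time as positive pairs in an InfoNCE-style contrastive loss over item embeddings. The window, temperature, and tensors below are assumptions.

```python
# Hedged sketch: contrastive loss over item embeddings where positives are
# interactions that happen close together in time; not TemProxRec's exact model.
import torch
import torch.nn.functional as F

def temporal_contrastive_loss(item_emb, timestamps, window=3600.0, tau=0.1):
    """item_emb: (n, d) embeddings of n interactions; timestamps: (n,) seconds."""
    z = F.normalize(item_emb, dim=1)
    sim = z @ z.T / tau                                    # (n, n) similarity logits
    diag = torch.eye(len(z), dtype=torch.bool)
    close = (timestamps[:, None] - timestamps[None, :]).abs() < window
    pos = close & ~diag                                    # temporally close pairs
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(diag, float("-inf")), dim=1, keepdim=True)
    return -(log_prob[pos]).mean()                         # pull close-in-time items together

# Toy usage with random embeddings and timestamps spanning a few hours.
emb = torch.randn(8, 16, requires_grad=True)
ts = torch.tensor([0., 100., 200., 5000., 5100., 9000., 9050., 20000.])
print(temporal_contrastive_loss(emb, ts))
```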
arXiv Detail & Related papers (2024-02-15T08:33:16Z)
- Walking Down the Memory Maze: Beyond Context Limit through Interactive Reading [63.93888816206071]
We introduce MemWalker, a method that processes the long context into a tree of summary nodes. Upon receiving a query, the model navigates this tree in search of relevant information, and responds once it gathers sufficient information.
We show that, beyond effective reading, MemWalker enhances explainability by highlighting the reasoning steps as it interactively reads the text and by pinpointing the text segments relevant to the query.
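A schematic of that navigation loop, with a toy word-overlap function standing in for the language-model call; the tree construction, prompts, and stopping rule here are assumptions rather than MemWalker's actual prompts.

```python
# Hedged sketch of query-time navigation over a tree of summary nodes.
# `ask_llm` is a hypothetical stand-in for a language-model call.
from dataclasses import dataclass, field

@dataclass
class Node:
    summary: str
    text: str = ""                       # leaf nodes hold the raw segment
    children: list = field(default_factory=list)

def ask_llm(prompt: str) -> int:
    """Placeholder: pick the child whose summary shares the most words with the query."""
    query, *summaries = prompt.split("\n")
    overlaps = [len(set(query.split()) & set(s.split())) for s in summaries]
    return overlaps.index(max(overlaps))

def navigate(root: Node, query: str) -> str:
    node = root
    while node.children:                 # descend until a leaf segment is reached
        prompt = "\n".join([query] + [c.summary for c in node.children])
        node = node.children[ask_llm(prompt)]
    return node.text                     # the segment used to answer the query

doc = Node("report", children=[
    Node("section about music recommendation", text="...tracks are ranked by..."),
    Node("section about fact checking claims", text="...claims are verified by..."),
])
print(navigate(doc, "how are claims fact checked"))
```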
arXiv Detail & Related papers (2023-10-08T06:18:14Z)
- Recommender Systems with Generative Retrieval [58.454606442670034]
We propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates.
To that end, we create semantically meaningful tuples of codewords to serve as a Semantic ID for each item.
We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets.
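The Semantic ID construction can be pictured as residual quantisation of an item embedding into a short tuple of codewords, which a sequence model then decodes token by token; the random codebooks and sizes below are illustrative stand-ins for the paper's trained quantiser.

```python
# Hedged sketch: turning item embeddings into codeword tuples ("Semantic IDs")
# via residual quantisation with random codebooks; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
DIM, LEVELS, CODES = 16, 3, 8
codebooks = rng.normal(size=(LEVELS, CODES, DIM))   # stand-in for trained codebooks

def semantic_id(item_emb: np.ndarray) -> tuple:
    """Quantise one embedding into a tuple of codeword indices, level by level."""
    residual, codes = item_emb.copy(), []
    for level in range(LEVELS):
        dists = np.linalg.norm(codebooks[level] - residual, axis=1)
        idx = int(dists.argmin())                   # nearest codeword at this level
        codes.append(idx)
        residual = residual - codebooks[level][idx] # quantise what is left over
    return tuple(codes)

# Similar items tend to share prefix codewords; a recommender can then generate
# such tuples token by token instead of retrieving from a flat item index.
item_a = rng.normal(size=DIM)
item_b = item_a + 0.01 * rng.normal(size=DIM)
print(semantic_id(item_a), semantic_id(item_b))
```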
arXiv Detail & Related papers (2023-05-08T21:48:17Z)
- Modeling Dynamic User Preference via Dictionary Learning for Sequential Recommendation [133.8758914874593]
Capturing the dynamics in user preference is crucial to better predict user future behaviors because user preferences often drift over time.
Many existing recommendation algorithms -- including both shallow and deep ones -- often model such dynamics independently.
This paper considers the problem of embedding a user's sequential behavior into the latent space of user preferences.
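A generic sparse-coding sketch of the dictionary-learning view, not the paper's algorithm: express each time step's behaviour vector as a sparse combination of shared preference atoms, so the combination weights trace how the user's preference drifts.

```python
# Hedged sketch: sparse-code a user's per-step behaviour vectors over a shared
# dictionary of preference "atoms"; the dictionary and data are random stand-ins.
import numpy as np

def sparse_codes(X, D, lam=0.1, steps=200):
    """ISTA sparse coding: min_C 0.5*||X - C D||^2 + lam*||C||_1; X: (t, d), D: (k, d)."""
    C = np.zeros((X.shape[0], D.shape[0]))
    step = 1.0 / np.linalg.norm(D @ D.T, 2)
    for _ in range(steps):
        grad = (C @ D - X) @ D.T
        C = C - step * grad
        C = np.sign(C) * np.maximum(np.abs(C) - lam * step, 0.0)   # soft threshold
    return C

rng = np.random.default_rng(0)
atoms = rng.normal(size=(5, 12))              # 5 shared preference atoms
behaviour = np.vstack([2 * atoms[0] + 0.01 * rng.normal(size=12),   # early: atom 0
                       2 * atoms[3] + 0.01 * rng.normal(size=12)])  # later: atom 3
codes = sparse_codes(behaviour, atoms)
print(np.round(codes, 2))   # weights drift from atom 0 toward atom 3 over time
```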
arXiv Detail & Related papers (2022-04-02T03:23:46Z)
- From Implicit to Explicit feedback: A deep neural network for modeling sequential behaviours and long-short term preferences of online users [3.464871689508835]
Implicit and explicit feedback play different roles in producing useful recommendations.
We start from the hypothesis that a user's preference at a given time is a combination of long-term and short-term interests.
arXiv Detail & Related papers (2021-07-26T16:59:20Z)
- Text Summarization with Latent Queries [60.468323530248945]
We introduce LaQSum, the first unified text summarization system that learns Latent Queries from documents for abstractive summarization with any existing query forms.
Under a deep generative framework, our system jointly optimizes a latent query model and a conditional language model, allowing users to plug-and-play queries of any type at test time.
Our system robustly outperforms strong comparison systems across summarization benchmarks with different query types, document settings, and target domains.
arXiv Detail & Related papers (2021-05-31T21:14:58Z)
- Dynamic Memory based Attention Network for Sequential Recommendation [79.5901228623551]
We propose a novel long sequential recommendation model called Dynamic Memory-based Attention Network (DMAN).
It segments the overall long behavior sequence into a series of sub-sequences, then trains the model and maintains a set of memory blocks to preserve long-term interests of users.
Based on the dynamic memory, the user's short-term and long-term interests can be explicitly extracted and combined for efficient joint recommendation.
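A simplified sketch of the segment-and-memorise idea; the fixed window, mean-pooled memory update, and scoring rule are assumptions standing in for DMAN's learned attention.

```python
# Hedged sketch: long-sequence recommendation with fixed-size memory blocks;
# the mean-pooling "memory update" is a stand-in for DMAN's learned attention.
import numpy as np

def recommend(history_emb, item_emb, window=4, n_memory=2):
    """history_emb: (t, d) embeddings of one user's long history, oldest first."""
    segments = [history_emb[i:i + window] for i in range(0, len(history_emb), window)]
    memory = np.zeros((n_memory, history_emb.shape[1]))
    for seg in segments[:-1]:                       # fold older segments into memory
        memory = np.roll(memory, 1, axis=0)
        memory[0] = seg.mean(axis=0)                # newest memory block
    short_term = segments[-1].mean(axis=0)          # current interests
    long_term = memory.mean(axis=0)                 # preserved older interests
    user_vec = 0.5 * short_term + 0.5 * long_term   # joint representation
    return (item_emb @ user_vec).argsort()[::-1]    # ranked candidate items

rng = np.random.default_rng(0)
history = rng.normal(size=(19, 8))                  # a long behaviour sequence
catalogue = rng.normal(size=(50, 8))
print(recommend(history, catalogue)[:5])            # top-5 item indices
```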
arXiv Detail & Related papers (2021-02-18T11:08:54Z)
- Sparse-Interest Network for Sequential Recommendation [78.83064567614656]
We propose a novel Sparse Interest NEtwork (SINE) for sequential recommendation.
Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool.
SINE can achieve substantial improvement over state-of-the-art methods.
arXiv Detail & Related papers (2021-02-18T11:03:48Z)
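The sparse-interest idea in the last entry can be sketched as activating only the top-k concepts from a large shared concept pool for each user; SINE's learned routing and aggregation differ from this simplification.

```python
# Hedged sketch of sparse concept activation: each user attends to a large shared
# concept pool but keeps only the top-k concepts; SINE's learned routing differs.
import numpy as np

def sparse_interests(user_history_emb, concept_pool, k=3):
    """user_history_emb: (t, d) item embeddings; concept_pool: (n_concepts, d)."""
    user_vec = user_history_emb.mean(axis=0)
    scores = concept_pool @ user_vec                   # affinity to every concept
    top = np.argsort(scores)[-k:]                      # keep a sparse set of concepts
    weights = np.exp(scores[top]) / np.exp(scores[top]).sum()
    return top, weights @ concept_pool[top]            # concept ids + interest vector

rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 16))                      # large shared concept pool
history = rng.normal(size=(12, 16))                    # one user's item embeddings
concept_ids, interest_vec = sparse_interests(history, pool)
print(concept_ids, interest_vec.shape)
```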