Learning Similarity Preserving Binary Codes for Recommender Systems
- URL: http://arxiv.org/abs/2204.08569v1
- Date: Mon, 18 Apr 2022 21:33:59 GMT
- Title: Learning Similarity Preserving Binary Codes for Recommender Systems
- Authors: Yang Shi and Young-joo Chung
- Abstract summary: We study an unexplored module combination for hashing-based recommender systems, namely the Compact Cross-Similarity Recommender (CCSR).
Inspired by cross-modal retrieval, CCSR utilizes Maximum a Posteriori similarity instead of matrix factorization and rating reconstruction to model interactions between users and items.
On the MovieLens1M dataset, the absolute performance improvements are up to 15.69% in NDCG and 4.29% in Recall.
- Score: 5.799838997511804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hashing-based Recommender Systems (RSs) are widely studied to provide
scalable services. Existing methods combine three modules to achieve
efficiency: feature extraction, interaction modeling, and binarization. In
this paper, we study an unexplored module combination for hashing-based
recommender systems, namely the Compact Cross-Similarity Recommender
(CCSR). Inspired by cross-modal retrieval, CCSR utilizes Maximum a Posteriori
similarity instead of matrix factorization and rating reconstruction to model
interactions between users and items. We conducted experiments on the
MovieLens1M, Amazon product review, and Ichiba purchase datasets and confirmed
that CCSR outperformed existing matrix factorization-based methods. On the
MovieLens1M dataset, the absolute performance improvements are up to 15.69% in
NDCG and 4.29% in Recall. In addition, we extensively studied three
binarization modules: $sign$, scaled $tanh$, and sign-scaled $tanh$. The
results demonstrate that although the differentiable scaled $tanh$ is popular
in recent discrete feature learning literature, a large performance drop
occurs when its outputs are forced to be binary.
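Since the comparison between binarization modules is central to the paper, here is a minimal sketch of the three variants named above (the scale factor alpha and its value are illustrative assumptions; the paper may schedule it differently):

```python
import torch

def binarize_sign(x: torch.Tensor) -> torch.Tensor:
    """$sign$: hard binarization to {-1, +1}; its gradient is zero
    almost everywhere, so it cannot be trained through directly."""
    return torch.sign(x)

def binarize_scaled_tanh(x: torch.Tensor, alpha: float = 10.0) -> torch.Tensor:
    """Scaled tanh: a differentiable relaxation. As alpha grows,
    tanh(alpha * x) approaches sign(x) but stays continuous in (-1, 1)."""
    return torch.tanh(alpha * x)

def binarize_sign_scaled_tanh(x: torch.Tensor, alpha: float = 10.0) -> torch.Tensor:
    """Sign-scaled tanh: snap the soft scaled-tanh codes to exact
    binary values."""
    return torch.sign(torch.tanh(alpha * x))
```

The last function makes the reported failure mode concrete: codes learned as soft values in (-1, 1) are only snapped to {-1, +1} at the end, which is where the performance drop described in the abstract appears.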
Related papers
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Models (LLMs) have the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLM to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- DimeRec: A Unified Framework for Enhanced Sequential Recommendation via Generative Diffusion Models [39.49215596285211]
Sequential Recommendation (SR) plays a pivotal role in recommender systems by tailoring recommendations to user preferences based on their non-stationary historical interactions.
We propose a novel framework called DimeRec that combines a guidance extraction module (GEM) and a generative diffusion aggregation module (DAM).
Our numerical experiments demonstrate that DimeRec significantly outperforms established baseline methods across three publicly available datasets.
arXiv Detail & Related papers (2024-08-22T06:42:09Z)
- Monte Carlo Tree Search Boosts Reasoning via Iterative Preference Learning [55.96599486604344]
We introduce an approach aimed at enhancing the reasoning capabilities of Large Language Models (LLMs) through an iterative preference learning process.
We use Monte Carlo Tree Search (MCTS) to iteratively collect preference data, utilizing its look-ahead ability to break down instance-level rewards into more granular step-level signals.
The proposed algorithm employs Direct Preference Optimization (DPO) to update the LLM policy using this newly generated step-level preference data.
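The DPO update described here follows the standard direct-preference objective; a minimal sketch of that loss, assuming per-sequence log-probabilities are already computed (the beta value is illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective: increase the policy's margin for the
    preferred response over the rejected one, measured relative to a
    frozen reference model."""
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()
```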
arXiv Detail & Related papers (2024-05-01T11:10:24Z)
- Continual Referring Expression Comprehension via Dual Modular Memorization [133.46886428655426]
Referring Expression Comprehension (REC) aims to localize the image region of an object described by a natural-language expression.
Existing REC algorithms assume that all training data are available upfront, which limits their practicality in real-world scenarios.
In this paper, we propose Continual Referring Expression Comprehension (CREC), a new setting for REC in which a model learns on a stream of incoming tasks.
To continuously improve the model on sequential tasks without forgetting previously learned knowledge and without repeatedly re-training from scratch, we propose an effective baseline method named Dual Modular Memorization.
arXiv Detail & Related papers (2023-11-25T02:58:51Z)
- Binarized Spectral Compressive Imaging [59.18636040850608]
Existing deep learning models for hyperspectral image (HSI) reconstruction achieve good performance but require powerful hardware with enormous memory and computational resources.
We propose a novel method, the Binarized Spectral-Redistribution Network (BiSRNet).
BiSRNet is derived by using the proposed techniques to binarize the base model.
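Binarized networks of this kind generally rely on a straight-through estimator (STE) to train through the hard sign; a generic sketch of that trick, not BiSRNet's actual layers:

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, hard-tanh gradient in the backward
    pass: the usual straight-through estimator for binarized networks."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Pass gradients only where |x| <= 1; block them elsewhere.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)

# Usage: binary_activations = BinarizeSTE.apply(real_valued_activations)
```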
arXiv Detail & Related papers (2023-05-17T15:36:08Z)
- UniASM: Binary Code Similarity Detection without Fine-tuning [0.8271859911016718]
We propose UniASM, a novel transformer-based binary-code embedding model that learns representations of binary functions.
In the real-world task of known-vulnerability search, UniASM outperforms all current baselines.
arXiv Detail & Related papers (2022-10-28T14:04:57Z)
- Modality-Aware Triplet Hard Mining for Zero-shot Sketch-Based Image Retrieval [51.42470171051007]
This paper tackles the Zero-Shot Sketch-Based Image Retrieval (ZS-SBIR) problem from the viewpoint of cross-modality metric learning.
By combining two fundamental learning approaches in deep metric learning (DML), i.e., classification training and pairwise training, we set up a strong baseline for ZS-SBIR.
We show that Modality-Aware Triplet Hard Mining (MATHM) enhances the baseline with three types of pairwise learning.
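MATHM's modality-aware variant is not spelled out here, but the underlying batch-hard triplet mining is standard; a minimal sketch under that generic formulation (the margin value is an assumption):

```python
import torch

def batch_hard_triplet_loss(emb: torch.Tensor, labels: torch.Tensor,
                            margin: float = 0.2) -> torch.Tensor:
    """Batch-hard mining: for each anchor, pick the farthest positive
    and the closest negative within the batch, then apply a margin."""
    dist = torch.cdist(emb, emb)                       # pairwise L2 distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=emb.device)
    hardest_pos = dist.masked_fill(~same | eye, float('-inf')).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float('inf')).min(dim=1).values
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0).mean()
```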
arXiv Detail & Related papers (2021-12-15T08:36:44Z)
- Revisiting SVD to generate powerful Node Embeddings for Recommendation Systems [3.388509725285237]
We revisit the Singular Value Decomposition (SVD) of the adjacency matrix for generating user and item embeddings.
We use a two-layer neural network on top of these embeddings to learn relevance between user-item pairs.
Inspired by the success of higher-order learning in graph representation learning (GRL), we propose an extension of this method that includes two-hop neighbors in the SVD.
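The recipe in this summary is concrete enough to sketch; a minimal version with truncated SVD plus a two-layer scorer (the rank and layer sizes are illustrative assumptions):

```python
import numpy as np
import torch.nn as nn
from scipy.sparse.linalg import svds

def svd_embeddings(adjacency, rank: int = 64):
    """Truncated SVD of the user-item adjacency matrix; users get
    U * sqrt(S) and items get V * sqrt(S) as their embeddings."""
    U, S, Vt = svds(adjacency, k=rank)
    root_s = np.sqrt(S)
    return U * root_s, Vt.T * root_s

# A two-layer scorer over concatenated user/item embeddings.
relevance_mlp = nn.Sequential(
    nn.Linear(2 * 64, 128),
    nn.ReLU(),
    nn.Linear(128, 1),
)
```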
arXiv Detail & Related papers (2021-10-05T20:41:21Z)
- SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval [11.38022203865326]
The SPLADE model provides highly sparse representations and competitive results with respect to state-of-the-art dense and sparse approaches.
We modify the pooling mechanism, benchmark a model solely based on document expansion, and introduce models trained with distillation.
Overall, SPLADE is considerably improved, with more than 9% gains in NDCG@10 on TREC DL 2019, leading to state-of-the-art results on the BEIR benchmark.
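The SPLADE representation itself is compact; a minimal sketch of the max-pooling variant mentioned above (tensor shapes and masking details are simplified assumptions):

```python
import torch

def splade_max_pooling(mlm_logits: torch.Tensor,
                       attention_mask: torch.Tensor) -> torch.Tensor:
    """Sparse lexical representation: log-saturated ReLU of the MLM
    logits, max-pooled over token positions.
    mlm_logits: (batch, seq_len, vocab); attention_mask: (batch, seq_len)."""
    weights = torch.log1p(torch.relu(mlm_logits))
    weights = weights * attention_mask.unsqueeze(-1)  # zero out padding
    return weights.max(dim=1).values                  # (batch, vocab)
```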
arXiv Detail & Related papers (2021-09-21T10:43:42Z)
- Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We achieve new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z)
- Few-shot Learning with LSSVM Base Learner and Transductive Modules [20.323443723115275]
We introduce a multi-class least-squares support vector machine as our base learner; it achieves better generalization than existing base learners with less computational overhead.
We also propose two simple and effective transductive modules which modify the support set using the query samples.
Our model, denoted as FSLSTM, achieves state-of-the-art performance on miniImageNet and CIFAR-FS few-shot learning benchmarks.
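A multi-class least-squares SVM has a closed-form solution, which is what makes it cheap as a base learner; a minimal sketch with the bias term omitted (the kernel choice and regularization strength are illustrative assumptions):

```python
import numpy as np

def lssvm_scores(K_ss: np.ndarray, y_onehot: np.ndarray,
                 K_qs: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Multi-class least-squares SVM: one linear solve against the
    support-set kernel matrix replaces the usual SVM quadratic program.
    K_ss: (n_support, n_support) support kernel matrix
    y_onehot: (n_support, n_classes) one-hot support labels
    K_qs: (n_query, n_support) query-to-support kernel values"""
    alpha = np.linalg.solve(K_ss + lam * np.eye(len(K_ss)), y_onehot)
    return K_qs @ alpha  # class scores for each query sample
```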
arXiv Detail & Related papers (2020-09-12T13:16:55Z)