Learning to Collide: Recommendation System Model Compression with
Learned Hash Functions
- URL: http://arxiv.org/abs/2203.15837v1
- Date: Mon, 28 Mar 2022 06:07:30 GMT
- Title: Learning to Collide: Recommendation System Model Compression with
Learned Hash Functions
- Authors: Benjamin Ghaemmaghami, Mustafa Ozdal, Rakesh Komuravelli, Dmitriy
Korchev, Dheevatsa Mudigere, Krishnakumar Nair, Maxim Naumov
- Abstract summary: A key characteristic of deep recommendation models is the immense memory requirements of their embedding tables.
A common technique to reduce model size is to hash all of the categorical variable identifiers (ids) into a smaller space.
This hashing reduces the number of unique representations that must be stored in the embedding table, thus decreasing its size.
We introduce an alternative approach, Learned Hash Functions, which instead learns a new mapping function that encourages collisions between semantically similar ids.
- Score: 4.6994057182972595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A key characteristic of deep recommendation models is the immense memory
requirements of their embedding tables. These embedding tables can often reach
hundreds of gigabytes which increases hardware requirements and training cost.
A common technique to reduce model size is to hash all of the categorical
variable identifiers (ids) into a smaller space. This hashing reduces the
number of unique representations that must be stored in the embedding table,
thus decreasing its size. However, this approach introduces collisions between
semantically dissimilar ids that degrade model quality. We introduce an
alternative approach, Learned Hash Functions, which instead learns a new
mapping function that encourages collisions between semantically similar ids.
We derive this learned mapping from historical data and embedding access
patterns. We experiment with this technique on a production model and find that
a mapping informed by the combination of access frequency and a learned low
dimension embedding is the most effective. We demonstrate a small improvement
relative to the hashing trick and other collision related compression
techniques. This is ongoing work that explores the impact of categorical id
collisions on recommendation model quality and how those collisions may be
controlled to improve model performance.
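To make the contrast concrete, here is a minimal Python sketch of both mappings. It is an illustration, not the paper's implementation: the table sizes, the hot/cold split, and the use of k-means over the learned low-dimensional embedding are assumptions standing in for whatever frequency- and embedding-informed construction the authors actually derive.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical sizes for illustration; the paper does not publish these.
NUM_RAW_IDS = 1_000_000   # original categorical id space
NUM_BUCKETS = 50_000      # rows in the compressed embedding table

def hashing_trick(raw_id: int) -> int:
    """Baseline: arbitrary modulo hash. Collisions are random, so
    semantically dissimilar ids can be forced to share an embedding."""
    return raw_id % NUM_BUCKETS

def build_learned_mapping(access_counts: np.ndarray,
                          low_dim_emb: np.ndarray,
                          num_hot: int = 10_000) -> np.ndarray:
    """Learned hash function (sketch): derive an id -> bucket table
    offline so that ids that collide are semantically similar.
    (a) The most frequently accessed ids get private, collision-free rows.
    (b) The remaining ids are clustered by a learned low-dimensional
        embedding; ids in the same cluster share one row."""
    mapping = np.empty(NUM_RAW_IDS, dtype=np.int64)
    hot = np.argsort(-access_counts)[:num_hot]
    mapping[hot] = np.arange(num_hot)                  # collision-free rows
    cold = np.setdiff1d(np.arange(NUM_RAW_IDS), hot)
    clusters = KMeans(n_clusters=NUM_BUCKETS - num_hot,
                      n_init=4).fit_predict(low_dim_emb[cold])
    mapping[cold] = num_hot + clusters                 # similar ids collide
    return mapping

# At lookup time the learned table simply replaces the hash:
#   row = mapping[raw_id]          # learned mapping
#   row = hashing_trick(raw_id)    # baseline
```

Both mappings compress the table to NUM_BUCKETS rows; the only difference is which ids are forced to collide, which is exactly the quantity the paper argues should be controlled.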
Related papers
- FastFill: Efficient Compatible Model Update [40.27741553705222]
FastFill is a compatible model update process using feature alignment and policy based partial backfilling.
We show that previous backfilling strategies suffer from decreased performance and demonstrate the importance of both the training objective and the ordering in online partial backfilling.
arXiv Detail & Related papers (2023-03-08T18:03:51Z)
- Prototype-Based Layered Federated Cross-Modal Hashing [14.844848099134648]
In this paper, we propose a novel method called prototype-based layered federated cross-modal hashing.
Specifically, the prototype is introduced to learn the similarity between instances and classes on the server.
To realize personalized federated learning, a hypernetwork is deployed on the server to dynamically update the weights of different layers of the local model.
arXiv Detail & Related papers (2022-10-27T15:11:12Z)
- Efficient model compression with Random Operation Access Specific Tile (ROAST) hashing [35.67591281350068]
This paper proposes a model-agnostic, cache-friendly model compression approach: Random Operation Access Specific Tile (ROAST) hashing.
With ROAST, we present the first compressed BERT, which is $100\times$-$1000\times$ smaller but suffers no quality degradation.
These compression levels on universal architectures like transformers are promising for the future of SOTA model deployment on resource-constrained devices like mobile and edge devices; a rough sketch of the tile-hashing idea appears after this entry.
arXiv Detail & Related papers (2022-07-21T18:31:17Z)
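For intuition only, the sketch below illustrates the tile idea in Python; TILE, the pool size, and the function names are invented here, and ROAST's actual indexing scheme may differ. Instead of hashing every scalar weight to an independent location in one shared parameter pool, whole contiguous tiles are hashed, so each lookup touches a single cache-friendly region.

```python
import numpy as np

TILE = 64                                          # tile size (assumed)
POOL = np.random.randn(2**20).astype(np.float32)   # one shared weight pool

def fetch_tile(op_id: int, tile_idx: int) -> np.ndarray:
    # Hash the (operation, tile) pair to one offset and read a contiguous
    # block, rather than hashing every scalar weight independently.
    offset = hash((op_id, tile_idx)) % (POOL.size - TILE)
    return POOL[offset:offset + TILE]

def materialize(op_id: int, shape: tuple) -> np.ndarray:
    # Assemble an operation's full weight tensor from hashed tiles.
    n = int(np.prod(shape))
    tiles = [fetch_tile(op_id, t) for t in range((n + TILE - 1) // TILE)]
    return np.concatenate(tiles)[:n].reshape(shape)
```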
- PRANC: Pseudo RAndom Networks for Compacting deep models [22.793523211040682]
PRANC enables significant compaction of a deep model.
In this study, we employ PRANC to condense image classification models and compress images by compacting their associated implicit neural networks.
arXiv Detail & Related papers (2022-06-16T22:03:35Z)
- A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model under a limited memory budget.
We show that when the model size is counted into the total budget and methods are compared at aligned memory sizes, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
arXiv Detail & Related papers (2022-05-26T08:24:01Z)
- One Loss for All: Deep Hashing with a Single Cosine Similarity based Learning Objective [86.48094395282546]
A deep hashing model typically has two main learning objectives: to make the learned binary hash codes discriminative and to minimize a quantization error.
We propose a novel deep hashing model with only a single learning objective.
Our model is highly effective, outperforming state-of-the-art multi-loss hashing models on three large-scale instance retrieval benchmarks; a minimal sketch of a single cosine objective appears after this entry.
arXiv Detail & Related papers (2021-09-29T14:27:51Z)
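As a hedged sketch of what a single cosine-similarity objective can look like (PyTorch; the target-code construction and loss form here are simplified assumptions, not necessarily the paper's exact formulation): each class gets a fixed binary target code, and training only pulls the network's continuous code toward its class target.

```python
import torch
import torch.nn.functional as F

NUM_CLASSES, CODE_BITS = 100, 64            # assumed sizes
# Fixed random {-1, +1} target code per class (simplified construction).
TARGETS = torch.sign(torch.randn(NUM_CLASSES, CODE_BITS))

def single_cosine_loss(codes: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # One objective: maximize cosine similarity between the continuous
    # code and its class target. Because cosine similarity ignores
    # magnitude, binarizing with sign() at test time loses little, which
    # is how quantization error can be absorbed into the single loss.
    target = TARGETS[labels]                # (batch, CODE_BITS)
    return (1.0 - F.cosine_similarity(codes, target, dim=1)).mean()
```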
- Deep Self-Adaptive Hashing for Image Retrieval [16.768754022585057]
We propose a Deep Self-Adaptive Hashing (DSAH) model to adaptively capture the semantic information with two special designs.
First, we construct a neighborhood-based similarity matrix, and then refine this initial similarity matrix with a novel update strategy.
Secondly, we measure the priorities of data pairs with PIC and assign adaptive weights to them, which relies on the assumption that more dissimilar data pairs contain more discriminative information for hash learning.
arXiv Detail & Related papers (2021-08-16T13:53:20Z)
- Compatibility-aware Heterogeneous Visual Search [93.90831195353333]
Existing systems use the same embedding model to compute representations (embeddings) for the query and gallery images.
We address two forms of compatibility: one enforced by modifying the parameters of each model that computes the embeddings, the other by modifying the architectures that compute the embeddings.
Compared to ordinary (homogeneous) visual search using the largest embedding model (paragon), CMP-NAS achieves 80-fold and 23-fold cost reductions.
arXiv Detail & Related papers (2021-05-13T02:30:50Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive Similarity Mining and Consistency Learning (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings up to 3.3 mAP and 1.6 mAP improvements on the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2020-10-02T13:59:05Z)
- Generative Semantic Hashing Enhanced via Boltzmann Machines [61.688380278649056]
Existing generative-hashing methods mostly assume a factorized form for the posterior distribution.
We propose to employ the distribution of a Boltzmann machine as the variational posterior.
We show that by effectively modeling correlations among different bits within a hash code, our model can achieve significant performance gains.
arXiv Detail & Related papers (2020-06-16T01:23:39Z)