AutoShard: Automated Embedding Table Sharding for Recommender Systems
- URL: http://arxiv.org/abs/2208.06399v1
- Date: Fri, 12 Aug 2022 17:48:01 GMT
- Title: AutoShard: Automated Embedding Table Sharding for Recommender Systems
- Authors: Daochen Zha, Louis Feng, Bhargav Bhushanam, Dhruv Choudhary, Jade Nie,
Yuandong Tian, Jay Chae, Yinbin Ma, Arun Kejariwal, Xia Hu
- Abstract summary: We introduce our novel practice in Meta, namely AutoShard, which uses a neural cost model to directly predict the multi-table costs.
AutoShard can efficiently shard hundreds of tables in seconds.
Our algorithms have been deployed in Meta's production environment.
- Score: 54.82606459574231
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Embedding learning is an important technique in deep recommendation models to
map categorical features to dense vectors. However, the embedding tables often
demand an extremely large number of parameters, which become the storage and
efficiency bottlenecks. Distributed training solutions have been adopted to
partition the embedding tables across multiple devices. However, careless
partitioning can easily lead to load imbalances across devices. This is a
significant design challenge of distributed systems known as embedding table
sharding, i.e., how we should partition the embedding tables to balance the
costs across devices, which is a non-trivial task because 1) it is hard to
efficiently and precisely measure the cost, and 2) the partition problem is
known to be NP-hard. In this work, we introduce our novel practice in Meta,
namely AutoShard, which uses a neural cost model to directly predict the
multi-table costs and leverages deep reinforcement learning to solve the
partition problem. Experimental results on an open-sourced large-scale
synthetic dataset and Meta's production dataset demonstrate the superiority of
AutoShard over the heuristics. Moreover, the learned policy of AutoShard can
transfer to sharding tasks with various numbers of tables and different ratios
of the unseen tables without any fine-tuning. Furthermore, AutoShard can
efficiently shard hundreds of tables in seconds. The effectiveness,
transferability, and efficiency of AutoShard make it desirable for production
use. Our algorithms have been deployed in Meta's production environment. A
prototype is available at https://github.com/daochenzha/autoshard
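The abstract frames sharding as a cost-balancing partition problem: assign each embedding table to a device so that per-device costs are as even as possible. As a minimal illustration, here is a sketch of the kind of greedy baseline heuristic AutoShard is compared against (longest-processing-time-first); the function name and the per-table scalar cost estimates are hypothetical, and AutoShard itself instead predicts multi-table costs with a learned neural model and searches placements with reinforcement learning.

```python
import heapq

def greedy_shard(table_costs, num_devices):
    """Greedy cost-balanced partitioning of embedding tables.

    table_costs: list of estimated costs, one per table (assumed given).
    Returns a list of table-index lists, one per device.
    """
    # Min-heap of (current load, device id): always place the next
    # table on the currently least-loaded device.
    heap = [(0.0, d) for d in range(num_devices)]
    heapq.heapify(heap)
    placement = [[] for _ in range(num_devices)]
    # Place the most expensive tables first to reduce final imbalance.
    for idx in sorted(range(len(table_costs)), key=lambda i: -table_costs[i]):
        load, dev = heapq.heappop(heap)
        placement[dev].append(idx)
        heapq.heappush(heap, (load + table_costs[idx], dev))
    return placement

# Example: shard 6 tables across 2 devices; both devices end at load 7.0.
costs = [5.0, 3.0, 2.0, 2.0, 1.0, 1.0]
print(greedy_shard(costs, 2))
```

The weakness of such heuristics, as the abstract notes, is the independent per-table cost assumption: the actual cost of co-locating several tables on one device is not a simple sum, which is why AutoShard models multi-table costs directly.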
Related papers
- Progressive Entropic Optimal Transport Solvers [33.821924561619895]
We propose a new class of EOT solvers (ProgOT) that can estimate both plans and transport maps.
We provide experimental evidence demonstrating that ProgOT is a faster and more robust alternative to standard solvers.
We also prove statistical consistency of our approach for estimating optimal transport maps.
arXiv Detail & Related papers (2024-06-07T16:33:08Z)
- TAP4LLM: Table Provider on Sampling, Augmenting, and Packing Semi-structured Data for Large Language Model Reasoning [55.33939289989238]
We propose TAP4LLM as a versatile pre-processor suite for leveraging large language models (LLMs) in table-based tasks effectively.
It covers several distinct components: (1) table sampling to decompose large tables into manageable sub-tables based on query semantics, (2) table augmentation to enhance tables with additional knowledge from external sources or models, and (3) table packing & serialization to convert tables into various formats suitable for LLMs' understanding.
arXiv Detail & Related papers (2023-12-14T15:37:04Z)
- Pre-train and Search: Efficient Embedding Table Sharding with Pre-trained Neural Cost Models [56.65200574282804]
We propose a "pre-train, and search" paradigm for efficient sharding.
NeuroShard pre-trains neural cost models on augmented tables to cover various sharding scenarios.
NeuroShard significantly and consistently outperforms the state-of-the-art on the benchmark sharding dataset.
arXiv Detail & Related papers (2023-05-03T02:52:03Z)
- The Tensor Data Platform: Towards an AI-centric Database System [6.519203713828565]
We make the case that it is time to do the same for AI -- but with a twist!
We claim that achieving a truly AI-centric database requires moving the engine, at its core, from a relational to a tensor abstraction.
This allows us to: (1) support multi-modal data processing such as images, videos, audio, text as well as relational; (2) leverage the wellspring of innovation in HW and runtimes for tensor computation; and (3) exploit automatic differentiation to enable a novel class of "trainable" queries that can learn to perform a task.
arXiv Detail & Related papers (2022-11-04T21:26:16Z)
- DreamShard: Generalizable Embedding Table Placement for Recommender Systems [62.444159500899566]
We present a reinforcement learning (RL) approach for embedding table placement.
DreamShard achieves both reasoning about operation fusion and generalizability.
Experiments show that DreamShard substantially outperforms the existing human expert and RNN-based strategies.
arXiv Detail & Related papers (2022-10-05T05:12:02Z)
- OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering [106.73213656603453]
We develop a simple table-based QA model with minimal annotation effort.
We propose an omnivorous pretraining approach that consumes both natural and synthetic data.
arXiv Detail & Related papers (2022-07-08T01:23:45Z)
- AutoDistil: Few-shot Task-agnostic Neural Architecture Search for Distilling Large Language Models [121.22644352431199]
We use Neural Architecture Search (NAS) to automatically distill several compressed students with variable cost from a large model.
Current works train a single SuperLM consisting of millions of subnetworks with weight-sharing.
Experiments on GLUE benchmark against state-of-the-art KD and NAS methods demonstrate AutoDistil to outperform leading compression techniques.
arXiv Detail & Related papers (2022-01-29T06:13:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.