DynamicEmbedding: Extending TensorFlow for Colossal-Scale Applications
- URL: http://arxiv.org/abs/2004.08366v1
- Date: Fri, 17 Apr 2020 17:43:51 GMT
- Title: DynamicEmbedding: Extending TensorFlow for Colossal-Scale Applications
- Authors: Yun Zeng, Siqi Zuo, Dongcai Shen
- Abstract summary: One of the limitations of deep learning models with sparse features today stems from the predefined nature of their input.
We show that the resulting models are able to perform better and efficiently run at a much larger scale.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the limitations of deep learning models with sparse features today
stems from the predefined nature of their input, which requires a dictionary to
be defined prior to training. With this paper we propose both a theory and a
working system design which remove this limitation, and show that the resulting
models are able to perform better and efficiently run at a much larger scale.
Specifically, we achieve this by decoupling a model's content from its form to
tackle architecture evolution and memory growth separately. To efficiently
handle model growth, we propose a new neuron model, called DynamicCell, drawing
inspiration from the free energy principle [15] to introduce the concept
of reaction to discharge non-digestive energy, which also subsumes gradient
descent based approaches as its special cases. We implement DynamicCell by
introducing a new server into TensorFlow to take over most of the work
involving model growth. Consequently, it enables any existing deep learning
models to efficiently handle an arbitrary number of distinct sparse features
(e.g., search queries), and grow incessantly without redefining the model. Most
notably, one of our models, which has been reliably running in production for
over a year, is capable of suggesting high quality keywords for advertisers of
Google Smart Campaigns and achieved significant accuracy gains based on a
challenging metric -- evidence that data-driven, self-evolving systems can
potentially exceed the performance of traditional rule-based approaches.
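As a rough illustration of the dictionary-free idea above, the sketch below (plain Python/NumPy, not the paper's TensorFlow implementation) shows an embedding table that allocates rows for previously unseen string keys at lookup time, so the vocabulary grows with the incoming data instead of being fixed before training. The class name GrowableEmbedding and its methods are hypothetical; in the paper, this growth is delegated to a dedicated server inside TensorFlow, and the DynamicCell update rule generalizes the plain SGD step used here.

```python
import numpy as np

class GrowableEmbedding:
    """Minimal sketch of a dictionary-free embedding table.

    New string keys receive freshly initialized rows on first lookup,
    so no vocabulary needs to be declared before training. (Hypothetical
    illustration only; the paper's system handles growth via a dedicated
    TensorFlow server.)
    """

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.rng = np.random.default_rng(seed)
        self.key_to_row = {}              # str -> row index
        self.table = np.zeros((0, dim))   # grows as new keys arrive

    def lookup(self, keys):
        """Return embeddings for `keys`, allocating rows for unseen ones."""
        rows = []
        for key in keys:
            if key not in self.key_to_row:
                self.key_to_row[key] = self.table.shape[0]
                new_row = self.rng.normal(0.0, 0.05, size=(1, self.dim))
                self.table = np.vstack([self.table, new_row])
            rows.append(self.key_to_row[key])
        return self.table[rows]

    def apply_gradient(self, keys, grads, lr=0.1):
        """Plain SGD update on the looked-up rows (stand-in for DynamicCell)."""
        for key, grad in zip(keys, grads):
            self.table[self.key_to_row[key]] -= lr * grad

# Example: the vocabulary grows as previously unseen queries arrive.
emb = GrowableEmbedding(dim=4)
vecs = emb.lookup(["cheap flights", "running shoes", "cheap flights"])
print(vecs.shape)            # (3, 4)
print(len(emb.key_to_row))   # 2 distinct keys so far
```

Under this sketch, new sparse features such as fresh search queries simply acquire embedding rows on first appearance, which is the behavior the abstract describes as growing "incessantly without redefining the model".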
Related papers
- Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning [38.09011520275557]
Class-incremental learning (CIL) aims to train a model to learn new classes from non-stationary data streams without forgetting old ones.
We propose a new kind of connectionist model by tailoring neural unit dynamics that adapt the behavior of neural networks for CIL.
arXiv Detail & Related papers (2024-06-04T15:47:03Z)
- Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling [4.190836962132713]
This paper introduces Orchid, a novel architecture designed to address the quadratic complexity of traditional attention mechanisms.
At the core of this architecture lies a new data-dependent global convolution layer, which contextually adapts its kernel conditioned on the input sequence.
We evaluate the proposed model across multiple domains, including language modeling and image classification, to highlight its performance and generality.
arXiv Detail & Related papers (2024-02-28T17:36:45Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Generative Learning of Continuous Data by Tensor Networks [45.49160369119449]
We introduce a new family of tensor network generative models for continuous data.
We benchmark the performance of this model on several synthetic and real-world datasets.
Our methods give important theoretical and empirical evidence of the efficacy of quantum-inspired methods for the rapidly growing field of generative learning.
arXiv Detail & Related papers (2023-10-31T14:37:37Z)
- Your Autoregressive Generative Model Can be Better If You Treat It as an Energy-Based One [83.5162421521224]
We propose a unique method termed E-ARM for training autoregressive generative models.
E-ARM takes advantage of a well-designed energy-based learning objective.
We show that E-ARM can be trained efficiently and is capable of alleviating the exposure bias problem.
arXiv Detail & Related papers (2022-06-26T10:58:41Z)
- DST: Dynamic Substitute Training for Data-free Black-box Attack [79.61601742693713]
We propose a novel dynamic substitute training attack method to encourage substitute model to learn better and faster from the target model.
We introduce a task-driven graph-based structure information learning constraint to improve the quality of the generated training data.
arXiv Detail & Related papers (2022-04-03T02:29:11Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Autoregressive Dynamics Models for Offline Policy Evaluation and Optimization [60.73540999409032]
We show that expressive autoregressive dynamics models generate different dimensions of the next state and reward sequentially conditioned on previous dimensions.
We also show that autoregressive dynamics models are useful for offline policy optimization by serving as a way to enrich the replay buffer.
arXiv Detail & Related papers (2021-04-28T16:48:44Z)
- Dynamic Model Pruning with Feedback [64.019079257231]
We propose a novel model compression method that generates a sparse trained model without additional overhead.
We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models.
arXiv Detail & Related papers (2020-06-12T15:07:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.