On the Sparsity of Neural Machine Translation Models
- URL: http://arxiv.org/abs/2010.02646v1
- Date: Tue, 6 Oct 2020 11:47:20 GMT
- Title: On the Sparsity of Neural Machine Translation Models
- Authors: Yong Wang, Longyue Wang, Victor O.K. Li, Zhaopeng Tu
- Abstract summary: We investigate whether redundant parameters can be reused to achieve better performance.
Experiments and analyses are systematically conducted on different datasets and NMT architectures.
- Score: 65.49762428553345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern neural machine translation (NMT) models employ a large number of
parameters, which leads to serious over-parameterization and typically causes
the underutilization of computational resources. In response to this problem,
we empirically investigate whether the redundant parameters can be reused to
achieve better performance. Experiments and analyses are systematically
conducted on different datasets and NMT architectures. We show that: 1) the
pruned parameters can be rejuvenated to improve the baseline model by up to
+0.8 BLEU points; 2) the rejuvenated parameters are reallocated to enhance the
ability to model low-level lexical information.
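The abstract above does not spell out the rejuvenation procedure, so the following is only a minimal, hypothetical PyTorch sketch of the general idea: prune low-magnitude weights, re-initialize ("rejuvenate") the pruned slots, and continue training. The pruning criterion, rejuvenation scheme, and commented-out helper names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the prune-then-rejuvenate idea: identify
# low-magnitude weights, zero them out (pruning), then re-initialize
# those slots (rejuvenation) and continue training. Not the authors'
# exact criterion or schedule.
import torch
import torch.nn as nn


def magnitude_prune_masks(model: nn.Module, sparsity: float = 0.3):
    """Return {name: bool mask} marking the lowest-|w| entries as pruned."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                      # skip biases / norms
            continue
        k = int(param.numel() * sparsity)
        if k == 0:
            continue
        threshold = param.abs().flatten().kthvalue(k).values
        masks[name] = param.abs() <= threshold   # True = pruned slot
    return masks


def apply_pruning(model: nn.Module, masks: dict):
    """Zero out the pruned slots (the sparse baseline)."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param[masks[name]] = 0.0


def rejuvenate(model: nn.Module, masks: dict, std: float = 0.02):
    """Re-initialize the pruned slots so they can be trained again."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                noise = torch.randn_like(param) * std
                param[masks[name]] = noise[masks[name]]


# Hypothetical usage: train baseline -> prune -> rejuvenate -> keep training.
# model = build_nmt_model()          # placeholder helper
# masks = magnitude_prune_masks(model, sparsity=0.3)
# apply_pruning(model, masks)
# rejuvenate(model, masks)
# continue_training(model)           # placeholder helper
```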
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of several optimizers, parameterizations, learning rates, and model sizes.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Let's Focus on Neuron: Neuron-Level Supervised Fine-tuning for Large Language Model [43.107778640669544]
Large Language Models (LLMs) are composed of neurons that exhibit various behaviors and roles.
Recent studies have revealed that not all neurons are active across different datasets.
We introduce Neuron-Level Fine-Tuning (NeFT), a novel approach that refines the granularity of parameter training down to the individual neuron (a rough sketch follows this entry).
arXiv Detail & Related papers (2024-03-18T09:55:01Z)
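To make the neuron-level idea in the entry above concrete, here is a rough, hypothetical sketch that treats "neurons" as rows of a linear layer and restricts updates to selected rows by masking gradients; NeFT's actual neuron-selection criterion and training setup are not described here.

```python
# Hypothetical sketch of neuron-level fine-tuning: freeze everything
# except selected "neurons" (rows of a linear layer) by zeroing the
# gradients of all other rows before the optimizer step.
import torch
import torch.nn as nn

layer = nn.Linear(768, 3072)                    # stand-in for an LLM sub-layer
selected = torch.zeros(3072, dtype=torch.bool)  # which output neurons to tune
selected[[7, 42, 1001]] = True                  # chosen by some external criterion

# Gradient hooks keep updates only for the selected rows / bias entries.
layer.weight.register_hook(lambda g: g * selected.unsqueeze(1).to(g.dtype))
layer.bias.register_hook(lambda g: g * selected.to(g.dtype))

optimizer = torch.optim.SGD(layer.parameters(), lr=1e-2)
x = torch.randn(8, 768)
loss = layer(x).pow(2).mean()                   # dummy objective
loss.backward()
optimizer.step()                                # only the selected neurons move
```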
- Deep Learning for Fast Inference of Mechanistic Models' Parameters [0.28675177318965045]
We propose using Deep Neural Networks (NNs) to directly predict the parameters of mechanistic models from observations.
We consider a training procedure that combines neural networks and mechanistic models.
We find that, while the neural-network estimates can be slightly improved by a subsequent fitting step, they are measurably better than estimates from the fitting procedure alone (an illustrative sketch follows this entry).
arXiv Detail & Related papers (2023-12-05T22:16:54Z)
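As a generic illustration of predicting mechanistic-model parameters directly from observations (the entry above), the hypothetical sketch below trains a small network on simulated (parameter, trajectory) pairs from a toy exponential-decay model; the paper's actual mechanistic models, architectures, and training procedure may differ.

```python
# Hypothetical sketch of amortized parameter inference: a small network
# learns to map simulated observations back to the parameters that
# generated them, using a toy exponential-decay "mechanistic" model.
import torch
import torch.nn as nn

t = torch.linspace(0.0, 5.0, 50)


def simulate(theta: torch.Tensor) -> torch.Tensor:
    # theta[:, 0] = initial value, theta[:, 1] = decay rate
    return theta[:, :1] * torch.exp(-theta[:, 1:2] * t)


net = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    theta = torch.rand(128, 2) * 2.0 + 0.1                  # sample parameters
    obs = simulate(theta) + 0.01 * torch.randn(128, 50)     # noisy observations
    loss = (net(obs) - theta).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# net(observations) now gives fast parameter estimates, which can
# optionally be refined further by a classical fitting procedure.
```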
- Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers to reduce the model size.
arXiv Detail & Related papers (2023-03-27T02:34:09Z)
- On the Influence of Enforcing Model Identifiability on Learning dynamics of Gaussian Mixture Models [14.759688428864159]
We propose a technique for extracting submodels from singular models.
Our method enforces model identifiability during training.
We show how the method can be applied to more complex models like deep neural networks.
arXiv Detail & Related papers (2022-06-17T07:50:22Z)
- MoEfication: Conditional Computation of Transformer Models for Efficient Inference [66.56994436947441]
Transformer-based pre-trained language models achieve superior performance on most NLP tasks thanks to their large parameter capacity, but this also leads to high computation costs.
We explore accelerating large-model inference through conditional computation based on the sparse-activation phenomenon.
We propose to transform a large model into its mixture-of-experts (MoE) version with equal model size, namely MoEfication (a toy sketch of the idea follows this entry).
arXiv Detail & Related papers (2021-10-05T02:14:38Z)
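The MoEfication entry above only names the construction, so the following is a toy, hypothetical sketch of one way to split a ReLU feed-forward layer into experts by clustering rows of its first weight matrix and routing each token to its highest-scoring clusters; the paper's actual expert construction and router may differ.

```python
# Hypothetical sketch: split an FFN's hidden neurons into "experts" by
# clustering rows of W1, then activate only the top-scoring expert
# clusters per token (exploiting sparse ReLU activations).
import torch
from sklearn.cluster import KMeans

d_model, d_ff, n_experts, top_k = 768, 3072, 16, 2
W1 = torch.randn(d_ff, d_model) * 0.02       # stand-ins for trained FFN weights
W2 = torch.randn(d_model, d_ff) * 0.02

# Group hidden neurons (rows of W1) into experts.
labels = KMeans(n_clusters=n_experts, n_init=10).fit(W1.numpy()).labels_
experts = [torch.tensor((labels == e).nonzero()[0]) for e in range(n_experts)]


def moefied_ffn(x: torch.Tensor) -> torch.Tensor:
    h = torch.relu(x @ W1.T)                                  # (tokens, d_ff)
    # Score each expert by the total activation mass of its neurons.
    scores = torch.stack([h[:, idx].sum(-1) for idx in experts], dim=-1)
    keep = scores.topk(top_k, dim=-1).indices                 # (tokens, top_k)
    out = torch.zeros(x.shape[0], d_model)
    for t in range(x.shape[0]):
        for e in keep[t].tolist():
            idx = experts[e]
            out[t] += h[t, idx] @ W2[:, idx].T                # partial FFN output
    return out


print(moefied_ffn(torch.randn(4, d_model)).shape)             # torch.Size([4, 768])
```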
- Real-time Forecast Models for TBM Load Parameters Based on Machine Learning Methods [6.247628933072029]
In this paper, based on in-situ TBM operational data, we use machine-learning (ML) methods to build real-time forecast models for TBM load parameters.
To decrease model complexity and improve generalization, we also apply the least absolute shrinkage and selection operator (Lasso) method to extract the essential features of the forecast task (a brief Lasso example follows this entry).
arXiv Detail & Related papers (2021-04-12T07:31:39Z)
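As a generic illustration of Lasso-based feature selection for a forecasting task like the one above (not the paper's data or pipeline), the sketch below selects features on synthetic data and then fits a separate forecast model on the reduced feature set.

```python
# Hypothetical sketch: use Lasso to pick a sparse subset of operational
# features, then fit a forecast model on the selected features only.
# Synthetic data stands in for in-situ TBM operational records.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 30))              # 30 candidate operational features
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.5 * X[:, 7] + 0.1 * rng.normal(size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Lasso drives uninformative coefficients to exactly zero.
lasso = LassoCV(cv=5).fit(X_train, y_train)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print("selected features:", selected)

# Forecast model trained on the reduced feature set.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train[:, selected], y_train)
print("R^2 on held-out data:", model.score(X_test[:, selected], y_test))
```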
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.