Exploring Classic and Neural Lexical Translation Models for Information
Retrieval: Interpretability, Effectiveness, and Efficiency Benefits
- URL: http://arxiv.org/abs/2102.06815v1
- Date: Fri, 12 Feb 2021 23:21:55 GMT
- Title: Exploring Classic and Neural Lexical Translation Models for Information
Retrieval: Interpretability, Effectiveness, and Efficiency Benefits
- Authors: Leonid Boytsov, Zico Kolter
- Abstract summary: We use the neural Model 1 as an aggregator layer applied to context-free or contextualized query/document embeddings.
We show that adding an interpretable neural Model 1 layer on top of BERT-based contextualized embeddings does not decrease accuracy and/or efficiency.
We produced the best neural and non-neural runs on the MS MARCO document ranking leaderboard in late 2020.
- Score: 0.11421942894219898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the utility of the lexical translation model (IBM Model 1) for
English text retrieval, in particular, its neural variants that are trained
end-to-end. We use the neural Model 1 as an aggregator layer applied to
context-free or contextualized query/document embeddings. This new approach to
designing a neural ranking system has benefits for effectiveness, efficiency, and
interpretability. Specifically, we show that adding an interpretable neural
Model 1 layer on top of BERT-based contextualized embeddings (1) does not
decrease accuracy and/or efficiency; and (2) may overcome the limitation on the
maximum sequence length of existing BERT models. The context-free neural Model
1 is less effective than a BERT-based ranking model, but it can run efficiently
on a CPU (without expensive index-time precomputation or query-time operations
on large tensors). Using Model 1, we produced the best neural and non-neural runs on
the MS MARCO document ranking leaderboard in late 2020.
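To make the aggregator-layer idea concrete, the following is a minimal PyTorch sketch of how a neural Model 1 layer might sit on top of query/document token embeddings. The class name, the feedforward pair scorer, and the sigmoid relaxation of the translation probabilities are illustrative assumptions; the paper's exact parametrization and normalization of T(q|d) may differ.

import torch
import torch.nn as nn

class NeuralModel1Layer(nn.Module):
    # Hypothetical sketch of a neural Model 1 aggregator layer over
    # query/document token embeddings (context-free or BERT-based).
    # It scores query Q against document D with the classic IBM Model 1
    # aggregation:
    #     log P(Q|D) = sum_{q in Q} log( (1/|D|) * sum_{d in D} T(q|d) )
    # where T(q|d) is a learned translation probability.
    def __init__(self, emb_dim, hidden_dim=128):
        super().__init__()
        # Small feedforward net scoring one (query token, document token) pair.
        self.pair_scorer = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, query_emb, doc_emb):
        # query_emb: (num_query_tokens, emb_dim); doc_emb: (num_doc_tokens, emb_dim)
        nq, nd = query_emb.size(0), doc_emb.size(0)
        q = query_emb.unsqueeze(1).expand(nq, nd, -1)
        d = doc_emb.unsqueeze(0).expand(nq, nd, -1)
        pairs = torch.cat([q, d], dim=-1)                            # (nq, nd, 2*emb_dim)
        # Squash pair scores into (0, 1); this relaxes Model 1's sum-to-one
        # constraint on T(q|d), which the paper may handle differently.
        t_prob = torch.sigmoid(self.pair_scorer(pairs)).squeeze(-1)  # (nq, nd)
        # The (nq, nd) matrix t_prob is what makes the layer interpretable:
        # it shows which document tokens "explain" each query token.
        per_query_term = t_prob.mean(dim=1).clamp_min(1e-10)         # (nq,)
        return per_query_term.log().sum()                            # relevance score

# Example: score a 5-token query against a 200-token document (768-dim embeddings).
layer = NeuralModel1Layer(emb_dim=768)
score = layer(torch.randn(5, 768), torch.randn(200, 768))

With context-free embeddings, scoring reduces to small per-pair operations like the ones above, which is consistent with the abstract's point that the context-free neural Model 1 can run on a CPU without index-time precomputation or query-time operations on large tensors.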
Related papers
- Multi-label Text Classification using GloVe and Neural Network Models [0.27195102129094995]
Existing solutions include traditional machine learning models and deep neural networks.
This paper proposes a method that combines a bag-of-words approach based on the GloVe model with a CNN-BiLSTM network.
The method achieves an accuracy rate of 87.26% on the test set and an F1 score of 0.8737, showcasing promising results.
arXiv Detail & Related papers (2023-10-25T01:30:26Z) - Mitigating Data Scarcity for Large Language Models [7.259279261659759]
In recent years, pretrained neural language models (PNLMs) have taken the field of natural language processing by storm.
Data scarcity is commonly found in specialized domains, such as medicine, or in low-resource languages that are underexplored by AI research.
In this dissertation, we focus on mitigating data scarcity using data augmentation and neural ensemble learning techniques.
arXiv Detail & Related papers (2023-02-03T15:17:53Z) - A Natural Bias for Language Generation Models [31.44752136404971]
We show that we can endow standard neural language generation models with a separate module that reflects unigram frequency statistics as prior knowledge.
We use neural machine translation as a test bed for this simple technique and observe that it (i) improves learning efficiency; (ii) achieves better overall performance; and, perhaps most importantly, (iii) appears to disentangle strong frequency effects.
arXiv Detail & Related papers (2022-12-19T18:14:36Z) - MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided
Adaptation [68.30497162547768]
We propose MoEBERT, which uses a Mixture-of-Experts structure to increase model capacity and inference speed.
We validate the efficiency and effectiveness of MoEBERT on natural language understanding and question answering tasks.
arXiv Detail & Related papers (2022-04-15T23:19:37Z) - Open Vocabulary Electroencephalography-To-Text Decoding and Zero-shot
Sentiment Classification [78.120927891455]
State-of-the-art brain-to-text systems have achieved great success in decoding language directly from brain signals using neural networks.
In this paper, we extend the problem to open vocabulary Electroencephalography (EEG)-To-Text Sequence-To-Sequence decoding and zero-shot sentence sentiment classification on natural reading tasks.
Our model achieves a 40.1% BLEU-1 score on EEG-To-Text decoding and a 55.6% F1 score on zero-shot EEG-based ternary sentiment classification, which significantly outperforms supervised baselines.
arXiv Detail & Related papers (2021-12-05T21:57:22Z) - AutoBERT-Zero: Evolving BERT Backbone from Scratch [94.89102524181986]
We propose an Operation-Priority Neural Architecture Search (OP-NAS) algorithm to automatically search for promising hybrid backbone architectures.
We optimize both the search algorithm and evaluation of candidate models to boost the efficiency of our proposed OP-NAS.
Experiments show that the searched architecture (named AutoBERT-Zero) significantly outperforms BERT and its variants of different model capacities in various downstream tasks.
arXiv Detail & Related papers (2021-07-15T16:46:01Z) - LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking [62.634516517844496]
We propose LNN-EL, a neuro-symbolic approach that combines the advantages of using interpretable rules with the performance of neural learning.
Even though constrained to using rules, LNN-EL performs competitively against SotA black-box neural approaches.
arXiv Detail & Related papers (2021-06-17T20:22:45Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - DeBERTa: Decoding-enhanced BERT with Disentangled Attention [119.77305080520718]
We propose a new model architecture DeBERTa that improves the BERT and RoBERTa models using two novel techniques.
We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural language generation (NLG) downstream tasks.
arXiv Detail & Related papers (2020-06-05T19:54:34Z) - Abstractive Text Summarization based on Language Model Conditioning and
Locality Modeling [4.525267347429154]
We condition a Transformer-based neural model on the BERT language model.
In addition, we propose a new method of BERT-windowing, which allows chunk-wise processing of texts longer than the BERT window size (a minimal windowing sketch follows this list).
The results of our models are compared to a baseline and the state-of-the-art models on the CNN/Daily Mail dataset.
arXiv Detail & Related papers (2020-03-29T14:00:17Z)
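As a rough illustration of the chunk-wise processing mentioned in the BERT-windowing entry above, here is a minimal Python sketch that splits a long token-id sequence into overlapping windows. The window size, stride, and the downstream combination of per-window BERT outputs are assumptions for illustration, not the paper's exact procedure.

def split_into_windows(token_ids, max_len=512, stride=256):
    # Split a token-id sequence that exceeds the BERT input limit into
    # overlapping windows of at most max_len tokens; stride controls overlap.
    windows = []
    start = 0
    while start < len(token_ids):
        windows.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
        start += stride
    return windows

# Each window can then be encoded by BERT separately, and the per-window
# representations combined (e.g., averaged) by the downstream model.
chunks = split_into_windows(list(range(1300)))
print([len(c) for c in chunks])  # [512, 512, 512, 512, 276]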