Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
- URL: http://arxiv.org/abs/2110.01518v1
- Date: Mon, 4 Oct 2021 15:37:07 GMT
- Title: Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics
- Authors: Prajjwal Bhargava, Aleksandr Drozd, Anna Rogers
- Abstract summary: We conduct a case study of generalization in NLI in a range of BERT-based architectures.
We report 2 successful and 3 unsuccessful strategies, all providing insights into how Transformer-based models learn to generalize.
- Score: 78.6177778161625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Much of recent progress in NLU was shown to be due to models' learning
dataset-specific heuristics. We conduct a case study of generalization in NLI
(from MNLI to the adversarially constructed HANS dataset) in a range of
BERT-based architectures (adapters, Siamese Transformers, HEX debiasing), as
well as with subsampling the data and increasing the model size. We report 2
successful and 3 unsuccessful strategies, all providing insights into how
Transformer-based models learn to generalize.
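The headline experiment is easy to outline in code. Below is a minimal sketch of the MNLI-to-HANS generalization check using HuggingFace Transformers; the checkpoint name is a placeholder for any BERT fine-tuned on MNLI, and the label collapse follows HANS's convention that neutral and contradiction both count as non-entailment.

```python
# Minimal sketch of evaluating an MNLI-finetuned BERT on HANS.
# "your-org/bert-base-mnli" is a placeholder checkpoint name; MNLI label
# order differs between checkpoints, so id2label is consulted explicitly.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ckpt = "your-org/bert-base-mnli"  # placeholder: any BERT fine-tuned on MNLI
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt).eval()

# HANS labels: 0 = entailment, 1 = non-entailment.
hans = load_dataset("hans", split="validation")

correct = 0
for ex in hans:
    enc = tok(ex["premise"], ex["hypothesis"], return_tensors="pt", truncation=True)
    with torch.no_grad():
        pred3 = model(**enc).logits.argmax(-1).item()
    # Collapse the 3-way MNLI prediction to HANS's binary scheme:
    # neutral and contradiction both count as non-entailment.
    pred2 = 0 if model.config.id2label[pred3].lower().startswith("entail") else 1
    correct += int(pred2 == ex["label"])

print(f"HANS accuracy: {correct / len(hans):.3f}")
```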
Related papers
- Improved Generalization Bounds for Communication Efficient Federated Learning [4.3707341422218215]
This paper focuses on reducing the communication cost of federated learning by exploring generalization bounds and representation learning.
We design a novel Federated Learning with Adaptive Local Steps (FedALS) algorithm based on our generalization bound and representation learning analysis.
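A minimal sketch of the adaptive-local-steps idea in a FedAvg-style loop; module names, sync periods, and the local-training stub are illustrative assumptions rather than FedALS's actual schedule:

```python
# Schematic FedAvg-style loop with adaptive local steps: the shared
# representation ("backbone") is aggregated less often than the task head,
# reflecting the paper's representation-learning intuition. Module names,
# sync periods, and the local training stub are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F

def local_steps(model, loader, steps=10, lr=0.1):
    """A few plain SGD steps on the client's own data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for (x, y), _ in zip(loader, range(steps)):
        opt.zero_grad()
        F.cross_entropy(model(x), y).backward()
        opt.step()

def average_module(models, attr):
    """Average one sub-module's parameters across all client models, in place."""
    params = [dict(getattr(m, attr).named_parameters()) for m in models]
    for name in params[0]:
        mean = torch.stack([p[name].data for p in params]).mean(0)
        for p in params:
            p[name].data.copy_(mean)

def fed_als(global_model, client_loaders, rounds=100, head_every=1, backbone_every=4):
    # Client models are assumed to expose .backbone and .head sub-modules.
    clients = [copy.deepcopy(global_model) for _ in client_loaders]
    for r in range(1, rounds + 1):
        for model, loader in zip(clients, client_loaders):
            local_steps(model, loader)
        if r % head_every == 0:        # frequent sync for the head
            average_module(clients, "head")
        if r % backbone_every == 0:    # infrequent sync for the representation
            average_module(clients, "backbone")
    return clients[0]
```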
arXiv Detail & Related papers (2024-04-17T21:17:48Z) - Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
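A hedged sketch of the primitive-augmentation half of this recipe: mint fresh primitive symbols and substitute them consistently on both source and target sides, so memorizing whole examples stops paying off (the "#i" naming is an illustrative assumption, not the paper's exact scheme):

```python
# Sketch of primitive augmentation: create fresh primitive symbols and
# substitute them consistently in source and target, so memorizing whole
# training examples no longer works. The "#i" naming is an illustrative
# assumption, not the paper's exact scheme.
import random

def augment_primitives(pairs, primitives, n_new=2, seed=0):
    """pairs: list of (source, target) token lists; primitives: {src_tok: tgt_tok}."""
    rng = random.Random(seed)
    out = list(pairs)
    for i in range(n_new):
        src_old, tgt_old = rng.choice(sorted(primitives.items()))
        src_new, tgt_new = f"{src_old}#{i}", f"{tgt_old}#{i}"   # fresh symbols
        for src, tgt in pairs:
            if src_old in src:
                out.append(([src_new if t == src_old else t for t in src],
                            [tgt_new if t == tgt_old else t for t in tgt]))
    return out

pairs = [(["jump"], ["JUMP"]), (["jump", "twice"], ["JUMP", "JUMP"])]
for src, tgt in augment_primitives(pairs, {"jump": "JUMP"}):
    print(src, "->", tgt)
```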
arXiv Detail & Related papers (2022-11-28T17:36:41Z) - Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
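The simplest instance of the retrieval-based predictors such analyses cover is a k-nearest-neighbor classifier, where prediction comes from retrieved training examples rather than one global parametric fit. A minimal NumPy sketch, not the paper's exact setting:

```python
# Minimal retrieval-based (local) classifier: predict each test point from
# the labels of its k nearest training examples. Pure NumPy; no claims about
# the paper's exact analysis setup.
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    # Pairwise squared Euclidean distances, shape (n_test, n_train).
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    neighbors = np.argsort(d2, axis=1)[:, :k]   # k closest training points
    votes = y_train[neighbors]                  # their labels, shape (n_test, k)
    return np.array([np.bincount(v).argmax() for v in votes])  # majority vote

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)         # toy labels: which half-plane
print(knn_predict(X, y, rng.normal(size=(5, 2)), k=7))
```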
arXiv Detail & Related papers (2022-10-06T00:33:01Z) - N-Grammer: Augmenting Transformers with latent n-grams [35.39961549040385]
We propose a simple yet effective modification to the Transformer architecture inspired by the literature in statistical language modeling, by augmenting the model with n-grams that are constructed from a discrete latent representation of the text sequence.
We evaluate our model, the N-Grammer, on language modeling on the C4 dataset as well as text classification on the SuperGLUE dataset, and find that it outperforms several strong baselines such as the Transformer and the Primer.
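A hedged sketch of the core mechanism: derive a discrete latent code per position, form bigram IDs from consecutive codes, hash them into a fixed embedding table, and fuse the result with the token embeddings (sizes and hashing below are illustrative; the paper derives its latents via product quantization):

```python
# Sketch of the N-Grammer idea: assign each position a discrete latent code,
# form bigram IDs from consecutive codes, hash them into a fixed n-gram
# embedding table, and fuse those embeddings with the token embeddings before
# the Transformer stack. Sizes and hashing are illustrative assumptions.
import torch
import torch.nn as nn

class NGramAugmenter(nn.Module):
    def __init__(self, d_model=256, n_codes=512, ngram_vocab=4096):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(n_codes, d_model))
        self.ngram_emb = nn.Embedding(ngram_vocab, d_model)
        self.n_codes, self.ngram_vocab = n_codes, ngram_vocab

    def forward(self, token_emb):                          # (batch, seq, d_model)
        # 1) Discrete latent IDs via nearest codebook entry.
        d2 = ((token_emb.unsqueeze(2) - self.codebook) ** 2).sum(-1)
        codes = d2.argmin(-1)                              # (batch, seq)
        # 2) Bigram IDs from consecutive codes, hashed into a fixed table.
        bigrams = (codes[:, :-1] * self.n_codes + codes[:, 1:]) % self.ngram_vocab
        bigrams = torch.cat([bigrams[:, :1], bigrams], dim=1)  # pad to seq length
        # 3) Fuse n-gram embeddings with the token embeddings.
        return token_emb + self.ngram_emb(bigrams)

x = torch.randn(2, 10, 256)
print(NGramAugmenter()(x).shape)   # torch.Size([2, 10, 256])
```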
arXiv Detail & Related papers (2022-07-13T17:18:02Z) - A new hope for network model generalization [66.5377859849467]
Generalizing machine learning models for network traffic dynamics tends to be considered a lost cause.
An ML architecture called the Transformer has enabled previously unimaginable generalization in other domains.
We propose a Network Traffic Transformer (NTT) to learn network dynamics from packet traces.
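A schematic sketch of an NTT-style model: embed per-packet features from a trace and let a standard Transformer encoder model the sequence, here predicting a single value such as the next packet's delay (the feature choice and prediction head are illustrative assumptions, not the paper's design):

```python
# Schematic Network-Traffic-Transformer-style model: per-packet features
# (here just inter-arrival time and size) are embedded and fed to a standard
# Transformer encoder. Features and head are illustrative assumptions.
import torch
import torch.nn as nn

class PacketTransformer(nn.Module):
    def __init__(self, n_features=2, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)   # per-packet feature embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)             # e.g. next-packet delay

    def forward(self, trace):                          # (batch, packets, n_features)
        h = self.encoder(self.embed(trace))
        return self.head(h[:, -1])                     # predict from last position

trace = torch.rand(8, 32, 2)    # toy batch: 8 traces, 32 packets, 2 features each
print(PacketTransformer()(trace).shape)   # torch.Size([8, 1])
```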
arXiv Detail & Related papers (2022-07-12T21:16:38Z) - Transformers: "The End of History" for NLP? [17.36054090232896]
We shed light on some important theoretical limitations of pre-trained BERT-style models.
We show that addressing these limitations can yield sizable improvements over vanilla RoBERTa and XLNet.
We offer a more general discussion on desiderata for future additions to the Transformer architecture.
arXiv Detail & Related papers (2021-04-09T08:29:42Z) - SIT3: Code Summarization with Structure-Induced Transformer [48.000063280183376]
We propose a novel model based on structure-induced self-attention, which encodes sequential inputs with highly effective structure modeling.
Our newly proposed model achieves new state-of-the-art results on popular benchmarks.
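The mechanism is easy to sketch: ordinary scaled dot-product attention with a mask built from a structure graph (for code, e.g. an AST adjacency matrix), so each token attends only to its structural neighbors. This illustrates the idea, not the model's exact formulation:

```python
# Minimal structure-induced self-attention: plain scaled dot-product
# attention restricted by a structure mask (e.g. AST adjacency), so tokens
# only attend to their structural neighbors. An illustrative sketch only.
import torch
import torch.nn.functional as F

def structure_attention(x, adjacency):
    """x: (seq, d); adjacency: (seq, seq) bool, True where attention is allowed."""
    scores = (x @ x.T) / x.size(-1) ** 0.5             # plain dot-product scores
    scores = scores.masked_fill(~adjacency, float("-inf"))
    return F.softmax(scores, dim=-1) @ x               # structure-masked mixing

seq, d = 6, 16
x = torch.randn(seq, d)
adj = torch.eye(seq).bool()                             # self-loops...
adj = adj | torch.diag(torch.ones(seq - 1), 1).bool()   # ...plus chain edges
adj = adj | adj.T                                       # symmetrize
print(structure_attention(x, adj).shape)                # torch.Size([6, 16])
```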
arXiv Detail & Related papers (2020-12-29T11:37:43Z) - Learning Contextual Representations for Semantic Parsing with Generation-Augmented Pre-Training [86.91380874390778]
We present Generation-Augmented Pre-training (GAP), which jointly learns representations of natural language utterances and table schemas by leveraging generation models to generate pre-training data.
Based on experimental results, neural semantic parsers that leverage the GAP model obtain new state-of-the-art results on both the SPIDER and CRITERIA-TO-SQL benchmarks.
arXiv Detail & Related papers (2020-12-18T15:53:50Z) - Generative Adversarial Networks for Annotated Data Augmentation in Data Sparse NLU [0.76146285961466]
Data sparsity is one of the key challenges associated with model development in Natural Language Understanding.
We present our results on boosting NLU model performance through training data augmentation using a sequential generative adversarial network (GAN).
Our experiments reveal that synthetic data generated by the sequential GAN provides significant performance boosts across multiple metrics.
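A schematic version of the augmentation pipeline: a pre-trained sequential generator proposes labeled synthetic utterances, a discriminator scores their realism, and only high-scoring samples join the sparse training set (both models below are toy stand-ins; training the GAN itself, which needs policy-gradient tricks for discrete text, is out of scope here):

```python
# Schematic GAN-based augmentation loop: generate labeled synthetic
# utterances, keep only those the discriminator scores as realistic, and add
# them to the training set. Generator and discriminator are toy stand-ins.
import random

def augment(train_set, generator, discriminator, n_samples=100, threshold=0.8):
    labels = sorted({y for _, y in train_set})
    synthetic = []
    for _ in range(n_samples):
        label = random.choice(labels)
        utterance = generator(label)                 # sample a synthetic utterance
        if discriminator(utterance) >= threshold:    # keep only realistic ones
            synthetic.append((utterance, label))
    return train_set + synthetic

# Toy stand-ins so the sketch runs end to end.
toy_gen = lambda label: f"please {label} the lights"
toy_disc = lambda text: random.random()
data = [("turn on the lamp", "activate"), ("switch it off", "deactivate")]
print(len(augment(data, toy_gen, toy_disc)))
```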
arXiv Detail & Related papers (2020-12-09T20:38:17Z) - KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation [100.79870384880333]
We propose knowledge-grounded pre-training (KGPT) to generate knowledge-enriched text.
We adopt three settings, namely fully-supervised, zero-shot, and few-shot, to evaluate its effectiveness.
Under the zero-shot setting, our model achieves over 30 ROUGE-L on WebNLG, while all other baselines fail.
arXiv Detail & Related papers (2020-10-05T19:59:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.