On the Compositional Generalization Gap of In-Context Learning
- URL: http://arxiv.org/abs/2211.08473v1
- Date: Tue, 15 Nov 2022 19:56:37 GMT
- Title: On the Compositional Generalization Gap of In-Context Learning
- Authors: Arian Hosseini, Ankit Vani, Dzmitry Bahdanau, Alessandro Sordoni,
Aaron Courville
- Abstract summary: We look at the gap between the in-distribution (ID) and out-of-distribution (OOD) performance of such models in semantic parsing tasks with in-context learning.
We evaluate four model families, OPT, BLOOM, CodeGen and Codex on three semantic parsing datasets.
- Score: 73.09193595292233
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pretrained large generative language models have shown great performance on
many tasks, but exhibit low compositional generalization abilities. Scaling
such models has been shown to improve their performance on various NLP tasks
even just by conditioning them on a few examples to solve the task without any
fine-tuning (also known as in-context learning). In this work, we look at the
gap between the in-distribution (ID) and out-of-distribution (OOD) performance
of such models in semantic parsing tasks with in-context learning. In the ID
settings, the demonstrations are from the same split (test or train) that the
model is being evaluated on, and in the OOD settings, they are from the other
split. We look at how the relative generalization gap of in-context learning
evolves as models are scaled up. We evaluate four model families, OPT, BLOOM,
CodeGen and Codex, on three semantic parsing datasets, CFQ, SCAN and GeoQuery,
with different numbers of exemplars, and observe a trend of decreasing relative
generalization gap as models are scaled up.
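The abstract does not spell out the prompt format or the exact definition of the relative generalization gap. The sketch below is a minimal illustration of the described setup, assuming exact-match accuracy and defining the gap as the ID-OOD accuracy drop normalized by ID accuracy; `build_prompt`, `exact_match_accuracy`, and the `model` callable are hypothetical stand-ins for illustration, not the authors' code.

```python
import random


def build_prompt(demos, query):
    """Concatenate (input, output) exemplars followed by the query input.

    The paper's actual prompt template is not reproduced here; this is a
    generic semantic-parsing format assumed for illustration.
    """
    blocks = [f"Input: {x}\nOutput: {y}" for x, y in demos]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)


def exact_match_accuracy(model, eval_split, demo_split, k=4, seed=0):
    """Accuracy on `eval_split` with k exemplars sampled from `demo_split`.

    ID setting:  demo_split is the same split the model is evaluated on.
    OOD setting: demo_split is the other split.
    `model` is a hypothetical callable mapping a prompt string to a parse.
    """
    rng = random.Random(seed)
    correct = 0
    for query, gold in eval_split:
        pool = [d for d in demo_split if d[0] != query]  # avoid leaking the query itself
        demos = rng.sample(pool, k)
        pred = model(build_prompt(demos, query)).strip()
        correct += int(pred == gold)
    return correct / len(eval_split)


def relative_generalization_gap(id_acc, ood_acc):
    """Assumed definition: the ID-OOD accuracy drop, normalized by ID accuracy."""
    return (id_acc - ood_acc) / id_acc if id_acc > 0 else 0.0
```

Under this reading, the trend reported in the paper corresponds to `relative_generalization_gap` shrinking as the evaluated model's parameter count grows.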
Related papers
- Observational Scaling Laws and the Predictability of Language Model Performance [51.2336010244645]
We propose an observational approach that bypasses model training and instead builds scaling laws from about 100 publicly available models.
We show that several emergent phenomena follow a smooth, sigmoidal behavior and are predictable from small models.
We show how to predict the impact of post-training interventions like Chain-of-Thought and Self-Consistency as language model capabilities continue to improve.
arXiv Detail & Related papers (2024-05-17T17:49:44Z)
- Corpus Considerations for Annotator Modeling and Scaling [9.263562546969695]
We show that the commonly used user token model consistently outperforms more complex models.
Our findings shed light on the relationship between corpus statistics and annotator modeling performance.
arXiv Detail & Related papers (2024-04-02T22:27:24Z)
- Unveiling the Generalization Power of Fine-Tuned Large Language Models [81.70754292058258]
We investigate whether fine-tuning affects the generalization ability intrinsic to Large Language Models (LLMs).
Our main findings reveal that models fine-tuned on generation and classification tasks exhibit dissimilar behaviors in generalizing to different domains and tasks.
We observe that integrating the in-context learning strategy during fine-tuning on generation tasks can enhance the model's generalization ability.
arXiv Detail & Related papers (2024-03-14T08:18:59Z)
- Robust Fine-Tuning of Vision-Language Models for Domain Generalization [6.7181844004432385]
Foundation models have impressive zero-shot inference capabilities and robustness under distribution shifts.
We present a new recipe for few-shot fine-tuning of the popular vision-language foundation model CLIP.
Our experimentation demonstrates that, while zero-shot CLIP fails to match the performance of trained vision models on more complex benchmarks, few-shot CLIP fine-tuning outperforms its vision-only counterparts.
arXiv Detail & Related papers (2023-11-03T20:50:40Z)
- Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation [35.72916406365469]
We compare the generalization of few-shot fine-tuning and in-context learning to challenge datasets.
Our results show that fine-tuned language models can in fact generalize well out-of-domain.
arXiv Detail & Related papers (2023-05-26T13:55:17Z)
- Training Trajectories of Language Models Across Scales [99.38721327771208]
Scaling up language models has led to unprecedented performance gains.
How do language models of different sizes learn during pre-training?
Why do larger language models demonstrate more desirable behaviors?
arXiv Detail & Related papers (2022-12-19T19:16:29Z)
- Assessing Out-of-Domain Language Model Performance from Few Examples [38.245449474937914]
We address the task of predicting out-of-domain (OOD) performance in a few-shot fashion.
We benchmark the performance on this task when looking at model accuracy on the few-shot examples.
We show that attribution-based factors can help rank relative model OOD performance.
arXiv Detail & Related papers (2022-10-13T04:45:26Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis [90.24921443175514]
We focus on aspect-based sentiment analysis, which involves extracting aspect terms and categories and predicting their corresponding polarities.
We propose to reformulate the extraction and prediction tasks as a sequence generation task, using a generative language model with unidirectional attention.
Our approach outperforms the previous state-of-the-art (based on BERT) on average performance by a large margin in both few-shot and full-shot settings.
arXiv Detail & Related papers (2022-04-11T18:31:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.