Approximating Human-Like Few-shot Learning with GPT-based Compression
- URL: http://arxiv.org/abs/2308.06942v1
- Date: Mon, 14 Aug 2023 05:22:33 GMT
- Title: Approximating Human-Like Few-shot Learning with GPT-based Compression
- Authors: Cynthia Huang, Yuqing Xie, Zhiying Jiang, Jimmy Lin, Ming Li
- Abstract summary: We seek to equip generative pre-trained models with human-like learning capabilities that enable data compression during inference.
We present a novel approach that utilizes the Generative Pre-trained Transformer (GPT) to approximate Kolmogorov complexity.
- Score: 55.699707962017975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we conceptualize the learning process as information
compression. We seek to equip generative pre-trained models with human-like
learning capabilities that enable data compression during inference. We present
a novel approach that utilizes the Generative Pre-trained Transformer (GPT) to
approximate Kolmogorov complexity, with the aim of estimating the optimal
Information Distance for few-shot learning. We first propose using GPT as a
prior for lossless text compression, achieving a noteworthy compression ratio.
Experiments with the LLAMA2-7B backbone achieve a compression ratio of 15.5 on
enwik9. We justify the pre-training objective of GPT models by demonstrating
its equivalence to the compression length, and, consequently, its ability to
approximate the information distance for texts. Leveraging the approximated
information distance, our method allows the direct application of GPT models in
quantitative text similarity measurements. Experimental results show that our
method achieves superior overall performance compared to embedding and prompt
baselines on challenging NLP tasks, including semantic similarity, zero- and
one-shot text classification, and zero-shot text ranking.
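As background for the information distance referenced above: the abstract does not spell out the estimator, so the following are the standard definitions from the Kolmogorov-complexity literature that this line of work builds on, not necessarily the paper's exact formulation. With K(.) the (uncomputable) Kolmogorov complexity and C(.) the output length of a real compressor:

    E(x, y) = \max\{ K(x \mid y),\ K(y \mid x) \}
    \mathrm{NCD}(x, y) = \frac{ C(xy) - \min\{ C(x),\, C(y) \} }{ \max\{ C(x),\, C(y) \} }

The paper's proposal, as described in the abstract, is to use the GPT code length -\log_2 p_{\mathrm{GPT}}(\cdot) in place of C(\cdot), which is the same quantity the next-token-prediction objective minimizes.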
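A minimal sketch of how such an estimator can be put together, assuming a HuggingFace causal LM ("gpt2" stands in for the GPT and LLAMA2-7B backbones mentioned above) and the standard NCD formula as the distance; it only estimates code lengths from model log-probabilities, ignoring the overhead of an actual entropy coder, the cost of coding the first token, and the windowing needed for texts longer than the model's context:

```python
# Minimal sketch (not the authors' code) of GPT-based compression-length estimation
# and an NCD-style distance on top of it.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()


@torch.no_grad()
def compression_length_bits(text: str) -> float:
    """Estimated code length of `text` in bits: sum of -log2 p(token | prefix)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    # The HF causal-LM loss is the mean cross-entropy (in nats) over the predicted
    # (i.e., all but the first) tokens, so total nats = loss * (seq_len - 1).
    loss = model(ids, labels=ids).loss
    return loss.item() * (ids.shape[1] - 1) / math.log(2)


def compression_ratio(text: str) -> float:
    """Raw UTF-8 size in bits divided by the estimated compressed size in bits."""
    return 8 * len(text.encode("utf-8")) / compression_length_bits(text)


def ncd(x: str, y: str) -> float:
    """Normalized compression distance with the model's code length as C(.)."""
    cx, cy = compression_length_bits(x), compression_length_bits(y)
    cxy = compression_length_bits(x + " " + y)  # the separator is a free design choice
    return (cxy - min(cx, cy)) / max(cx, cy)


def one_shot_classify(query: str, shots: dict[str, str]) -> str:
    """Pick the label whose single example text is closest to the query under NCD."""
    return min(shots, key=lambda label: ncd(query, shots[label]))


if __name__ == "__main__":
    sample = "Language modeling and compression are two sides of the same objective."
    print(f"code length: {compression_length_bits(sample):.1f} bits, "
          f"ratio: {compression_ratio(sample):.2f}x")
    examples = {
        "sports": "The team clinched the title with a stoppage-time goal.",
        "finance": "Shares slid after the company cut its earnings forecast.",
    }
    print(one_shot_classify("The striker scored twice in the cup final.", examples))
```

For the reported lossless-compression result (15.5 on enwik9), the model's predictions would additionally be combined with an entropy coder such as arithmetic coding to produce an actual bitstream; the sketch above only needs the code-length estimate, which is all the similarity and classification computations use.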
Related papers
- Selection-p: Self-Supervised Task-Agnostic Prompt Compression for Faithfulness and Transferability [67.77534983324229]
In this paper, we investigate the ability of Large Language Models to develop a unified compression method that discretizes uninformative tokens.
Experiments show Selection-p achieves state-of-the-art performance across numerous classification tasks.
It exhibits superior transferability to different models compared to prior work.
arXiv Detail & Related papers (2024-10-15T17:05:25Z) - Point Cloud Compression with Bits-back Coding [32.9521748764196]
This paper focuses on using a deep learning-based probabilistic model to estimate the Shannon entropy of the point cloud information.
Once the entropy of the point cloud dataset is estimated, we use the learned CVAE model to compress the geometric attributes of the point clouds.
The novelty of our bits-back coding method lies in utilizing the learned latent variable model of the CVAE to compress the point cloud data.
arXiv Detail & Related papers (2024-10-09T06:34:48Z) - Ranking LLMs by compression [13.801767671391604]
We use five large language models as priors for compression, then compare their performance on challenging natural language processing tasks.
Experimental results show that compression ratio and model performance are positively correlated, so compression ratio can be used as a general metric to evaluate large language models.
arXiv Detail & Related papers (2024-06-20T10:23:38Z) - Information-Theoretic Distillation for Reference-less Summarization [67.51150817011617]
We present a novel framework to distill a powerful summarizer based on the information-theoretic objective for summarization.
We start off from Pythia-2.8B as the teacher model, which is not yet capable of summarization.
We arrive at a compact but powerful summarizer with only 568M parameters that performs competitively against ChatGPT.
arXiv Detail & Related papers (2024-03-20T17:42:08Z) - Approximated Prompt Tuning for Vision-Language Pre-trained Models [54.326232586461614]
In vision-language pre-trained models, prompt tuning often requires a large number of learnable tokens to bridge the gap between the pre-training and downstream tasks.
We propose a novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer learning.
arXiv Detail & Related papers (2023-06-27T05:43:47Z) - Selective In-Context Data Augmentation for Intent Detection using
Pointwise V-Information [100.03188187735624]
We introduce a novel approach based on PLMs and pointwise V-information (PVI), a metric that can measure the usefulness of a datapoint for training a model.
Our method first fine-tunes a PLM on a small seed of training data and then synthesizes new datapoints - utterances that correspond to given intents.
Our method is thus able to leverage the expressive power of large language models to produce diverse training data.
arXiv Detail & Related papers (2023-02-10T07:37:49Z) - Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language
Models [107.05966685291067]
We propose test-time prompt tuning (TPT) to learn adaptive prompts on the fly with a single test sample.
TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average.
In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data.
arXiv Detail & Related papers (2022-09-15T17:55:11Z) - End-to-end Learning of Compressible Features [35.40108701875527]
Pre-trained convolutional neural networks (CNNs) are powerful off-the-shelf feature generators and have been shown to perform very well on a variety of tasks.
Unfortunately, the generated features are high dimensional and expensive to store.
We propose a learned method that jointly optimizes for compressibility along with the task objective.
arXiv Detail & Related papers (2020-07-23T05:17:33Z)