A Comprehensive Survey on Pretrained Foundation Models: A History from
BERT to ChatGPT
- URL: http://arxiv.org/abs/2302.09419v3
- Date: Mon, 1 May 2023 07:48:05 GMT
- Title: A Comprehensive Survey on Pretrained Foundation Models: A History from
BERT to ChatGPT
- Authors: Ce Zhou (1), Qian Li (2), Chen Li (2), Jun Yu (3), Yixin Liu (3),
Guangjing Wang (1), Kai Zhang (3), Cheng Ji (2), Qiben Yan (1), Lifang He
(3), Hao Peng (2), Jianxin Li (2), Jia Wu (4), Ziwei Liu (5), Pengtao Xie
(6), Caiming Xiong (7), Jian Pei (8), Philip S. Yu (9), Lichao Sun (3) ((1)
Michigan State University, (2) Beihang University, (3) Lehigh University, (4)
Macquarie University, (5) Nanyang Technological University, (6) University of
California San Diego, (7) Salesforce AI Research, (8) Duke University, (9)
University of Illinois at Chicago)
- Abstract summary: Pretrained Foundation Models (PFMs) are regarded as the foundation for various downstream tasks with different data modalities.
This study provides a comprehensive review of recent research advancements, challenges, and opportunities for PFMs in text, image, graph, and other data modalities.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pretrained Foundation Models (PFMs) are regarded as the foundation for
various downstream tasks with different data modalities. A PFM (e.g., BERT,
ChatGPT, and GPT-4) is trained on large-scale data which provides a reasonable
parameter initialization for a wide range of downstream applications. BERT
learns bidirectional encoder representations from Transformers, which are
trained on large datasets as contextual language models. Similarly, the
generative pretrained transformer (GPT) method employs Transformers as the
feature extractor and is trained using an autoregressive paradigm on large
datasets. Recently, ChatGPT has shown promising success as a large language model, applying autoregressive language modeling with zero-shot or few-shot prompting. The remarkable achievements of PFMs have brought significant
breakthroughs to various fields of AI. Numerous studies have proposed different
methods, raising the demand for an updated survey. This study provides a
comprehensive review of recent research advancements, challenges, and
opportunities for PFMs in text, image, graph, and other data modalities.
The review covers the basic components and existing pretraining methods used in
natural language processing, computer vision, and graph learning. Additionally,
it explores advanced PFMs used for different data modalities and unified PFMs
that consider data quality and quantity. The review also discusses research
related to the fundamentals of PFMs, such as model efficiency and compression,
security, and privacy. Finally, the study provides key implications, future
research directions, challenges, and open problems in the field of PFMs.
Overall, this survey aims to shed light on PFM research concerning scalability, security, logical reasoning ability, cross-domain learning ability, and user-friendly interactive ability toward artificial general intelligence.
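As context for the two pretraining paradigms the abstract contrasts (BERT-style bidirectional masked language modeling versus GPT-style autoregressive modeling with zero-shot or few-shot prompting), below is a minimal sketch assuming the Hugging Face transformers library and PyTorch; the checkpoints bert-base-uncased and gpt2 and the toy prompts are illustrative assumptions, not models or examples taken from the survey.

```python
# Minimal sketch of the two pretraining paradigms named in the abstract.
# Assumes Hugging Face `transformers` and PyTorch; the checkpoints and prompts
# below are illustrative stand-ins, not the survey's own models or data.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, AutoModelForCausalLM

# --- BERT-style masked LM: predict a hidden token from bidirectional context ---
bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
inputs = bert_tok(
    "Pretrained foundation models are the [MASK] for downstream tasks.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = bert(**inputs).logits
mask_pos = (inputs.input_ids == bert_tok.mask_token_id).nonzero(as_tuple=True)[1]
print("BERT fills [MASK] with:", bert_tok.decode(logits[0, mask_pos].argmax(dim=-1)))

# --- GPT-style autoregressive LM: continue a few-shot prompt token by token ---
gpt_tok = AutoTokenizer.from_pretrained("gpt2")
gpt = AutoModelForCausalLM.from_pretrained("gpt2")
few_shot_prompt = (
    "Review: great movie -> positive\n"
    "Review: boring plot -> negative\n"
    "Review: wonderful acting ->"
)
prompt_ids = gpt_tok(few_shot_prompt, return_tensors="pt").input_ids
out = gpt.generate(
    prompt_ids, max_new_tokens=3, do_sample=False,
    pad_token_id=gpt_tok.eos_token_id,
)
print("GPT continues the prompt with:", gpt_tok.decode(out[0, prompt_ids.shape[1]:]))
```

The masked-LM head conditions on both left and right context, whereas the causal LM continues the prompt strictly left to right; the few-shot prompt supplies in-context examples instead of gradient updates, which is the prompting behavior the abstract attributes to ChatGPT.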
Related papers
- Language Modeling on Tabular Data: A Survey of Foundations, Techniques and Evolution [7.681258910515419]
Tabular data presents unique challenges due to its heterogeneous nature and complex structural relationships.
High predictive performance and robustness in tabular data analysis hold significant promise for numerous applications.
The recent advent of large language models, such as GPT and LLaMA, has further revolutionized the field, facilitating more advanced and diverse applications with minimal fine-tuning.
arXiv Detail & Related papers (2024-08-20T04:59:19Z)
- Synergizing Foundation Models and Federated Learning: A Survey [23.416321895575507]
This paper discusses the potential and challenges of synergizing Federated Learning (FL) and Foundation Models (FM).
FL is a collaborative learning paradigm that breaks the barrier of data availability from different participants.
It provides a promising solution to customize and adapt FMs to a wide range of domain-specific tasks using distributed datasets whilst preserving privacy.
arXiv Detail & Related papers (2024-06-18T17:58:09Z)
- Progress and Opportunities of Foundation Models in Bioinformatics [77.74411726471439]
Foundation models (FMs) have ushered in a new era in computational biology, especially in the realm of deep learning.
Central to our focus is the application of FMs to specific biological problems, aiming to guide the research community in choosing appropriate FMs for their research needs.
The review analyses challenges and limitations faced by FMs in biology, such as data noise, model explainability, and potential biases.
arXiv Detail & Related papers (2024-02-06T02:29:17Z)
- Few-shot learning for automated content analysis: Efficient coding of
arguments and claims in the debate on arms deliveries to Ukraine [0.9576975587953563]
Pre-trained language models (PLMs) based on transformer neural networks offer great opportunities to improve automatic content analysis in communication science.
Three characteristics have so far impeded the widespread adoption of these methods in the disciplines that apply them: the dominance of English language models in NLP research, the necessary computing resources, and the effort required to produce training data to fine-tune PLMs.
We test our approach on a realistic use case from communication science to automatically detect claims and arguments together with their stance in the German news debate on arms deliveries to Ukraine.
arXiv Detail & Related papers (2023-12-28T11:39:08Z)
- Contrastive Transformer Learning with Proximity Data Generation for
Text-Based Person Search [60.626459715780605]
Given a descriptive text query, text-based person search aims to retrieve the best-matched target person from an image gallery.
Such a cross-modal retrieval task is quite challenging due to the significant modality gap, fine-grained differences, and the insufficiency of annotated data.
In this paper, we propose a simple yet effective dual Transformer model for text-based person search.
arXiv Detail & Related papers (2023-11-15T16:26:49Z)
- Learn From Model Beyond Fine-Tuning: A Survey [78.80920533793595]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z)
- Large Language Model as Attributed Training Data Generator: A Tale of
Diversity and Bias [92.41919689753051]
Large language models (LLMs) have been recently leveraged as training data generators for various natural language processing (NLP) tasks.
We investigate training data generation with diversely attributed prompts, which have the potential to yield diverse and attributed generated data.
We show that attributed prompts outperform simple class-conditional prompts in terms of the resulting model's performance.
arXiv Detail & Related papers (2023-06-28T03:31:31Z)
- Evaluating Prompt-based Question Answering for Object Prediction in the
Open Research Knowledge Graph [0.0]
This work reports results on adopting prompt-based training of transformers for scholarly knowledge graph object prediction.
It deviates from the other works proposing entity and relation extraction pipelines for predicting objects of a scholarly knowledge graph.
We find that (i) as expected, transformer models tested out-of-the-box underperform on a new domain of data, and (ii) prompt-based training of the models achieves performance boosts of up to 40% in a relaxed evaluation setting.
arXiv Detail & Related papers (2023-05-22T10:35:18Z)
- Pre-Trained Models: Past, Present and Future [126.21572378910746]
Large-scale pre-trained models (PTMs) have recently achieved great success and become a milestone in the field of artificial intelligence (AI).
By storing knowledge in huge parameters and fine-tuning on specific tasks, PTMs allow the rich knowledge implicitly encoded in their parameters to benefit a variety of downstream tasks.
It is now the consensus of the AI community to adopt PTMs as the backbone for downstream tasks rather than learning models from scratch.
arXiv Detail & Related papers (2021-06-14T02:40:32Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We further apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.