Jointly Training Large Autoregressive Multimodal Models
- URL: http://arxiv.org/abs/2309.15564v2
- Date: Thu, 28 Sep 2023 17:23:11 GMT
- Title: Jointly Training Large Autoregressive Multimodal Models
- Authors: Emanuele Aiello, Lili Yu, Yixin Nie, Armen Aghajanyan, Barlas Oguz
- Abstract summary: We present the Joint Autoregressive Mixture (JAM) framework, a modular approach that systematically fuses existing text and image generation models.
We also introduce a specialized, data-efficient instruction-tuning strategy, tailored for mixed-modal generation tasks.
Our final instruct-tuned model demonstrates unparalleled performance in generating high-quality multimodal outputs.
- Score: 37.32912103934043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, advances in the large-scale pretraining of language and
text-to-image models have revolutionized the field of machine learning. Yet,
integrating these two modalities into a single, robust model capable of
generating seamless multimodal outputs remains a significant challenge. To
address this gap, we present the Joint Autoregressive Mixture (JAM) framework,
a modular approach that systematically fuses existing text and image generation
models. We also introduce a specialized, data-efficient instruction-tuning
strategy, tailored for mixed-modal generation tasks. Our final instruct-tuned
model demonstrates unparalleled performance in generating high-quality
multimodal outputs and represents the first model explicitly designed for this
purpose.
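As a rough illustration of the fusion idea (a sketch under assumptions, not the paper's published recipe), the snippet below averages the parameters of two decoder-only checkpoints that share an architecture; the toy `nn.Linear` stand-ins and the mixing weight `alpha` are hypothetical.
```python
# Minimal sketch: fusing two same-architecture autoregressive models by
# parameter averaging. Illustrative only; the toy modules and the mixing
# weight `alpha` are assumptions, not the JAM paper's exact procedure.
import torch
import torch.nn as nn

def average_state_dicts(sd_text, sd_image, alpha=0.5):
    """Convex combination of two state dicts with matching keys and shapes."""
    assert sd_text.keys() == sd_image.keys()
    return {k: alpha * sd_text[k] + (1 - alpha) * sd_image[k] for k in sd_text}

# Stand-ins for a text LM and an image-text LM with identical architecture.
text_model, image_model = nn.Linear(16, 16), nn.Linear(16, 16)

fused = nn.Linear(16, 16)
fused.load_state_dict(average_state_dicts(text_model.state_dict(),
                                          image_model.state_dict()))
```
Parameter averaging only makes sense when the two checkpoints share an architecture and, ideally, an initialization; treat it as the simplest baseline for the kind of systematic fusion the paper studies.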
Related papers
- Learning Multimodal Latent Generative Models with Energy-Based Prior [3.6648642834198797]
We propose a novel framework that integrates a latent generative model with an energy-based model (EBM).
This approach results in a more expressive and informative prior, better capturing information across multiple modalities.
arXiv Detail & Related papers (2024-09-30T01:38:26Z)
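To make the energy-based prior concrete, here is a generic Langevin sampler for a prior p(z) ∝ exp(-E(z)). This is a textbook construction, not the paper's sampler; the step count, step size, and toy energy are assumptions.
```python
# Generic unadjusted Langevin dynamics for sampling a latent z from an
# energy-based prior p(z) ∝ exp(-E(z)). Textbook construction; the
# paper's actual energy network and sampler may differ.
import torch

def langevin_sample(energy_fn, z_init, steps=60, step_size=0.05):
    z = z_init.clone()
    for _ in range(steps):
        z.requires_grad_(True)
        grad = torch.autograd.grad(energy_fn(z).sum(), z)[0]
        z = (z.detach() - 0.5 * step_size * grad
             + step_size ** 0.5 * torch.randn_like(z))
    return z.detach()

# Toy quadratic energy, under which the prior is a standard Gaussian.
samples = langevin_sample(lambda z: 0.5 * (z ** 2).sum(dim=-1),
                          torch.randn(8, 4))
```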
- Multi-Modal Generative AI: Multi-modal LLM, Diffusion and Beyond [48.43910061720815]
Multi-modal generative AI has received increasing attention in both academia and industry.
One natural question arises: Is it possible to have a unified model for both understanding and generation?
arXiv Detail & Related papers (2024-09-23T13:16:09Z)
- Efficient Multi-Task Large Model Training via Data Heterogeneity-aware Model Management [35.06717005729781]
Recent foundation models can handle multiple machine learning (ML) tasks and multiple data modalities with a unified base-model structure and several specialized model components.
Development of such multi-task (MT) multi-modal (MM) models poses significant model management challenges to existing training systems.
We build a prototype system and evaluate it on various large MT MM models.
Experiments demonstrate the superior performance and efficiency of our system, with a speedup of up to 71% compared to state-of-the-art training systems.
arXiv Detail & Related papers (2024-09-05T09:10:40Z)
- ANOLE: An Open, Autoregressive, Native Large Multimodal Models for Interleaved Image-Text Generation [27.773146599559286]
Anole is an open, autoregressive, native large multimodal model for interleaved image-text generation.
We have open-sourced our model, training framework, and instruction tuning data.
arXiv Detail & Related papers (2024-07-08T17:08:02Z)
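The control flow behind interleaved image-text decoding can be sketched as follows. Everything here is a hypothetical placeholder (the `<boi>`/`<eoi>` markers, the fixed image block length, and `model.next_token`), not Anole's actual API.
```python
# Hypothetical control flow for interleaved image-text decoding from a
# single autoregressive token stream. BOI/EOI markers, the image block
# length, and `model.next_token` are illustrative assumptions.
BOI, EOI = "<boi>", "<eoi>"   # begin/end-of-image markers
IMAGE_BLOCK = 1024            # e.g. a 32x32 grid of VQ codes per image

def decode_interleaved(model, prompt_tokens, max_len=4096):
    out = list(prompt_tokens)
    while len(out) < max_len:
        tok = model.next_token(out)  # assumed sampling helper
        out.append(tok)
        if tok == BOI:
            # Inside an image span: emit a fixed-size block of image
            # codes, then close the span so text decoding resumes.
            for _ in range(IMAGE_BLOCK):
                out.append(model.next_token(out))
            out.append(EOI)
        elif tok == "<eos>":
            break
    return out
```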
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, requiring no data or additional training, while still delivering impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
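The sketch below is one reading of the elect/mask/rescale idea over "task vectors" (finetuned minus pretrained weights): elect a per-entry sign, mask each task to the entries that agree, and compute a per-task rescaler. It is an approximation; consult the paper for the exact definitions.
```python
# One reading of elect/mask/rescale merging over task vectors
# (finetuned minus pretrained parameters, given here as flat tensors).
# Approximate; see the paper for the exact definitions.
import torch

def emr_style_merge(base, finetuned):
    taus = torch.stack([ft - base for ft in finetuned])   # [num_tasks, dim]
    sign = torch.sign(taus.sum(dim=0))                    # elected direction
    agree = torch.sign(taus) == sign                      # per-task masks
    # Unified vector: elected sign times the largest agreeing magnitude.
    unified = sign * (taus.abs() * agree).amax(dim=0)
    # Per-task rescaler matching each task's overall scale.
    scales = [t.abs().sum() / (unified.abs() * m).sum().clamp_min(1e-8)
              for t, m in zip(taus, agree)]
    return unified, agree, scales

# Per-task weights would then be recovered as:
#   base + scales[t] * (agree[t] * unified)
```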
- Unified Discrete Diffusion for Simultaneous Vision-Language Generation [78.21352271140472]
We present a unified multimodal generation model that can conduct both the "modality translation" and "multi-modality generation" tasks.
Specifically, we unify the discrete diffusion process for multimodal signals by proposing a unified transition matrix.
Our proposed method can perform comparably to the state-of-the-art solutions in various generation tasks.
arXiv Detail & Related papers (2022-11-27T14:46:01Z)
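In discrete diffusion, a transition matrix specifies how each token index corrupts at one noising step. Below is the standard uniform-resampling construction for a single vocabulary (D3PM-style); the paper's unified matrix, which spans text and image token spaces, is not reproduced here.
```python
# Standard one-step transition matrix for discrete diffusion over K
# token indices: keep the token with prob (1 - beta), otherwise resample
# uniformly. Single-vocabulary case only; the paper's unified matrix
# coupling text and image codebooks is not reproduced here.
import torch

def uniform_transition(K: int, beta: float) -> torch.Tensor:
    return (1.0 - beta) * torch.eye(K) + beta * torch.full((K, K), 1.0 / K)

Q = uniform_transition(K=8, beta=0.1)
assert torch.allclose(Q.sum(dim=1), torch.ones(8))  # rows are distributions
```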
- Improving Non-autoregressive Generation with Mixup Training [51.61038444990301]
We present a non-autoregressive generation model based on pre-trained transformer models.
We propose a simple and effective iterative training method called MIx Source and pseudo Target (MIST).
Our experiments on three generation benchmarks, including question generation, summarization, and paraphrase generation, show that the proposed framework achieves new state-of-the-art results.
arXiv Detail & Related papers (2021-10-21T13:04:21Z) - InterBERT: Vision-and-Language Interaction for Multi-modal Pretraining [76.32065400614162]
We propose a novel model, InterBERT (BERT for Interaction), the first model in our M6 series of multimodal pretraining methods.
The model has a strong capability for modeling the interaction between the information flows of different modalities.
We propose a large-scale dataset for multi-modal pretraining in Chinese, and develop Chinese InterBERT, the first Chinese multi-modal pretrained model.
arXiv Detail & Related papers (2020-03-30T03:13:22Z)