MultiMAE: Multi-modal Multi-task Masked Autoencoders
- URL: http://arxiv.org/abs/2204.01678v1
- Date: Mon, 4 Apr 2022 17:50:41 GMT
- Title: MultiMAE: Multi-modal Multi-task Masked Autoencoders
- Authors: Roman Bachmann, David Mizrahi, Andrei Atanov, Amir Zamir
- Abstract summary: We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders (MultiMAE).
We show this pre-training strategy leads to a flexible, simple, and efficient framework with improved transfer results to downstream tasks.
- Score: 2.6763498831034043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a pre-training strategy called Multi-modal Multi-task Masked
Autoencoders (MultiMAE). It differs from standard Masked Autoencoding in two
key aspects: I) it can optionally accept additional modalities of information
in the input besides the RGB image (hence "multi-modal"), and II) its training
objective accordingly includes predicting multiple outputs besides the RGB
image (hence "multi-task").
We make use of masking (across image patches and input modalities) to make
training MultiMAE tractable as well as to ensure cross-modality predictive
coding is indeed learned by the network. We show this pre-training strategy
leads to a flexible, simple, and efficient framework with improved transfer
results to downstream tasks. In particular, the exact same pre-trained network can be flexibly used when additional information besides RGB images is available or when only RGB is available, and in all configurations it yields results competitive with or significantly better than the baselines. To avoid needing training datasets with multiple modalities and
tasks, we train MultiMAE entirely using pseudo labeling, which makes the
framework widely applicable to any RGB dataset.
The experiments are performed on multiple transfer tasks (image
classification, semantic segmentation, depth estimation) and datasets
(ImageNet, ADE20K, Taskonomy, Hypersim, NYUv2). The results show an intriguingly impressive capability of the model in cross-modal/task predictive
coding and transfer.
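For intuition, a minimal PyTorch sketch of the masking-and-reconstruction idea described above is given below. It is not the authors' released implementation: the class name MultiModalMAE, the fixed 768-dimensional patch inputs, the single-linear patch projections and decoder heads, and the mean-pooled decoding queries are simplifications assumed here for brevity; the depth and semantic-segmentation inputs could come from off-the-shelf pseudo labelers, as the abstract notes.

```python
# Minimal sketch of multi-modal, multi-task masked autoencoding (not the
# official MultiMAE code). Assumes each modality arrives pre-patchified as
# (batch, num_patches, 768) tensors; depth and semseg may be pseudo labels.
import torch
import torch.nn as nn


class MultiModalMAE(nn.Module):
    def __init__(self, dim=256, num_patches=196,
                 modalities=("rgb", "depth", "semseg")):
        super().__init__()
        self.modalities = modalities
        # One patch projection per input modality ("multi-modal" input).
        self.patch_embed = nn.ModuleDict(
            {m: nn.Linear(768, dim) for m in modalities})
        # Shared Transformer encoder over the visible tokens of all modalities.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        # One shallow head per modality ("multi-task" reconstruction targets).
        self.decoders = nn.ModuleDict(
            {m: nn.Linear(dim, 768) for m in modalities})
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))

    def forward(self, inputs, keep_ratio=1 / 6):
        # inputs: dict mapping modality name -> (B, N, 768) patch tensor.
        visible = []
        for m in self.modalities:
            tokens = self.patch_embed[m](inputs[m]) + self.pos_embed
            B, N, D = tokens.shape
            # Keep only a random subset of patches per modality; masking
            # across modalities is what forces cross-modal predictive coding.
            num_keep = max(1, int(N * keep_ratio))
            ids_keep = torch.rand(B, N, device=tokens.device).argsort(dim=1)[:, :num_keep]
            visible.append(torch.gather(
                tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D)))
        latent = self.encoder(torch.cat(visible, dim=1))
        # Decode every modality from the shared latent (the real method uses
        # small per-task Transformer decoders; a global summary is used here).
        ctx = latent.mean(dim=1, keepdim=True)
        queries = self.mask_token + self.pos_embed + ctx  # (B, N, dim)
        return {m: self.decoders[m](queries) for m in self.modalities}


# Usage sketch: reconstruct all three modalities and sum per-patch L2 losses.
model = MultiModalMAE()
batch = {m: torch.randn(2, 196, 768) for m in ("rgb", "depth", "semseg")}
preds = model(batch)
loss = sum(nn.functional.mse_loss(preds[m], batch[m]) for m in preds)
```

In the paper itself, reconstruction losses are computed on masked patches with task-appropriate losses per modality, and each task has its own decoder; the sketch only conveys how masking across both patches and modalities couples the tasks.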
Related papers
- Adapting Segment Anything Model to Multi-modal Salient Object Detection with Semantic Feature Fusion Guidance [15.435695491233982]
We propose a novel framework to explore and exploit the powerful feature representation and zero-shot generalization ability of the Segment Anything Model (SAM) for multi-modal salient object detection (SOD).
We develop SAM with semantic feature fusion guidance (Sammese).
In the image encoder, a multi-modal adapter is proposed to adapt the single-modal SAM to multi-modal information; in the mask decoder, a semantic-geometric prompt-generation scheme is introduced.
arXiv Detail & Related papers (2024-08-27T13:47:31Z) - Large Language Models for Multimodal Deformable Image Registration [50.91473745610945]
We propose a novel coarse-to-fine MDIR framework, LLM-Morph, for aligning the deep features from different modal medical images.
Specifically, we first utilize a CNN encoder to extract deep visual features from cross-modal image pairs, then use a first adapter to adjust these tokens and LoRA in pre-trained LLMs to fine-tune their weights.
Finally, to align the tokens, we utilize four other adapters to transform the LLM-encoded tokens into multi-scale visual features, generating multi-scale deformation fields and facilitating the coarse-to-fine MDIR task.
arXiv Detail & Related papers (2024-08-20T09:58:30Z) - Instruction-Guided Visual Masking [25.26544571379426]
Instruction-guided Visual Masking (IVM) is a versatile visual grounding model that is compatible with diverse multimodal models.
IVM-enhanced multimodal models can effectively focus on task-relevant image regions to better align with complex instructions.
arXiv Detail & Related papers (2024-05-30T07:48:32Z) - Jack of All Tasks, Master of Many: Designing General-purpose Coarse-to-Fine Vision-Language Model [83.85856356798531]
VistaLLM is a visual system that addresses coarse- and fine-grained vision-language tasks.
It employs a gradient-aware adaptive sampling technique to represent binary segmentation masks as sequences.
We also introduce a novel task, AttCoSeg, which boosts the model's reasoning and grounding capability over multiple input images.
arXiv Detail & Related papers (2023-12-19T18:53:01Z) - Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis [52.41439725865149]
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones.
Existing (supervised learning) methods often require a large amount of paired multi-modal data to train an effective synthesis model.
We propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis.
arXiv Detail & Related papers (2022-12-02T11:40:40Z) - Multimodal Masked Autoencoders Learn Transferable Representations [127.35955819874063]
We propose a simple and scalable network architecture, the Multimodal Masked Autoencoder (M3AE).
M3AE learns a unified encoder for both vision and language data via masked token prediction.
We provide an empirical study of M3AE trained on a large-scale image-text dataset, and find that M3AE is able to learn generalizable representations that transfer well to downstream tasks.
arXiv Detail & Related papers (2022-05-27T19:09:42Z) - UFO: A UniFied TransfOrmer for Vision-Language Representation Learning [54.82482779792115]
We propose a single UniFied transfOrmer (UFO) capable of processing either unimodal inputs (e.g., image or language) or multimodal inputs (e.g., the concatenation of the image and the question) for vision-language (VL) representation learning.
Existing approaches typically design an individual network for each modality and/or a specific fusion network for multimodal tasks.
arXiv Detail & Related papers (2021-11-19T03:23:10Z) - Self-Supervised Representation Learning for RGB-D Salient Object Detection [93.17479956795862]
We use self-supervised representation learning to design two pretext tasks: cross-modal auto-encoding and depth-contour estimation.
Our pretext tasks require only a few unlabeled RGB-D datasets for pre-training, which helps the network capture rich semantic contexts.
For the inherent problem of cross-modal fusion in RGB-D SOD, we propose a multi-path fusion module.
arXiv Detail & Related papers (2021-01-29T09:16:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.