Multi-Modal Fusion by Meta-Initialization
- URL: http://arxiv.org/abs/2210.04843v1
- Date: Mon, 10 Oct 2022 17:00:58 GMT
- Title: Multi-Modal Fusion by Meta-Initialization
- Authors: Matthew T. Jackson, Shreshth A. Malik, Michael T. Matthews, Yousuf Mohamed-Ahmed
- Abstract summary: We propose an extension to the Model-Agnostic Meta-Learning algorithm (MAML).
This allows the model to adapt using auxiliary information as well as task experience.
FuMI significantly outperforms uni-modal baselines such as MAML in the few-shot regime.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When experience is scarce, models may have insufficient information to adapt
to a new task. In this case, auxiliary information - such as a textual
description of the task - can enable improved task inference and adaptation. In
this work, we propose an extension to the Model-Agnostic Meta-Learning
algorithm (MAML), which allows the model to adapt using auxiliary information
as well as task experience. Our method, Fusion by Meta-Initialization (FuMI),
conditions the model initialization on auxiliary information using a
hypernetwork, rather than learning a single, task-agnostic initialization.
Furthermore, motivated by the shortcomings of existing multi-modal few-shot
learning benchmarks, we constructed iNat-Anim - a large-scale image
classification dataset with succinct and visually pertinent textual class
descriptions. On iNat-Anim, FuMI significantly outperforms uni-modal baselines
such as MAML in the few-shot regime. The code for this project and a dataset
exploration tool for iNat-Anim are publicly available at
https://github.com/s-a-malik/multi-few .
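As a rough illustration of the approach described in the abstract, the sketch below contrasts MAML's single, task-agnostic initialization with a hypernetwork that generates the classifier initialization from per-class text embeddings, followed by a few inner-loop gradient steps on the support set. This is a minimal PyTorch sketch under assumed shapes and module choices: the stand-in `image_encoder`, `hypernet`, dimensions, and learning rates are illustrative, not the released implementation in the repository above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, TEXT_DIM, N_WAY, K_SHOT = 64, 32, 5, 5
INNER_LR, INNER_STEPS = 0.1, 3

# Stand-in modules (assumptions): a real system would use a CNN image backbone
# and a pretrained text encoder for the class descriptions.
image_encoder = nn.Sequential(nn.Linear(128, FEAT_DIM), nn.ReLU())
hypernet = nn.Linear(TEXT_DIM, FEAT_DIM)  # maps a class-description embedding to a weight row


def adapt_and_predict(support_x, support_y, query_x, class_text_emb):
    """Hypernetwork-conditioned initialization followed by MAML-style inner-loop adaptation."""
    # 1) Condition the classifier initialization on auxiliary (textual) information,
    #    instead of learning one task-agnostic initialization as in plain MAML.
    w = hypernet(class_text_emb)                      # (N_WAY, FEAT_DIM) initial head weights

    # 2) Inner loop: adapt the head on the support set. Updates are kept
    #    differentiable so the outer loop can backprop through them.
    for _ in range(INNER_STEPS):
        logits = image_encoder(support_x) @ w.t()
        loss = F.cross_entropy(logits, support_y)
        (grad,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - INNER_LR * grad

    # 3) Predict on the query set with the adapted head.
    return image_encoder(query_x) @ w.t()


# Toy usage: one 5-way, 5-shot episode on random data.
support_x = torch.randn(N_WAY * K_SHOT, 128)
support_y = torch.arange(N_WAY).repeat_interleave(K_SHOT)
query_x = torch.randn(N_WAY * 15, 128)
query_y = torch.arange(N_WAY).repeat_interleave(15)
class_text_emb = torch.randn(N_WAY, TEXT_DIM)         # e.g. encoded class descriptions

query_logits = adapt_and_predict(support_x, support_y, query_x, class_text_emb)
meta_loss = F.cross_entropy(query_logits, query_y)
meta_loss.backward()  # outer-loop gradients reach both the hypernetwork and the encoder
```

At meta-training time the outer loop would update the hypernetwork and encoder across many such episodes; at test time the text-conditioned initialization is refined with only a few labelled examples, which is the few-shot setting the abstract targets.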
Related papers
- Membership Inference Attacks against Large Vision-Language Models [40.996912464828696]
Large vision-language models (VLLMs) exhibit promising capabilities for processing multi-modal tasks across various application scenarios.
Their emergence also raises significant data security concerns, given the potential inclusion of sensitive information, such as private photos and medical records.
Detecting inappropriately used data in VLLMs remains a critical and unresolved issue.
arXiv Detail & Related papers (2024-11-05T08:35:08Z)
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- RA-BLIP: Multimodal Adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training [55.54020926284334]
Multimodal Large Language Models (MLLMs) have recently received substantial interest, reflecting their emerging potential as general-purpose models for various vision-language tasks.
Retrieval augmentation techniques have proven to be effective plugins for both LLMs and MLLMs.
In this study, we propose multimodal adaptive Retrieval-Augmented Bootstrapping Language-Image Pre-training (RA-BLIP), a novel retrieval-augmented framework for various MLLMs.
arXiv Detail & Related papers (2024-10-18T03:45:19Z)
- Improve Meta-learning for Few-Shot Text Classification with All You Can Acquire from the Tasks [10.556477506959888]
Existing methods often encounter difficulties in drawing accurate class prototypes from support set samples.
Recent approaches attempt to incorporate external knowledge or pre-trained language models to augment data, but this requires additional resources.
We propose a novel solution by adequately leveraging the information within the task itself.
arXiv Detail & Related papers (2024-10-14T12:47:11Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Fine-Grained Scene Image Classification with Modality-Agnostic Adapter [8.801601759337006]
We present a new multi-modal feature fusion approach named MAA (Modality-Agnostic Adapter).
We eliminate the modal differences in distribution and then use a modality-agnostic Transformer encoder for a semantic-level feature fusion.
Our experiments demonstrate that MAA achieves state-of-the-art results on benchmarks while using the same modalities as previous methods.
arXiv Detail & Related papers (2024-07-03T02:57:14Z)
- 3FM: Multi-modal Meta-learning for Federated Tasks [2.117841684082203]
We introduce a meta-learning framework specifically designed for multimodal federated tasks.
Our approach is motivated by the need to enable federated models to robustly adapt when exposed to new modalities.
We demonstrate that the proposed algorithm achieves better performance than the baseline on a subset of missing modality scenarios.
arXiv Detail & Related papers (2023-12-15T20:03:24Z)
- Utilising a Large Language Model to Annotate Subject Metadata: A Case Study in an Australian National Research Data Catalogue [18.325675189960833]
In support of open and reproducible research, the number of publicly available datasets has been increasing rapidly.
As the availability of datasets increases, it becomes more important to have quality metadata for discovering and reusing them.
This paper proposes to leverage large language models (LLMs) for cost-effective annotation of subject metadata through LLM-based in-context learning.
arXiv Detail & Related papers (2023-10-17T14:52:33Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation [87.98063273826702]
We propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation.
A theoretical analysis is provided to prove the effectiveness of our method.
arXiv Detail & Related papers (2022-03-22T12:41:55Z)
- Single-Modal Entropy based Active Learning for Visual Question Answering [75.1682163844354]
We address Active Learning in the multi-modal setting of Visual Question Answering (VQA).
In light of the multi-modal inputs, image and question, we propose a novel method for effective sample acquisition.
Our novel idea is simple to implement, cost-efficient, and readily adaptable to other multi-modal tasks (a minimal sketch of entropy-based acquisition is given below).
arXiv Detail & Related papers (2021-10-21T05:38:45Z)
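As one concrete way to read the single-modal entropy idea in the entry above, the sketch below scores each unlabelled example by the predictive entropy of a model run on a single modality and selects the most uncertain ones for labelling. The model outputs, batch shapes, and labelling budget are illustrative assumptions, not that paper's exact acquisition procedure.

```python
import torch
import torch.nn.functional as F


def entropy_scores(logits: torch.Tensor) -> torch.Tensor:
    """Predictive entropy per example from a (batch, n_answers) logit tensor."""
    log_p = F.log_softmax(logits, dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1)


def select_for_labelling(logits: torch.Tensor, budget: int) -> torch.Tensor:
    """Indices of the `budget` most uncertain (highest-entropy) examples."""
    return torch.topk(entropy_scores(logits), k=budget).indices


# Toy usage: 1000 unlabelled examples, 100 candidate answers, label 16 of them.
unlabelled_logits = torch.randn(1000, 100)   # stand-in for single-modality model outputs
to_label = select_for_labelling(unlabelled_logits, budget=16)
```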
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.