FETA: Towards Specializing Foundation Models for Expert Task
Applications
- URL: http://arxiv.org/abs/2209.03648v1
- Date: Thu, 8 Sep 2022 08:47:57 GMT
- Title: FETA: Towards Specializing Foundation Models for Expert Task
Applications
- Authors: Amit Alfassy, Assaf Arbelle, Oshri Halimi, Sivan Harary, Roei Herzig,
Eli Schwartz, Rameswar Panda, Michele Dolfi, Christoph Auer, Kate Saenko,
Peter W. J. Staar, Rogerio Feris, Leonid Karlinsky
- Abstract summary: Foundation Models (FMs) have demonstrated unprecedented capabilities including zero-shot learning, high-fidelity data synthesis, and out-of-domain generalization.
We show in this paper that FMs still have poor out-of-the-box performance on expert tasks.
We propose FETA, a first-of-its-kind benchmark built around the task of teaching FMs to understand technical documentation.
- Score: 49.57393504125937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation Models (FMs) have demonstrated unprecedented capabilities
including zero-shot learning, high-fidelity data synthesis, and out-of-domain
generalization. However, as we show in this paper, FMs still have poor
out-of-the-box performance on expert tasks (e.g., retrieving technical
illustrations from car manuals given language queries), whose data is either
unseen or belongs to a long-tail part of the data distribution of the huge
datasets used for FM pre-training. This underlines the necessity of explicitly
evaluating and finetuning FMs on such expert tasks, arguably the ones that
appear most often in practical real-world applications. In this paper, we
propose FETA, a first-of-its-kind benchmark built around the task of teaching
FMs to understand technical documentation by learning to match graphical
illustrations to their corresponding language descriptions. Our FETA benchmark
focuses on text-to-image and image-to-text retrieval in public car manuals and
sales catalogue brochures. FETA is equipped with a procedure for fully
automatic annotation extraction (code to be released upon acceptance), allowing
easy extension of FETA to more documentation types and application domains in
the future. Our automatic annotation leads to an automated performance metric
shown to be consistent with metrics computed on human-curated annotations (also
released). We provide multiple baselines and an analysis of popular FMs on
FETA, leading to several interesting findings that we believe will be valuable
to the FM community, paving the way towards real-world application of FMs for
practical expert tasks currently 'overlooked' by standard benchmarks focusing
on common objects.
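The benchmark's core task, matching graphical illustrations to language descriptions via text-to-image retrieval, can be sketched as a recall@k computation over cosine similarities of paired text and image embeddings. This is a minimal illustrative sketch, not the paper's implementation: the function name, toy data, and 2-D embedding space are assumptions for demonstration only.

```python
import numpy as np

def recall_at_k(text_emb, image_emb, k=1):
    """Text-to-image retrieval: for each text query (row i), check whether
    the matching image (same row index i) appears among the top-k most
    similar images under cosine similarity."""
    # L2-normalize so the dot product equals cosine similarity.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    sim = t @ v.T  # shape: (num_texts, num_images)
    # Rank image indices for each text query, highest similarity first.
    ranks = np.argsort(-sim, axis=1)
    # A "hit" means the paired image is within the first k ranked indices.
    hits = (ranks[:, :k] == np.arange(len(t))[:, None]).any(axis=1)
    return hits.mean()

# Toy example: 3 text/image pairs embedded in a 2-D space.
texts = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
images = np.array([[0.9, 0.1], [0.1, 0.9], [0.7, 0.8]])
print(recall_at_k(texts, images, k=1))  # 1.0: each query's own image ranks first
```

Image-to-text retrieval is the symmetric computation with the roles of the two embedding matrices swapped.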
Related papers
- Instruct and Extract: Instruction Tuning for On-Demand Information Extraction [86.29491354355356]
On-Demand Information Extraction aims to fulfill the personalized demands of real-world users.
We present a benchmark named InstructIE, including both automatically generated training data and a human-annotated test set.
Building on InstructIE, we further develop an On-Demand Information Extractor, ODIE.
arXiv Detail & Related papers (2023-10-24T17:54:25Z)
- Learn From Model Beyond Fine-Tuning: A Survey [78.80920533793595]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z)
- Leveraging Contextual Information for Effective Entity Salience Detection [21.30389576465761]
We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches.
We also show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task's uniqueness and complexity.
arXiv Detail & Related papers (2023-09-14T19:04:40Z)
- VideoGLUE: Video General Understanding Evaluation of Foundation Models [90.54934154766585]
We evaluate existing foundation models' video understanding capabilities using a carefully designed experiment.
We propose a VideoGLUE score (VGS) to measure an FM's efficacy and efficiency when adapting to general video understanding tasks.
arXiv Detail & Related papers (2023-07-06T17:47:52Z)
- Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts [91.3755431537592]
The massive collection of user posts across social media platforms is primarily untapped for artificial intelligence (AI) use cases.
Natural language processing (NLP) is a subfield of AI that leverages bodies of documents, known as corpora, to train computers in human-like language understanding.
This study demonstrates that the applied results of unsupervised analysis allow a computer to predict either negative, positive, or neutral user sentiment towards plastic surgery.
arXiv Detail & Related papers (2023-07-05T20:16:20Z)
- Text2Seg: Remote Sensing Image Semantic Segmentation via Text-Guided Visual Foundation Models [5.360103006279672]
This study focuses on the remote sensing domain, where the images are notably dissimilar from those in conventional scenarios.
We developed a pipeline that leverages multiple foundation models to facilitate remote sensing image semantic segmentation tasks guided by text prompts.
The pipeline is benchmarked on several widely-used remote sensing datasets, and we present preliminary results to demonstrate its effectiveness.
arXiv Detail & Related papers (2023-04-20T18:39:41Z)
- Modeling Entities as Semantic Points for Visual Information Extraction in the Wild [55.91783742370978]
We propose an alternative approach to precisely and robustly extract key information from document images.
We explicitly model entities as semantic points, i.e., center points of entities are enriched with semantic information describing the attributes and relationships of different entities.
The proposed method can achieve significantly enhanced performance on entity labeling and linking, compared with previous state-of-the-art models.
arXiv Detail & Related papers (2023-03-23T08:21:16Z)
- Multi-Modal Fusion by Meta-Initialization [0.0]
We propose FuMI, an extension to the Model-Agnostic Meta-Learning (MAML) algorithm that allows the model to adapt using auxiliary information as well as task experience.
FuMI significantly outperforms uni-modal baselines such as MAML in the few-shot regime.
arXiv Detail & Related papers (2022-10-10T17:00:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.