FETA: Towards Specializing Foundation Models for Expert Task
Applications
- URL: http://arxiv.org/abs/2209.03648v1
- Date: Thu, 8 Sep 2022 08:47:57 GMT
- Title: FETA: Towards Specializing Foundation Models for Expert Task
Applications
- Authors: Amit Alfassy, Assaf Arbelle, Oshri Halimi, Sivan Harary, Roei Herzig,
Eli Schwartz, Rameswar Panda, Michele Dolfi, Christoph Auer, Kate Saenko,
Peter W. J. Staar, Rogerio Feris, Leonid Karlinsky
- Abstract summary: Foundation Models (FMs) have demonstrated unprecedented capabilities including zero-shot learning, high-fidelity data synthesis, and out-of-domain generalization.
We show in this paper that FMs still have poor out-of-the-box performance on expert tasks.
We propose FETA, a first-of-its-kind benchmark built around the task of teaching FMs to understand technical documentation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Foundation Models (FMs) have demonstrated unprecedented capabilities
including zero-shot learning, high-fidelity data synthesis, and out-of-domain
generalization. However, as we show in this paper, FMs still have poor
out-of-the-box performance on expert tasks (e.g., retrieval of technical
illustrations from car manuals via language queries), whose data is either
unseen during pre-training or belongs to a long-tail part of the data
distribution of the huge datasets used for FM pre-training. This underlines the
necessity to explicitly evaluate and finetune FMs on such expert tasks,
arguably the ones that appear most often in practical real-world applications.
In this paper, we propose FETA, a first-of-its-kind benchmark built around the
task of teaching FMs to understand technical documentation, via learning to
match their graphical illustrations to corresponding language descriptions. Our
FETA benchmark focuses on text-to-image and image-to-text retrieval in public
car manuals and sales catalogue brochures. FETA is equipped with a procedure
for completely automatic annotation extraction (code to be released upon
acceptance), allowing easy extension of FETA to more documentation types and
application domains in the future. Our automatic annotation leads to an
automated performance metric shown to be consistent with metrics computed on
human-curated annotations (also released). We provide multiple baselines and
analysis of popular FMs on FETA, leading to several interesting findings that
we believe would be very valuable to the FM community, paving the way towards
real-world application of FMs for practical expert tasks currently 'overlooked'
by standard benchmarks focusing on common objects.
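The cross-modal retrieval setup FETA evaluates can be sketched as follows. This is a hypothetical, minimal illustration (not the paper's actual code or models): it assumes a CLIP-style dual encoder that embeds illustrations and their language descriptions into a shared space, uses toy random vectors as stand-ins for real embeddings, and scores the kind of recall@k metric typically reported for such retrieval benchmarks.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    """L2-normalize vectors along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy stand-ins: 4 manual illustrations and their matching captions.
# Text embeddings are slightly perturbed copies of the image embeddings,
# mimicking a well-aligned dual-encoder embedding space.
image_emb = normalize(rng.normal(size=(4, 8)))
text_emb = normalize(image_emb + 0.1 * rng.normal(size=(4, 8)))

def retrieve(query_emb, gallery_emb, k=1):
    """Return indices of the top-k gallery items by cosine similarity
    (both sides are L2-normalized, so a dot product suffices)."""
    sims = gallery_emb @ query_emb
    return np.argsort(-sims)[:k]

def recall_at_k(query_embs, gallery_embs, k=1):
    """Fraction of queries whose ground-truth item (same index) appears
    in the top-k retrieved results."""
    hits = sum(i in retrieve(q, gallery_embs, k) for i, q in enumerate(query_embs))
    return hits / len(query_embs)

# Text-to-image retrieval: each caption should retrieve its own illustration.
print(recall_at_k(text_emb, image_emb, k=1))
```

In the real benchmark the gallery would contain all illustrations from a manual and the embeddings would come from a (possibly finetuned) foundation model; the ranking and recall@k logic stays the same.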
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- Benchmarking Foundation Models on Exceptional Cases: Dataset Creation and Validation [11.562935582384098]
This paper develops a novel dataset for evaluation of FMs across multiple modalities, including graphic novels, calligraphy, news articles, and lyrics.
It includes tasks such as instance classification, character recognition, token prediction, and text generation.
The paper also proposes prompt engineering techniques like Chain-of-Thought (CoT) and CoT+Few-Shot to enhance performance.
arXiv Detail & Related papers (2024-10-23T16:24:23Z)
- Software Engineering and Foundation Models: Insights from Industry Blogs Using a Jury of Foundation Models [11.993910471523073]
We analyze 155 FM4SE and 997 SE4FM blog posts from leading technology companies.
We observed that while code generation is the most prominent FM4SE task, FMs are leveraged for many other SE activities.
Although the emphasis is on cloud deployments, there is a growing interest in compressing FMs and deploying them on smaller devices.
arXiv Detail & Related papers (2024-10-11T17:27:04Z)
- AutoFAIR: Automatic Data FAIRification via Machine Reading [28.683653852643015]
We propose AutoFAIR, an architecture designed to enhance data FAIRness automatically.
We align each data and metadata operation with specific FAIR indicators to guide machine-executable actions.
We observe significant improvements in findability, accessibility, interoperability, and reusability of data.
arXiv Detail & Related papers (2024-08-07T17:36:58Z)
- Instruct and Extract: Instruction Tuning for On-Demand Information Extraction [86.29491354355356]
On-Demand Information Extraction aims to fulfill the personalized demands of real-world users.
We present a benchmark named InstructIE, inclusive of both automatically generated training data, as well as the human-annotated test set.
Building on InstructIE, we further develop an On-Demand Information Extractor, ODIE.
arXiv Detail & Related papers (2023-10-24T17:54:25Z)
- Learn From Model Beyond Fine-Tuning: A Survey [78.80920533793595]
Learn From Model (LFM) focuses on the research, modification, and design of foundation models (FM) based on the model interface.
The study of LFM techniques can be broadly categorized into five major areas: model tuning, model distillation, model reuse, meta learning and model editing.
This paper gives a comprehensive review of the current methods based on FM from the perspective of LFM.
arXiv Detail & Related papers (2023-10-12T10:20:36Z)
- Leveraging Contextual Information for Effective Entity Salience Detection [21.30389576465761]
We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches.
We also show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task's uniqueness and complexity.
arXiv Detail & Related papers (2023-09-14T19:04:40Z)
- Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts [91.3755431537592]
The massive collection of user posts across social media platforms is primarily untapped for artificial intelligence (AI) use cases.
Natural language processing (NLP) is a subfield of AI that leverages bodies of documents, known as corpora, to train computers in human-like language understanding.
This study demonstrates that the applied results of unsupervised analysis allow a computer to predict either negative, positive, or neutral user sentiment towards plastic surgery.
arXiv Detail & Related papers (2023-07-05T20:16:20Z)
- Modeling Entities as Semantic Points for Visual Information Extraction in the Wild [55.91783742370978]
We propose an alternative approach to precisely and robustly extract key information from document images.
We explicitly model entities as semantic points, i.e., center points of entities are enriched with semantic information describing the attributes and relationships of different entities.
The proposed method can achieve significantly enhanced performance on entity labeling and linking, compared with previous state-of-the-art models.
arXiv Detail & Related papers (2023-03-23T08:21:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.