Leveraging Organizational Resources to Adapt Models to New Data Modalities
- URL: http://arxiv.org/abs/2008.09983v1
- Date: Sun, 23 Aug 2020 07:29:00 GMT
- Title: Leveraging Organizational Resources to Adapt Models to New Data Modalities
- Authors: Sahaana Suri, Raghuveer Chanda, Neslihan Bulut, Pradyumna Narayana,
Yemao Zeng, Peter Bailis, Sugato Basu, Girija Narlikar, Christopher Re, and
Abishek Sethi
- Abstract summary: As applications in large organizations evolve, the machine learning (ML) models that power them must adapt the same predictive tasks to newly arising data modalities.
We demonstrate how organizational resources, in the form of aggregate statistics, knowledge bases, and existing services that operate over related tasks, enable teams to construct a common feature space.
We study how this use of organizational resources composes at production scale in over 5 classification tasks at Google, and demonstrate how it reduces the time needed to develop models for new modalities from months to weeks to days.
- Score: 13.880434936862928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As applications in large organizations evolve, the machine learning (ML)
models that power them must adapt the same predictive tasks to newly arising
data modalities (e.g., a new video content launch in a social media application
requires existing text or image models to extend to video). To solve this
problem, organizations typically create ML pipelines from scratch. However,
this fails to utilize the domain expertise and data they have cultivated from
developing tasks for existing modalities. We demonstrate how organizational
resources, in the form of aggregate statistics, knowledge bases, and existing
services that operate over related tasks, enable teams to construct a common
feature space that connects new and existing data modalities. This allows teams
to apply methods for training data curation (e.g., weak supervision and label
propagation) and model training (e.g., forms of multi-modal learning) across
these different data modalities. We study how this use of organizational
resources composes at production scale in over 5 classification tasks at
Google, and demonstrate how it reduces the time needed to develop models for
new modalities from months to weeks to days.
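
The training-data curation step named above (weak supervision and label propagation over a common feature space) can be sketched with standard tooling. The snippet below is a minimal illustration, not the paper's implementation: it assumes embeddings for a labeled existing modality and an unlabeled new modality have already been mapped into one shared feature space, and uses scikit-learn's LabelPropagation to transfer labels across them. All names, shapes, and data are made up.

```python
# Hedged sketch: label propagation across modalities in a shared feature
# space. The embeddings here are random stand-ins for features produced
# by organizational resources (existing services, knowledge bases, etc.).
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)

text_embeddings = rng.normal(size=(200, 64))   # existing modality, labeled
video_embeddings = rng.normal(size=(50, 64))   # new modality, unlabeled
text_labels = rng.integers(0, 2, size=200)     # a binary classification task

X = np.vstack([text_embeddings, video_embeddings])
y = np.concatenate([text_labels, -np.ones(50, dtype=int)])  # -1 marks unlabeled

model = LabelPropagation(kernel="knn", n_neighbors=7).fit(X, y)
video_pseudo_labels = model.transduction_[len(text_labels):]
print(video_pseudo_labels[:10])  # pseudo-labels to curate training data for the new modality
```

The resulting pseudo-labels would then feed whatever model-training recipe the team already uses for the task, which is the kind of composition the paper measures at production scale.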
Related papers
- Specialized Foundation Models Struggle to Beat Supervised Baselines [60.23386520331143] (arXiv, 2024-11-05)
  We look at three modalities -- genomics, satellite imaging, and time series -- with multiple recent foundation models (FMs) and compare them to a standard supervised learning workflow.
  We find that it is consistently possible to train simple supervised models that match or even outperform the latest foundation models.
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446] (arXiv, 2024-10-31)
  Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
  We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
  Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639] (arXiv, 2024-10-22)
  Large language models (LLMs) have significantly benefited from training on diverse, high-quality, task-specific data.
  We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
- LLM Augmented LLMs: Expanding Capabilities through Composition [56.40953749310957] (arXiv, 2024-01-04)
  CALM -- Composition to Augment Language Models -- introduces cross-attention between models to compose their representations and enable new capabilities (illustrated in the first sketch after this list).
  We illustrate that augmenting PaLM2-S with a smaller model trained on low-resource languages results in an absolute improvement of up to 13% on tasks like translation into English.
  When PaLM2-S is augmented with a code-specific model, we see a relative improvement of 40% over the base model on code generation and explanation tasks.
- Diffusion Model is an Effective Planner and Data Synthesizer for Multi-Task Reinforcement Learning [101.66860222415512] (arXiv, 2023-05-29)
  Multi-Task Diffusion Model (MTDiff) is a diffusion-based method that incorporates Transformer backbones and prompt learning for generative planning and data synthesis.
  For generative planning, we find that MTDiff outperforms state-of-the-art algorithms across 50 tasks on Meta-World and 8 maps on Maze2D.
- Exploring In-Context Learning Capabilities of Foundation Models for Generating Knowledge Graphs from Text [3.114960935006655] (arXiv, 2023-05-15)
  This paper aims to improve the state of the art of automatic construction and completion of knowledge graphs from text.
  In this context, one emerging paradigm is in-context learning, where a language model is used as-is with a prompt (see the prompt sketch after this list).
- Preventing Catastrophic Forgetting in Continual Learning of New Natural Language Tasks [17.879087904904935] (arXiv, 2023-02-22)
  Multi-Task Learning (MTL) is widely accepted in Natural Language Processing as a standard technique for learning multiple related tasks in one model.
  As systems usually evolve over time, adding a new task to an existing MTL model usually requires retraining the model from scratch on all the tasks.
  In this paper, we approach the problem of incrementally expanding MTL models' capability to solve new tasks over time by distilling the knowledge of a model already trained on n tasks into a new one that solves n+1 tasks (see the distillation sketch after this list).
- A Meta-Learning Approach to Population-Based Modelling of Structures [0.0] (arXiv, 2023-02-15)
  A major problem of machine-learning approaches in structural dynamics is the frequent lack of structural data.
  Inspired by the recently emerging field of population-based structural health monitoring, this work attempts to create models that are able to transfer knowledge within populations of structures.
  The models trained using meta-learning approaches are able to outperform conventional machine-learning methods in inference about structures of the population.
- Learning Adaptable Policy via Meta-Adversarial Inverse Reinforcement Learning for Decision-making Tasks [2.1485350418225244] (arXiv, 2021-03-23)
  We build an adaptable imitation learning model based on the integration of meta-learning and adversarial inverse reinforcement learning.
  We exploit the adversarial learning and inverse reinforcement learning mechanisms to learn policies and reward functions simultaneously from available training tasks.
- DeGAN: Data-Enriching GAN for Retrieving Representative Samples from a Trained Classifier [58.979104709647295] (arXiv, 2019-12-27)
  We bridge the gap between the abundance of available data and the lack of relevant data for the future learning tasks of a trained network.
  We use the available data, which may be an imbalanced subset of the original training dataset or a related-domain dataset, to retrieve representative samples.
  We demonstrate that data from a related domain can be leveraged to achieve state-of-the-art performance.
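
The composition mechanism in the CALM entry above is concrete enough to sketch: a frozen anchor model attends, via a learned cross-attention layer, to the representations of a frozen augmenting model. The PyTorch snippet below is a hedged illustration of that idea only, not CALM's actual architecture; the module name, dimensions, and residual wiring are assumptions.

```python
# Illustrative cross-attention composition of two frozen models.
import torch
import torch.nn as nn

class CrossAttentionComposer(nn.Module):
    """Lets anchor-model states attend to augmenting-model states."""
    def __init__(self, anchor_dim: int, aug_dim: int, n_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(aug_dim, anchor_dim)  # map augmenting states into the anchor's space
        self.attn = nn.MultiheadAttention(anchor_dim, n_heads, batch_first=True)

    def forward(self, anchor_h, aug_h):
        kv = self.proj(aug_h)                       # (batch, seq_aug, anchor_dim)
        attended, _ = self.attn(anchor_h, kv, kv)   # queries come from the anchor model
        return anchor_h + attended                  # residual composition

# Random stand-ins for hidden states from the two frozen models.
anchor_h = torch.randn(2, 16, 512)  # e.g., a large general-purpose model
aug_h = torch.randn(2, 16, 256)     # e.g., a small domain-specific model
composer = CrossAttentionComposer(anchor_dim=512, aug_dim=256)
print(composer(anchor_h, aug_h).shape)  # torch.Size([2, 16, 512])
```

Only the composer's parameters would be trained here, which is what would make this style of composition cheap relative to fine-tuning the anchor model itself.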
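The in-context-learning entry describes using a language model as-is with a prompt. A toy prompt for knowledge-graph triple extraction could look like the following; the template, examples, and parsing convention are invented for illustration, and the actual LM call is omitted since it depends on the API in use.

```python
# Hypothetical few-shot prompt for knowledge-graph triple extraction.
FEW_SHOT_PROMPT = """Extract (subject, relation, object) triples from the text.

Text: Marie Curie won the Nobel Prize in Physics in 1903.
Triples: (Marie Curie, won, Nobel Prize in Physics); (Nobel Prize in Physics, awarded in, 1903)

Text: {input_text}
Triples:"""

def build_prompt(input_text: str) -> str:
    """Fill the few-shot template; the LM's completion (not shown)
    would be parsed back into (subject, relation, object) triples."""
    return FEW_SHOT_PROMPT.format(input_text=input_text)

print(build_prompt("Ada Lovelace collaborated with Charles Babbage."))
```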
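The continual-learning entry expands an MTL model from n to n+1 tasks by distillation. A minimal sketch of that objective is below; it is not the paper's code, and the task-conditioned forward pass `model(x, task_id)` is a hypothetical signature. Old tasks are supervised by the previous model's softened outputs (standard knowledge distillation), while the new task is supervised by its labels.

```python
# Hedged sketch of expanding an n-task model to n+1 tasks via distillation.
import torch
import torch.nn.functional as F

def expansion_loss(new_model, old_model, x, y, task_id, n_old_tasks, T=2.0):
    """KD loss on the n old tasks, plain cross-entropy on task n+1."""
    logits = new_model(x, task_id)  # hypothetical task-conditioned forward
    if task_id < n_old_tasks:
        with torch.no_grad():
            teacher_logits = old_model(x, task_id)  # the already-trained n-task model
        # Match softened teacher and student distributions (Hinton-style KD).
        return F.kl_div(
            F.log_softmax(logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
    return F.cross_entropy(logits, y)  # supervised loss on the new task
```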