Massive Choice, Ample Tasks (MaChAmp): A Toolkit for Multi-task Learning
in NLP
- URL: http://arxiv.org/abs/2005.14672v4
- Date: Thu, 11 Mar 2021 16:30:11 GMT
- Title: Massive Choice, Ample Tasks (MaChAmp): A Toolkit for Multi-task Learning
in NLP
- Authors: Rob van der Goot, Ahmet Üstün, Alan Ramponi, Ibrahim Sharaf,
Barbara Plank
- Abstract summary: MaChAmp is a toolkit for easy fine-tuning of contextualized embeddings in multi-task settings.
The benefits of MaChAmp are its flexible configuration options and its support for a variety of natural language processing tasks in a uniform toolkit.
- Score: 24.981991538150584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transfer learning, particularly approaches that combine multi-task learning
with pre-trained contextualized embeddings and fine-tuning, has advanced the
field of Natural Language Processing tremendously in recent years. In this
paper we present MaChAmp, a toolkit for easy fine-tuning of contextualized
embeddings in multi-task settings. The benefits of MaChAmp are its flexible
configuration options and its support for a variety of natural language
processing tasks in a uniform toolkit, from text classification and sequence
labeling to dependency parsing, masked language modeling, and text generation.
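As a concrete illustration of this configuration-driven design, the sketch below shows what a multi-task dataset configuration might look like. This is a minimal sketch assuming the JSON-style dataset configs described in the MaChAmp documentation; the exact field names (train_data_path, task_type, column_idx) and the train.py entry point are assumptions that should be verified against the repository.

```python
import json

# Sketch of a MaChAmp-style dataset configuration that trains a single
# shared encoder on two tasks over UD English-EWT: POS tagging (sequence
# labeling) and dependency parsing. Field names follow the patterns in the
# MaChAmp documentation but should be checked against the repository.
config = {
    "UD_EWT": {
        "train_data_path": "data/en_ewt-ud-train.conllu",
        "validation_data_path": "data/en_ewt-ud-dev.conllu",
        "word_idx": 1,  # column holding the word form
        "tasks": {
            "upos": {"task_type": "seq", "column_idx": 3},
            "dependency": {"task_type": "dependency", "column_idx": 6},
        },
    }
}

with open("ewt_multitask.json", "w") as f:
    json.dump(config, f, indent=4)

# Training would then be a single command, roughly:
#   python train.py --dataset_configs ewt_multitask.json
```

Under this design, adding a further task to the same shared encoder amounts to one more entry under "tasks".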
Related papers
- Meta-Task Prompting Elicits Embeddings from Large Language Models [54.757445048329735]
We introduce a new unsupervised text embedding method, Meta-Task Prompting with Explicit One-Word Limitation.
We generate high-quality sentence embeddings from Large Language Models without the need for model fine-tuning.
Our findings suggest a new scaling law, offering a versatile and resource-efficient approach for embedding generation across diverse scenarios.
arXiv Detail & Related papers (2024-02-28T16:35:52Z)
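A minimal sketch of the general idea behind prompting-based embedding extraction, assuming a HuggingFace causal LM; the prompt wording and last-token pooling below are illustrative choices, not the paper's exact meta-task prompts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative only: the prompt and pooling are assumptions, not the
# paper's exact meta-task prompts.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def embed(sentence: str) -> torch.Tensor:
    # A meta-task style instruction that forces a one-word summary,
    # so the final hidden state condenses the sentence meaning.
    prompt = f'This sentence: "{sentence}" means in one word: "'
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # Use the last layer's hidden state at the final prompt token.
    return out.hidden_states[-1][0, -1]

e1 = embed("The movie was fantastic.")
e2 = embed("I really enjoyed the film.")
print(torch.cosine_similarity(e1, e2, dim=0).item())
```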
- Fine-tuning Large Language Models for Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection [3.6433784431752434]
SemEval-2024 Task 8 introduces the challenge of identifying machine-generated texts from diverse Large Language Models (LLMs).
The task comprises three subtasks: binary classification in monolingual and multilingual settings (Subtask A), multi-class classification (Subtask B), and mixed text detection (Subtask C).
arXiv Detail & Related papers (2024-01-22T19:39:05Z)
- MPrompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension [19.12663587559988]
We propose a multi-level prompt tuning (MPrompt) method for machine reading comprehension.
It utilizes prompts at task-specific, domain-specific, and context-specific levels to enhance the comprehension of input semantics.
We conducted extensive experiments on 12 benchmarks of various QA formats and achieved an average improvement of 1.94% over the state-of-the-art methods.
arXiv Detail & Related papers (2023-10-27T14:24:06Z)
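The building block underlying prompt tuning methods like MPrompt can be sketched as trainable prompt vectors prepended to the input embeddings of a frozen encoder. The sketch below shows a single prompt level; MPrompt composes task-, domain-, and context-specific prompts, which this simplification omits.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Minimal soft-prompt layer: prepends trainable prompt vectors to the
    input embeddings of a frozen encoder. MPrompt would maintain separate
    prompts at the task, domain, and context levels; this sketch shows one."""

    def __init__(self, prompt_len: int, hidden_size: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_size)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: only the prompt parameters receive gradients.
prompt_layer = SoftPrompt(prompt_len=20, hidden_size=768)
embeds = torch.randn(4, 32, 768)   # stand-in for frozen input embeddings
extended = prompt_layer(embeds)    # (4, 52, 768)
print(extended.shape)
```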
- Making Small Language Models Better Multi-task Learners with Mixture-of-Task-Adapters [13.6682552098234]
Large Language Models (LLMs) have achieved impressive zero-shot performance across a variety of Natural Language Processing (NLP) tasks.
We present ALTER, a system that effectively builds the multi-tAsk learners with mixTure-of-task-adaptERs upon small language models.
A two-stage training method is proposed to optimize the collaboration between adapters at a small computational cost.
arXiv Detail & Related papers (2023-09-20T03:39:56Z)
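A generic reading of a mixture-of-adapters layer, sketched below under the assumption of standard bottleneck adapters combined by a learned router; ALTER's actual architecture and two-stage training are not reproduced here.

```python
import torch
import torch.nn as nn

class MixtureOfTaskAdapters(nn.Module):
    """Generic mixture-of-adapters layer: a learned router softly combines
    several bottleneck adapters on top of a hidden representation. This is
    an illustrative reading of the idea, not ALTER's exact design."""

    def __init__(self, hidden: int, bottleneck: int, num_adapters: int):
        super().__init__()
        self.adapters = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, hidden))
            for _ in range(num_adapters)
        )
        self.router = nn.Linear(hidden, num_adapters)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq, hidden); route on the mean-pooled representation.
        weights = torch.softmax(self.router(h.mean(dim=1)), dim=-1)  # (batch, k)
        outs = torch.stack([a(h) for a in self.adapters], dim=1)     # (batch, k, seq, hidden)
        mixed = (weights[:, :, None, None] * outs).sum(dim=1)
        return h + mixed  # residual connection, as in standard adapters

layer = MixtureOfTaskAdapters(hidden=768, bottleneck=64, num_adapters=4)
print(layer(torch.randn(2, 16, 768)).shape)  # torch.Size([2, 16, 768])
```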
- ART: Automatic multi-step reasoning and tool-use for large language models [105.57550426609396]
Large language models (LLMs) can perform complex reasoning in few- and zero-shot settings.
Each reasoning step can rely on external tools to support computation beyond the core LLM capabilities.
We introduce Automatic Reasoning and Tool-use (ART), a framework that uses frozen LLMs to automatically generate intermediate reasoning steps as a program.
arXiv Detail & Related papers (2023-03-16T01:04:45Z)
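The pause-call-resume control flow that tool-augmented generation requires can be sketched as a small interpreter loop. Everything below is hypothetical: the [calc: ...] tool syntax and the llm stub are placeholders, not ART's task library or prompt format.

```python
import re

def llm(prompt: str) -> str:
    """Stand-in for a frozen LLM call; a real system would query a model.
    The returned program is a hypothetical example."""
    return ('Q: What is 17 * 24 plus 10?\n'
            'Step 1: [calc: 17 * 24]\n')

TOOL_PATTERN = re.compile(r"\[calc: ([^\]]+)\]")

def run_tools(program: str) -> str:
    """Execute tool calls embedded in the generated program and splice the
    results back in, mimicking the pause-call-resume control flow."""
    def call(match: re.Match) -> str:
        expr = match.group(1)
        # Toy calculator tool; a real system would dispatch to search,
        # code execution, etc.
        result = eval(expr, {"__builtins__": {}})
        return f"[calc: {expr} -> {result}]"
    return TOOL_PATTERN.sub(call, program)

program = llm("What is 17 * 24 plus 10?")
print(run_tools(program))
# ... Step 1: [calc: 17 * 24 -> 408]
```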
- Few-shot Multimodal Multitask Multilingual Learning [0.0]
We propose few-shot learning for a multimodal multitask multilingual (FM3) setting by adapting pre-trained vision and language models.
FM3 learns the most prominent tasks in the vision and language domains along with their intersections.
arXiv Detail & Related papers (2023-02-19T03:48:46Z)
- Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation [86.26522210882699]
We propose Unified multimodal pre-training for both Vision-Language understanding and generation.
The proposed UniVL is capable of handling both understanding tasks and generative tasks.
Our experiments show that there is a trade-off between understanding tasks and generation tasks while using the same model.
arXiv Detail & Related papers (2021-12-10T14:59:06Z)
- XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation [80.18830380517753]
We develop a new task-agnostic distillation framework XtremeDistilTransformers.
We study the transferability of several source tasks, augmentation resources, and model architectures for distillation.
arXiv Detail & Related papers (2021-06-08T17:49:33Z)
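The generic building block behind such distillation frameworks is the temperature-scaled distillation objective, sketched below; XtremeDistilTransformers' task transfer and architecture study sit on top of this basic idea and are not shown.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Standard temperature-scaled knowledge distillation objective: a soft
    KL term against the teacher plus a hard cross-entropy term on labels."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale so soft gradients match the hard loss
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(8, 5), torch.randn(8, 5),
                         torch.randint(0, 5, (8,)))
print(loss.item())
```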
- N-LTP: An Open-source Neural Language Technology Platform for Chinese [68.58732970171747]
N-LTP is an open-source neural language technology platform supporting six fundamental Chinese NLP tasks.
N-LTP adopts a multi-task framework built on a shared pre-trained model, which has the advantage of capturing shared knowledge across related Chinese tasks.
arXiv Detail & Related papers (2020-09-24T11:45:39Z)
- Hierarchical Multi Task Learning with Subword Contextual Embeddings for Languages with Rich Morphology [5.5217350574838875]
Morphological information is important for many sequence labeling tasks in Natural Language Processing (NLP).
We propose using subword contextual embeddings to capture morphological information for languages with rich morphology.
Our model outperforms previous state-of-the-art models on both tasks for the Turkish language.
arXiv Detail & Related papers (2020-04-25T22:55:56Z)
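For morphologically rich languages, one standard step when using subword contextual embeddings for sequence labeling is pooling the subword vectors of each word back to word level. The sketch below uses mean pooling over HuggingFace-style word_ids; this is one common choice, not necessarily the paper's.

```python
import torch

def pool_subwords(subword_states: torch.Tensor,
                  word_ids: list) -> torch.Tensor:
    """Mean-pool subword states back to word level. word_ids maps each
    subword position to its word index (None for special tokens), in the
    style of HuggingFace fast tokenizers. One pooling choice among several."""
    words = {}
    for pos, wid in enumerate(word_ids):
        if wid is not None:
            words.setdefault(wid, []).append(subword_states[pos])
    return torch.stack([torch.stack(v).mean(dim=0)
                        for _, v in sorted(words.items())])

# Toy example: 6 subword states covering 3 words plus special tokens,
# e.g. a Turkish word split as "ev ##ler ##de".
states = torch.randn(6, 768)
word_ids = [None, 0, 0, 1, 2, None]
print(pool_subwords(states, word_ids).shape)  # torch.Size([3, 768])
```

The resulting word-level vectors would then feed the task-specific sequence labeling heads.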
- Learning to Multi-Task Learn for Better Neural Machine Translation [53.06405021125476]
Multi-task learning is an elegant approach to injecting linguistic biases into neural machine translation models.
We propose a novel framework for learning the training schedule, i.e., learning to multi-task learn, for the biased-MTL setting of interest.
Experiments show that the automatically learned training schedulers are competitive with the best, and lead to improvements of up to +1.1 BLEU.
arXiv Detail & Related papers (2020-01-10T03:12:28Z)
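To show where a training scheduler slots into a multi-task training loop, the sketch below uses a simple adaptive sampling heuristic. Note that the paper learns the schedule; this heuristic (an exponential moving average of dev gains as an assumed reward signal) is only a stand-in.

```python
import random

class AdaptiveScheduler:
    """Toy multi-task scheduler: samples which task's batch to train on next,
    shifting probability mass toward tasks whose recent updates helped the
    dev score. The paper *learns* this schedule; this heuristic only
    illustrates where a scheduler fits in the training loop."""

    def __init__(self, tasks, smoothing: float = 0.9):
        self.tasks = list(tasks)
        self.scores = {t: 1.0 for t in self.tasks}
        self.smoothing = smoothing

    def next_task(self) -> str:
        total = sum(self.scores.values())
        return random.choices(self.tasks,
                              weights=[self.scores[t] / total
                                       for t in self.tasks])[0]

    def feedback(self, task: str, dev_gain: float) -> None:
        # Exponential moving average of observed dev-set gains.
        self.scores[task] = (self.smoothing * self.scores[task]
                             + (1 - self.smoothing) * max(dev_gain, 1e-3))

sched = AdaptiveScheduler(["translation", "pos_tagging", "parsing"])
for step in range(5):
    task = sched.next_task()
    # ... train one batch of `task`, evaluate periodically ...
    sched.feedback(task, dev_gain=random.random())
    print(step, task)
```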
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.