AdaMTL: Adaptive Input-dependent Inference for Efficient Multi-Task Learning
- URL: http://arxiv.org/abs/2304.08594v1
- Date: Mon, 17 Apr 2023 20:17:44 GMT
- Title: AdaMTL: Adaptive Input-dependent Inference for Efficient Multi-Task Learning
- Authors: Marina Neseem, Ahmed Agiza, Sherief Reda
- Abstract summary: We introduce AdaMTL, an adaptive framework that learns task-aware inference policies for multi-task learning models.
AdaMTL reduces the computational complexity by 43% while improving the accuracy by 1.32% compared to single-task models.
When deployed on Vuzix M4000 smart glasses, AdaMTL reduces the inference latency and the energy consumption by up to 21.8% and 37.5%, respectively.
- Score: 1.4963011898406864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern augmented reality applications require performing multiple tasks on
each input frame simultaneously. Multi-task learning (MTL) represents an
effective approach where multiple tasks share an encoder to extract
representative features from the input frame, followed by task-specific
decoders to generate predictions for each task. Generally, the shared encoder
in MTL models needs to have a large representational capacity in order to
generalize well to various tasks and input data, which has a negative effect on
the inference latency. In this paper, we argue that due to the large variations
in the complexity of the input frames, some computations might be unnecessary
for the output. Therefore, we introduce AdaMTL, an adaptive framework that
learns task-aware inference policies for the MTL models in an input-dependent
manner. Specifically, we attach a task-aware lightweight policy network to the
shared encoder and co-train it alongside the MTL model to recognize unnecessary
computations. During runtime, our task-aware policy network decides which parts
of the model to activate depending on the input frame and the target
computational complexity. Extensive experiments on the PASCAL dataset
demonstrate that AdaMTL reduces the computational complexity by 43% while
improving the accuracy by 1.32% compared to single-task models. Combined with
SOTA MTL methodologies, AdaMTL boosts the accuracy by 7.8% while improving the
efficiency by 3.1X. When deployed on Vuzix M4000 smart glasses, AdaMTL reduces
the inference latency and the energy consumption by up to 21.8% and 37.5%,
respectively, compared to the static MTL model. Our code is publicly available
at https://github.com/scale-lab/AdaMTL.git.
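The mechanism the abstract describes (a lightweight, task-aware policy network attached to a shared encoder and co-trained with the MTL model so that unnecessary computation can be skipped per input) can be illustrated with a minimal PyTorch sketch. Everything below is an illustrative assumption rather than the authors' method: the residual block-skipping formulation, the Gumbel-softmax relaxation, the module sizes, the task heads, and the budget loss are not taken from the paper; the actual implementation is in the repository linked above.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNetwork(nn.Module):
    """Lightweight head that predicts, per input, which encoder blocks to execute."""

    def __init__(self, feat_dim, num_blocks):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(feat_dim, num_blocks * 2),  # {skip, keep} logits per block
        )

    def forward(self, feats, hard=True):
        logits = self.head(feats).view(feats.size(0), -1, 2)
        # Differentiable discrete decisions via a Gumbel-softmax relaxation
        # (an illustrative choice, not necessarily the paper's).
        gates = F.gumbel_softmax(logits, tau=1.0, hard=hard)[..., 1]
        return gates  # shape (batch, num_blocks); 1 = run the block, 0 = skip it


class AdaptiveMTLModel(nn.Module):
    """Shared encoder + task-specific decoders, gated by an input-dependent policy."""

    def __init__(self, num_blocks=4, width=64, task_out_channels=None):
        super().__init__()
        task_out_channels = task_out_channels or {"semseg": 21, "depth": 1}
        self.stem = nn.Conv2d(3, width, kernel_size=3, padding=1)
        # Shared encoder: a stack of residual blocks that the policy may skip.
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(width, width, kernel_size=3, padding=1),
                nn.BatchNorm2d(width),
                nn.ReLU(),
            )
            for _ in range(num_blocks)
        ])
        self.policy = PolicyNetwork(width, num_blocks)
        # Task-specific decoders, kept trivially small for the sketch.
        self.decoders = nn.ModuleDict(
            {task: nn.Conv2d(width, ch, kernel_size=1)
             for task, ch in task_out_channels.items()}
        )

    def forward(self, x):
        feats = self.stem(x)
        gates = self.policy(feats)  # input-dependent execution decisions
        for i, block in enumerate(self.blocks):
            g = gates[:, i].view(-1, 1, 1, 1)
            # Residual form: a skipped block contributes nothing to the features.
            feats = feats + g * block(feats)
        preds = {task: dec(feats) for task, dec in self.decoders.items()}
        return preds, gates


def training_step(model, x, targets, target_budget=0.6, budget_weight=1.0):
    """Co-training sketch: task losses plus a term pushing the expected fraction
    of executed blocks toward a target computational budget."""
    preds, gates = model(x)
    task_loss = sum(F.mse_loss(preds[t], targets[t]) for t in preds)  # placeholder losses
    budget_loss = (gates.mean() - target_budget).pow(2)
    return task_loss + budget_weight * budget_loss
```
At deployment time a real implementation would use the hard gates to actually bypass the disabled blocks rather than multiplying their output by zero; that control-flow change is what would translate into the input-dependent latency and energy savings the abstract reports.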
Related papers
- AdapMTL: Adaptive Pruning Framework for Multitask Learning Model [5.643658120200373]
AdapMTL is an adaptive pruning framework for multitask models.
It balances sparsity allocation and accuracy across multiple tasks.
It demonstrates superior performance compared to state-of-the-art pruning methods.
arXiv Detail & Related papers (2024-08-07T17:19:15Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Latency-aware Unified Dynamic Networks for Efficient Image Recognition [72.8951331472913]
LAUDNet is a framework to bridge the theoretical and practical efficiency gap in dynamic networks.
It integrates three primary dynamic paradigms: spatially adaptive computation, dynamic layer skipping, and dynamic channel skipping.
It can notably reduce the latency of models like ResNet by over 50% on platforms such as V100, 3090, and TX2 GPUs.
arXiv Detail & Related papers (2023-08-30T10:57:41Z)
- Efficient Controllable Multi-Task Architectures [85.76598445904374]
We propose a multi-task model consisting of a shared encoder and task-specific decoders where both encoder and decoder channel widths are slimmable.
Our key idea is to control the task importance by varying the capacities of task-specific decoders, while controlling the total computational cost.
This improves overall accuracy by allowing a stronger encoder for a given budget, increases control over computational cost, and delivers high-quality slimmed sub-architectures.
arXiv Detail & Related papers (2023-08-22T19:09:56Z)
- Deformable Mixer Transformer with Gating for Multi-Task Learning of Dense Prediction [126.34551436845133]
CNNs and Transformers have their own advantages, and both have been widely used for dense prediction in multi-task learning (MTL).
We present a novel MTL model that combines the merits of deformable CNNs and query-based Transformers with shared gating for multi-task learning of dense prediction.
arXiv Detail & Related papers (2023-08-10T17:37:49Z)
- Edge-MoE: Memory-Efficient Multi-Task Vision Transformer Architecture with Task-level Sparsity via Mixture-of-Experts [60.1586169973792]
M$^3$ViT is a recent multi-task ViT model that introduces mixture-of-experts (MoE).
MoE achieves better accuracy and over 80% computation reduction, but poses challenges for efficient deployment on FPGAs.
Our work, dubbed Edge-MoE, addresses these challenges and introduces the first end-to-end FPGA accelerator for multi-task ViT with a collection of architectural innovations.
arXiv Detail & Related papers (2023-05-30T02:24:03Z)
- M$^3$ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design [95.41238363769892]
Multi-task learning (MTL) encapsulates multiple learned tasks in a single model and often lets those tasks learn better jointly.
Current MTL regimes have to activate nearly the entire model even to execute just a single task.
We present a model-accelerator co-design framework to enable efficient on-device MTL.
arXiv Detail & Related papers (2022-10-26T15:40:24Z)
- AutoMTL: A Programming Framework for Automated Multi-Task Learning [23.368860215515323]
Multi-task learning (MTL) jointly learns a set of tasks.
A major barrier preventing the widespread adoption of MTL is the lack of systematic support for developing compact multi-task models.
We develop AutoMTL, the first programming framework that automates MTL model development.
arXiv Detail & Related papers (2021-10-25T16:13:39Z)
- Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data [5.689320790746046]
Multi-Task Learning (MTL) networks have emerged as a promising method for transferring learned knowledge across different tasks.
However, MTL must deal with challenges such as overfitting to low-resource tasks, catastrophic forgetting, and negative task transfer.
We propose a novel Transformer architecture consisting of a new conditional attention mechanism and a set of task-conditioned modules.
arXiv Detail & Related papers (2020-09-19T02:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.