MetaMorphosis: Task-oriented Privacy Cognizant Feature Generation for
Multi-task Learning
- URL: http://arxiv.org/abs/2305.07815v1
- Date: Sat, 13 May 2023 01:59:07 GMT
- Title: MetaMorphosis: Task-oriented Privacy Cognizant Feature Generation for
Multi-task Learning
- Authors: Md Adnan Arefeen, Zhouyu Li, Md Yusuf Sarwar Uddin, Anupam Das
- Abstract summary: This paper proposes a novel deep learning-based privacy-cognizant feature generation process called MetaMorphosis.
We show that MetaMorphosis outperforms recent adversarial learning and universal feature generation methods by guaranteeing privacy requirements.
- Score: 6.056197449765416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growth of computer vision applications, deep learning and
edge computing together enable practical collaborative intelligence (CI) by
distributing the workload between edge devices and the cloud. However, running
separate single-task models on edge devices is inefficient in terms of the
computational resources and time required. In this context, multi-task learning
allows leveraging a single deep learning model for performing multiple tasks,
such as semantic segmentation and depth estimation on incoming video frames.
This single processing pipeline generates common deep features that are shared
among multi-task modules. However, in a collaborative intelligence scenario,
generating common deep features has two major issues. First, the deep features
may inadvertently contain input information exposed to the downstream modules
(violating input privacy). Second, the generated universal features expose more
collective information than is intended for a given task, so that features
produced for one task can be used to perform another (violating task privacy).
This paper proposes a novel deep learning-based
privacy-cognizant feature generation process called MetaMorphosis that limits
inference capability to specific tasks at hand. To achieve this, we propose a
channel squeeze-excitation based feature metamorphosis module, Cross-SEC, that
yields distinct attention for each task, together with a de-correlation loss
function with differential privacy, to train a deep learning model that produces
distinct privacy-aware features for each respective task. With extensive
experimentation on four datasets consisting of diverse images related to scene
understanding and facial attributes, we show that MetaMorphosis outperforms
recent adversarial learning and universal feature generation methods by
guaranteeing privacy requirements in an efficient way for image and video
analytics.
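The paper's exact Cross-SEC design is not spelled out in the abstract, but its two named ingredients are standard building blocks: channel squeeze-and-excitation attention and a de-correlation penalty between per-task features. The sketch below is a minimal, generic NumPy illustration of those two mechanisms, not the authors' implementation; all shapes and weight names are assumptions, and the differential-privacy component (typically calibrated noise during training, as in DP-SGD) is omitted.

```python
import numpy as np

def squeeze_excitation(feats, w1, w2):
    """Channel squeeze-and-excitation: global-average-pool each channel,
    pass the pooled vector through a two-layer bottleneck, and rescale
    the channels by the resulting sigmoid gates.
    feats: (C, H, W) feature map; w1: (r, C); w2: (C, r) bottleneck weights.
    NOTE: a generic SE block, not the paper's Cross-SEC module."""
    z = feats.mean(axis=(1, 2))                 # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)                 # excitation bottleneck, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))         # sigmoid channel gates: (C,)
    return feats * s[:, None, None]             # rescale channels

def decorrelation_loss(fa, fb):
    """Penalize cross-correlation between two tasks' feature batches so that
    features produced for one task carry little usable information about the
    other (the 'task privacy' goal).
    fa, fb: (N, D) per-task feature batches."""
    fa = (fa - fa.mean(0)) / (fa.std(0) + 1e-8)  # standardize per dimension
    fb = (fb - fb.mean(0)) / (fb.std(0) + 1e-8)
    c = fa.T @ fb / fa.shape[0]                  # (D, D) cross-correlation
    return float((c ** 2).sum())                 # drive all entries toward 0
```

Identical feature batches score a high loss (their correlation matrix has a unit diagonal), while independent batches score near zero, which is the gradient signal that pushes the two tasks' features apart during training.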
Related papers
- A Multitask Deep Learning Model for Classification and Regression of Hyperspectral Images: Application to the large-scale dataset [44.94304541427113]
We propose a multitask deep learning model to perform multiple classification and regression tasks simultaneously on hyperspectral images.
We validated our approach on a large hyperspectral dataset called TAIGA.
A comprehensive qualitative and quantitative analysis of the results shows that the proposed method significantly outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-23T11:14:54Z)
- TaskExpert: Dynamically Assembling Multi-Task Representations with Memorial Mixture-of-Experts [11.608682595506354]
Recent models decode task-specific features directly from one shared task-generic feature.
Because the input feature is fully shared and each task decoder also shares decoding parameters across input samples, the feature decoding process is static.
We propose TaskExpert, a novel multi-task mixture-of-experts model that enables learning multiple representative task-generic feature spaces.
arXiv Detail & Related papers (2023-07-28T06:00:57Z)
- Factorized Contrastive Learning: Going Beyond Multi-view Redundancy [116.25342513407173]
This paper proposes FactorCL, a new multimodal representation learning method to go beyond multi-view redundancy.
On large-scale real-world datasets, FactorCL captures both shared and unique information and achieves state-of-the-art results.
arXiv Detail & Related papers (2023-06-08T15:17:04Z)
- A Dynamic Feature Interaction Framework for Multi-task Visual Perception [100.98434079696268]
We devise an efficient unified framework to solve multiple common perception tasks.
These tasks include instance segmentation, semantic segmentation, monocular 3D detection, and depth estimation.
Our proposed framework, termed D2BNet, demonstrates a unique approach to parameter-efficient predictions for multi-task perception.
arXiv Detail & Related papers (2023-06-08T09:24:46Z)
- Sequential Cross Attention Based Multi-task Learning [22.430705836627148]
We propose a novel architecture that effectively transfers informative features by applying the attention mechanism to the multi-scale features of the tasks.
Our method achieves state-of-the-art performance on the NYUD-v2 and PASCAL-Context datasets.
arXiv Detail & Related papers (2022-09-06T14:17:33Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results with respect to performance, computation, and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
- Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
- MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning [82.62433731378455]
We show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales.
We propose a novel architecture, namely MTI-Net, that builds upon this finding.
arXiv Detail & Related papers (2020-01-19T21:02:36Z)
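Several of the related approaches above (e.g. TaskExpert's memorial mixture-of-experts) assemble task-specific representations by gating over a set of expert modules. The following is a minimal NumPy sketch of softmax-gated expert mixing; the linear experts, shapes, and names are illustrative assumptions, not any of the listed papers' actual architectures.

```python
import numpy as np

def moe_decode(feature, experts, gate_w):
    """Softmax-gated mixture of experts: a gating network scores each
    expert for the given input, and the experts' outputs are combined
    with the resulting weights.
    feature: (D,) input vector; experts: list of K (D_out, D) matrices
    (linear experts for simplicity); gate_w: (K, D) gating weights."""
    logits = gate_w @ feature                   # one score per expert: (K,)
    g = np.exp(logits - logits.max())           # numerically stable softmax
    g /= g.sum()                                # gating weights sum to 1
    # weighted combination of the experts' outputs
    return sum(w * (e @ feature) for w, e in zip(g, experts))
```

Because the gate depends on the input, different samples mix the experts differently, which is what makes the decoding dynamic rather than the static shared-decoder setup that TaskExpert's summary criticizes.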
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.