Grouped Adaptive Loss Weighting for Person Search
- URL: http://arxiv.org/abs/2209.11492v1
- Date: Fri, 23 Sep 2022 09:32:54 GMT
- Title: Grouped Adaptive Loss Weighting for Person Search
- Authors: Yanling Tian, Di Chen, Yunan Liu, Shanshan Zhang and Jian Yang
- Abstract summary: Person search is a typical multi-task learning problem, especially when solved in an end-to-end manner.
We propose a Grouped Adaptive Loss Weighting (GALW) method which adjusts the weight of each task automatically and dynamically.
- Score: 44.713344415358414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Person search integrates multiple sub-tasks such as
foreground/background classification, bounding box regression and person
re-identification. It is therefore a typical multi-task learning problem,
especially when solved in an end-to-end manner. Recently, some works enhance
person search features by exploiting various auxiliary information, e.g.
person joint keypoints, body part positions and attributes, which introduces
additional tasks and further complicates the person search model. The
inconsistent convergence rates of these tasks can harm model optimization. A
straightforward solution is to manually assign different weights to different
tasks, compensating for the diverse convergence rates. However, given the
large number of tasks in person search, weighting them manually is
impractical. To this end, we propose a Grouped Adaptive Loss Weighting (GALW)
method which adjusts the weight of each task automatically and dynamically.
Specifically, we group tasks according to their convergence rates. Tasks
within the same group share the same learnable weight, which is dynamically
assigned by considering the loss uncertainty. Experimental results on two
typical benchmarks, CUHK-SYSU and PRW, demonstrate the effectiveness of our
method.
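As a concrete illustration of the weighting scheme, the following PyTorch
sketch implements grouped uncertainty-based loss weighting. It is a minimal
sketch, not the authors' released code: the assignment of tasks to groups is
assumed to be given (in the paper, groups are formed from convergence rates),
and each group learns a log-variance term following the standard
homoscedastic-uncertainty formulation, contributing exp(-s_g) * L_g + s_g to
the total loss.

```python
# Minimal sketch of grouped adaptive loss weighting (GALW-style).
# Assumptions: task losses arrive as a dict, and `groups` maps each task
# name to a group id (here the grouping is taken as given; in the paper,
# groups are derived from convergence rates).
import torch
import torch.nn as nn

class GroupedAdaptiveLossWeighting(nn.Module):
    def __init__(self, num_groups: int):
        super().__init__()
        # One learnable log-variance per group; tasks in a group share it.
        self.log_vars = nn.Parameter(torch.zeros(num_groups))

    def forward(self, task_losses: dict[str, torch.Tensor],
                groups: dict[str, int]) -> torch.Tensor:
        total = 0.0
        for name, loss in task_losses.items():
            s = self.log_vars[groups[name]]
            # Uncertainty weighting: exp(-s) scales the loss, and the +s
            # term penalizes trivially inflating the learned variance.
            total = total + torch.exp(-s) * loss + s
        return total

# Usage: combine detection and re-id losses under two groups.
galw = GroupedAdaptiveLossWeighting(num_groups=2)
losses = {"cls": torch.tensor(1.2), "reg": torch.tensor(0.8),
          "reid": torch.tensor(2.5)}
groups = {"cls": 0, "reg": 0, "reid": 1}
total_loss = galw(losses, groups)  # backprop also updates log_vars
```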
Related papers
- Localizing Task Information for Improved Model Merging and Compression [61.16012721460561]
We show that the information required to solve each task is still preserved after merging as different tasks mostly use non-overlapping sets of weights.
We propose Consensus Merging, an algorithm that eliminates such weights and improves the general performance of existing model merging approaches.
arXiv Detail & Related papers (2024-05-13T14:54:37Z)
- Merging Multi-Task Models via Weight-Ensembling Mixture of Experts [64.94129594112557]
Merging Transformer-based models trained on different tasks into a single unified model can execute all the tasks concurrently.
Previous methods, exemplified by task arithmetic, have been proven to be both effective and scalable.
We propose to merge most of the parameters while upscaling the Transformer layers to a weight-ensembling mixture of experts (MoE) module.
arXiv Detail & Related papers (2024-02-01T08:58:57Z)
- Diversity-Based Recruitment in Crowdsensing By Combinatorial Multi-Armed Bandits [6.802315212233411]
This paper explores mobile crowdsensing, which leverages mobile devices and their users for collective sensing tasks under the coordination of a central requester.
The primary challenge here is the variability in the sensing capabilities of individual workers, which are initially unknown and must be progressively learned.
We propose a novel model that enhances task diversity over the rounds by dynamically adjusting the weight of tasks in each round based on their frequency of assignment.
arXiv Detail & Related papers (2023-12-25T13:54:58Z)
- Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning [20.177260510548535]
We propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, choosing between parameter allocation and regularization based on its learning difficulty.
Our method is scalable and significantly reduces the model's redundancy while improving the model's performance.
arXiv Detail & Related papers (2023-04-11T15:38:21Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient update affects another task's loss (see the sketch after this list).
arXiv Detail & Related papers (2021-09-10T02:01:43Z)
- Instance-Level Task Parameters: A Robust Multi-task Weighting Framework [17.639472693362926]
Recent works have shown that deep neural networks benefit from multi-task learning by learning a shared representation across several related tasks.
We let the training process dictate the optimal weighting of tasks for every instance in the dataset.
We conduct extensive experiments on SURREAL and CityScapes datasets, for human shape and pose estimation, depth estimation and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-11T02:35:42Z)
- SpotPatch: Parameter-Efficient Transfer Learning for Mobile Object Detection [39.29286021100541]
Deep learning based object detectors are commonly deployed on mobile devices to solve a variety of tasks.
For maximum accuracy, each detector is usually trained to solve one single task, and comes with a completely independent set of parameters.
This paper addresses the question: can task-specific detectors be trained and represented as a shared set of weights, plus a very small set of additional weights for each task?
arXiv Detail & Related papers (2021-01-04T22:24:06Z)
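Several entries above revolve around measuring how tasks interact during
training. As a rough illustration of the inter-task affinity idea behind
"Efficiently Identifying Task Groupings for Multi-Task Learning" (a sketch
under assumptions, not the paper's implementation; `model`, `loss_fns` and
`batch` are hypothetical placeholders), one can take a gradient step on a
copy of the model using only task i's loss and check how task j's loss
changes:

```python
# Rough sketch of inter-task affinity for task grouping.
# Positive affinity: an update on task i's loss also reduces task j's
# loss, suggesting the two tasks are candidates for the same group.
import copy
import torch

def inter_task_affinity(model, loss_fns, batch, i, j, lr=1e-2):
    base = loss_fns[j](model, batch).item()   # L_j before the update
    probe = copy.deepcopy(model)              # leave the real model intact
    loss_fns[i](probe, batch).backward()      # gradients from task i only
    with torch.no_grad():
        for p in probe.parameters():          # one plain SGD step
            if p.grad is not None:
                p -= lr * p.grad
    after = loss_fns[j](probe, batch).item()  # L_j after the update
    return 1.0 - after / base                 # > 0: task i helps task j
```

In the full method such affinities are accumulated over many training steps
before groupings are selected; the snippet only shows a single measurement.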
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.