Ada-Segment: Automated Multi-loss Adaptation for Panoptic Segmentation
- URL: http://arxiv.org/abs/2012.03603v1
- Date: Mon, 7 Dec 2020 11:43:10 GMT
- Title: Ada-Segment: Automated Multi-loss Adaptation for Panoptic Segmentation
- Authors: Gengwei Zhang, Yiming Gao, Hang Xu, Hao Zhang, Zhenguo Li, Xiaodan Liang
- Abstract summary: We propose an automated multi-loss adaptation (named Ada-Segment) to flexibly adjust multiple training losses over the course of training.
With an end-to-end architecture, Ada-Segment generalizes to different datasets without re-tuning hyperparameters.
Ada-Segment brings a 2.7% panoptic quality (PQ) improvement on the COCO val split over the vanilla baseline, achieving a state-of-the-art 48.5% PQ on the COCO test-dev split and 32.9% PQ on the ADE20K dataset.
- Score: 95.31590177308482
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Panoptic segmentation, which unifies instance segmentation and semantic
segmentation, has recently attracted increasing attention. While most existing
methods focus on designing novel architectures, we steer toward a different
perspective: performing automated multi-loss adaptation (named Ada-Segment) on
the fly to flexibly adjust multiple training losses over the course of training,
using a controller trained to capture the learning dynamics. This offers several
advantages: it bypasses manual tuning of the sensitive loss combination, a
decisive factor for panoptic segmentation; it allows us to explicitly model the
learning dynamics and to reconcile the learning of multiple objectives (up to ten
in our experiments); and, with an end-to-end architecture, it generalizes to
different datasets without the need to re-tune hyperparameters or laboriously
re-adjust the training process. Our Ada-Segment brings a 2.7% panoptic quality
(PQ) improvement on the COCO val split over the vanilla baseline, achieving a
state-of-the-art 48.5% PQ on the COCO test-dev split and 32.9% PQ on the ADE20K
dataset. Extensive ablation studies reveal the ever-changing dynamics throughout
the training process, underscoring the need for the automated, adaptive learning
strategy presented in this paper.
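The abstract describes the mechanism only at a high level. The rough sketch below, in PyTorch-style Python, illustrates the general idea of a lightweight controller that re-weights several task losses over the course of training. It is an assumption-laden illustration, not the authors' implementation; the names (`Controller`, `combine_losses`, `reweight_every`) are hypothetical.

```python
# Illustrative sketch only -- NOT the Ada-Segment implementation.
# A small controller maps recent per-loss values to new loss weights,
# so the combined objective adapts as the training dynamics change.
import torch
import torch.nn as nn

class Controller(nn.Module):
    """Hypothetical controller: recent loss values -> normalized loss weights."""
    def __init__(self, num_losses: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_losses, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_losses),
        )

    def forward(self, recent_losses: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the weights positive and summing to one.
        return torch.softmax(self.net(recent_losses), dim=-1)

def combine_losses(losses, weights):
    """Weighted sum of the individual task losses (up to ten in the paper)."""
    return sum(w * l for w, l in zip(weights, losses))

# Usage sketch: every `reweight_every` steps the controller inspects the
# detached loss values and proposes weights for the next training phase.
num_losses = 4
controller = Controller(num_losses)
weights = torch.full((num_losses,), 1.0 / num_losses)  # start uniform

def training_step(losses, step, reweight_every=100):
    global weights
    if step % reweight_every == 0:
        with torch.no_grad():
            weights = controller(torch.stack([l.detach() for l in losses]))
    return combine_losses(losses, weights)
```

How the controller itself is trained (the abstract only says it is "trained to capture the learning dynamics") is deliberately omitted here; any meta-training loop would be a further assumption.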
Related papers
- Subject Representation Learning from EEG using Graph Convolutional Variational Autoencoders [20.364067310176054]
GC-VASE is a graph convolutional-based variational autoencoder that leverages contrastive learning for subject representation learning from EEG data.
Our method successfully learns robust subject-specific latent representations using the split-latent space architecture tailored for subject identification.
arXiv Detail & Related papers (2025-01-13T17:29:31Z)
- Towards Generalizable Trajectory Prediction Using Dual-Level Representation Learning And Adaptive Prompting [107.4034346788744]
Existing vehicle trajectory prediction models struggle with generalizability, prediction uncertainties, and handling complex interactions.
We propose Perceiver with Register queries (PerReg+), a novel trajectory prediction framework that introduces: (1) Dual-Level Representation Learning via Self-Distillation (SD) and Masked Reconstruction (MR), capturing global context and fine-grained details; (2) Enhanced Multimodality using register-based queries and pretraining, eliminating the need for clustering and suppression; and (3) Adaptive Prompt Tuning during fine-tuning, freezing the main architecture and optimizing a small number of prompts for efficient adaptation.
arXiv Detail & Related papers (2025-01-08T20:11:09Z)
- Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification [3.0398616939692777]
Techniques like adversarial learning, contrastive learning, diffusion denoising learning, and ordinary reconstruction learning have become standard.
The study aims to elucidate the advantages of pre-training techniques and fine-tuning strategies to enhance the learning process of neural networks.
arXiv Detail & Related papers (2024-05-29T15:44:51Z)
- Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters [65.15700861265432]
We present a parameter-efficient continual learning framework to alleviate long-term forgetting in incremental learning with vision-language models.
Our approach involves the dynamic expansion of a pre-trained CLIP model, through the integration of Mixture-of-Experts (MoE) adapters.
To preserve the zero-shot recognition capability of vision-language models, we introduce a Distribution Discriminative Auto-Selector.
arXiv Detail & Related papers (2024-03-18T08:00:23Z)
- SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model [73.80068155830708]
We present an extensive analysis of continual learning on a pre-trained model (CLPM).
We propose a simple but extremely effective approach named Slow Learner with Classifier Alignment (SLCA).
Across a variety of scenarios, our proposal provides substantial improvements for CLPM.
arXiv Detail & Related papers (2023-03-09T08:57:01Z)
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and a gated sub-network from scratch in an SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Dynamic Multi-Scale Loss Optimization for Object Detection [14.256807110937622]
We study the objective imbalance of multi-scale detector training.
We propose an Adaptive Variance Weighting (AVW) to balance the multi-scale losses according to their statistical variance (see the rough sketch after this list).
We develop a novel Reinforcement Learning Optimization (RLO) to decide the weighting scheme probabilistically during training.
arXiv Detail & Related papers (2021-08-09T13:12:41Z)
- An EM Framework for Online Incremental Learning of Semantic Segmentation [37.94734474090863]
We propose an incremental learning strategy that can adapt deep segmentation models without catastrophic forgetting, using streaming input data with pixel annotations on the novel classes only.
We validate our approach on the PASCAL VOC 2012 and ADE20K datasets, and the results demonstrate its superior performance over the existing incremental methods.
arXiv Detail & Related papers (2021-08-08T11:30:09Z)
- Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation [79.42338812621874]
Adversarial training is promising for improving the robustness of deep neural networks against adversarial perturbations.
We formulate a general adversarial training procedure that can perform decently on both adversarial and clean samples.
We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect.
arXiv Detail & Related papers (2020-03-14T05:06:49Z)
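Among the related papers above, Dynamic Multi-Scale Loss Optimization is closest to Ada-Segment's theme of loss re-weighting. The sketch below is a rough, assumption-laden illustration of the variance-based weighting its summary describes: it keeps running variance estimates per scale and rebalances the losses accordingly. The exact AVW formulation (and the RL-based scheme selection) in that paper may differ, and all names here are hypothetical.

```python
# Assumption-laden sketch of variance-based multi-scale loss weighting,
# loosely following the AVW summary above (not the paper's exact scheme).
import torch

class VarianceWeighter:
    """Tracks an exponential moving variance of each scale's loss."""
    def __init__(self, num_scales: int, momentum: float = 0.9, eps: float = 1e-8):
        self.mean = torch.zeros(num_scales)
        self.sq_mean = torch.zeros(num_scales)
        self.momentum = momentum
        self.eps = eps

    def update(self, losses: torch.Tensor) -> torch.Tensor:
        l = losses.detach()
        m = self.momentum
        self.mean = m * self.mean + (1 - m) * l
        self.sq_mean = m * self.sq_mean + (1 - m) * l ** 2
        var = (self.sq_mean - self.mean ** 2).clamp_min(0.0)
        # One plausible choice: give scales with larger loss variance more weight.
        return var / (var.sum() + self.eps)

# Usage: rebalance per-scale detection losses with the adaptive weights.
weighter = VarianceWeighter(num_scales=3)
scale_losses = torch.tensor([0.8, 1.2, 0.5])  # placeholder values
weights = weighter.update(scale_losses)
total_loss = (weights * scale_losses).sum()
```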