Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2104.14203v1
- Date: Thu, 29 Apr 2021 08:47:24 GMT
- Title: Rethinking Ensemble-Distillation for Semantic Segmentation Based Unsupervised Domain Adaptation
- Authors: Chen-Hao Chao, Bo-Wun Cheng, Chun-Yi Lee
- Abstract summary: We propose a flexible ensemble-distillation framework for performing semantic segmentation based UDA.
Our framework is designed to be robust against the output inconsistency and the performance variation of the members within the ensemble.
- Score: 6.487749466672554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research on unsupervised domain adaptation (UDA) has demonstrated
that end-to-end ensemble learning frameworks serve as a compelling option for
UDA tasks. Nevertheless, these end-to-end ensemble learning methods often lack
flexibility as any modification to the ensemble requires retraining of their
frameworks. To address this problem, we propose a flexible
ensemble-distillation framework for performing semantic segmentation based UDA,
allowing any arbitrary composition of the members in the ensemble while still
maintaining its superior performance. To achieve such flexibility, our
framework is designed to be robust against the output inconsistency and the
performance variation of the members within the ensemble. To examine the
effectiveness and the robustness of our method, we perform an extensive set of
experiments on both GTA5 to Cityscapes and SYNTHIA to Cityscapes benchmarks to
quantitatively inspect the improvements achievable by our method. We further
provide detailed analyses to validate that our design choices are practical and
beneficial. The experimental evidence validates that the proposed method indeed
offers superior performance, robustness, and flexibility in semantic segmentation
based UDA tasks compared to contemporary baseline methods.
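The abstract does not include code, but the core idea can be sketched. Below is a minimal, illustrative PyTorch sketch of ensemble distillation for segmentation UDA: per-pixel class probabilities from several UDA-trained members are fused into pseudo-labels (optionally weighted, one simple way to absorb performance variation among members), and a single student is trained on the confident pixels. The fusion weights, confidence threshold, and ignore index are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn.functional as F

def fuse_ensemble_logits(logits_list, weights=None):
    """Fuse per-pixel class probabilities from ensemble members.

    logits_list: list of (B, C, H, W) tensors, one per member. Members may
    disagree (output inconsistency), so we fuse soft probabilities rather
    than hard argmax labels.
    weights: optional per-member reliability weights (performance variation);
    uniform weighting here is an illustrative assumption.
    """
    probs = [F.softmax(l, dim=1) for l in logits_list]
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    return sum(w * p for w, p in zip(weights, probs))  # (B, C, H, W)

def distillation_step(student, images, fused_probs, optimizer, conf_thresh=0.9):
    """One distillation step: train the student on confident fused pseudo-labels."""
    conf, pseudo = fused_probs.max(dim=1)   # per-pixel confidence and label
    pseudo[conf < conf_thresh] = 255        # mask out low-confidence pixels
    loss = F.cross_entropy(student(images), pseudo, ignore_index=255)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the ensemble is only used to produce pseudo-labels, any member can be swapped in or out without retraining the others; only the student's distillation pass is repeated.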
Related papers
- Learning Dynamic Representations via An Optimally-Weighted Maximum Mean Discrepancy Optimization Framework for Continual Learning [10.142949909263846]
Continual learning allows models to persistently acquire and retain information, but catastrophic forgetting can severely impair model performance.
We introduce a novel framework termed Optimally-Weighted Maximum Mean Discrepancy (OWMMD), which imposes penalties on representation alterations.
arXiv Detail & Related papers (2025-01-21T13:33:45Z)
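The summary above only names the mechanism. As a hedged sketch, a multi-kernel maximum mean discrepancy penalty on representation drift between old and current feature extractors might look like the following; the uniform kernel weights stand in for whatever optimal weighting the paper actually learns:

```python
import torch

def rbf_kernel(x, y, sigma=1.0):
    """RBF kernel matrix between two batches of feature vectors."""
    dists = torch.cdist(x, y) ** 2
    return torch.exp(-dists / (2 * sigma ** 2))

def mmd_penalty(feats_old, feats_new, sigmas=(0.5, 1.0, 2.0), weights=None):
    """Multi-kernel MMD^2 between old-task and current representations.

    feats_old, feats_new: (N, D) feature batches from the frozen snapshot and
    the current model. `weights` is a stand-in for the paper's learned kernel
    weights; uniform weights here are purely an illustrative assumption.
    """
    if weights is None:
        weights = [1.0 / len(sigmas)] * len(sigmas)
    total = 0.0
    for w, s in zip(weights, sigmas):
        k_xx = rbf_kernel(feats_old, feats_old, s).mean()
        k_yy = rbf_kernel(feats_new, feats_new, s).mean()
        k_xy = rbf_kernel(feats_old, feats_new, s).mean()
        total = total + w * (k_xx + k_yy - 2 * k_xy)
    return total  # added to the task loss to discourage representation drift
```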
- UniTTA: Unified Benchmark and Versatile Framework Towards Realistic Test-Time Adaptation [66.05528698010697]
Test-Time Adaptation aims to adapt pre-trained models to the target domain during testing.
Researchers have identified various challenging scenarios and developed diverse methods to address these challenges.
We propose a Unified Test-Time Adaptation benchmark, which is comprehensive and widely applicable.
arXiv Detail & Related papers (2024-07-29T15:04:53Z)
- Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning [79.38140606606126]
We propose an algorithmic framework that fine-tunes vision-language models (VLMs) with reinforcement learning (RL).
Our framework provides a task description and then prompts the VLM to generate chain-of-thought (CoT) reasoning.
We demonstrate that our proposed framework enhances the decision-making capabilities of VLM agents across various tasks.
arXiv Detail & Related papers (2024-05-16T17:50:19Z)
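As a hedged sketch of the loop this summary describes (not the paper's actual code), a single REINFORCE-style update might look like the following; `vlm`, `out.logprobs`, and `env_reward_fn` are hypothetical stand-ins for a generative VLM API and a task environment:

```python
import torch

def reinforce_step(vlm, optimizer, image, task_description, env_reward_fn):
    """One REINFORCE-style update on CoT-plus-action generations.

    All of `vlm` (a model exposing generate() with per-token log-probs),
    `out.text`/`out.logprobs`, and `env_reward_fn` are hypothetical stand-ins,
    not any specific library's API.
    """
    prompt = task_description + "\nThink step by step, then output an action."
    out = vlm.generate(image, prompt, return_logprobs=True)  # CoT + action text
    reward = env_reward_fn(out.text)      # scalar reward from the parsed action
    loss = -reward * out.logprobs.sum()   # reward-weighted log-likelihood
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward
```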
- Ensemble Distillation for Unsupervised Constituency Parsing [40.96887945888518]
We investigate the unsupervised constituency parsing task, which organizes words and phrases of a sentence into a hierarchical structure without using linguistically annotated data.
We propose a notion of "tree averaging," based on which we further propose a novel ensemble method for unsupervised parsing.
arXiv Detail & Related papers (2023-10-03T01:02:44Z)
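One plausible reading of "tree averaging" (an assumption, since the summary gives no formula) is to select the binary tree whose constituent spans receive the most votes from the ensemble's parses, which a small CKY-style dynamic program can compute:

```python
from functools import lru_cache

def average_tree(span_sets, n):
    """Pick the binary tree over n words whose constituent spans agree most
    with the ensemble members' trees (one reading of "tree averaging"; the
    paper's exact formulation may differ).

    span_sets: list of sets of (i, j) spans, one set per ensemble parse.
    """
    def vote(i, j):
        # How many ensemble parses contain the span covering words i..j-1.
        return sum((i, j) in s for s in span_sets)

    @lru_cache(maxsize=None)
    def best(i, j):
        # Best total span-vote achievable by a binary subtree over i..j-1.
        if j - i == 1:
            return vote(i, j), None
        score, split = max(
            ((best(i, k)[0] + best(k, j)[0], k) for k in range(i + 1, j)),
            key=lambda t: t[0],
        )
        return score + vote(i, j), split

    def build(i, j):
        if j - i == 1:
            return i
        _, k = best(i, j)
        return (build(i, k), build(k, j))

    return build(0, n)
```

For example, `average_tree([{(0, 2), (0, 3)}, {(0, 2), (1, 3)}], 3)` returns `((0, 1), 2)`, the tree containing the span (0, 2) that both members voted for.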
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
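A hedged sketch of a consistency-regularization term consistent with this summary (the authors' exact objective and augmentations may differ): penalize divergence between the model's predictions on weakly and strongly augmented views of the same unlabelled target image.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, weak_images, strong_images):
    """Consistency regularization sketch: the adapted model should predict
    the same distribution for weak and strong augmentations of a target
    image. Illustrative reading of the abstract, not the exact objective.
    """
    with torch.no_grad():
        target = F.softmax(model(weak_images), dim=1)      # stable "teacher" view
    log_pred = F.log_softmax(model(strong_images), dim=1)  # perturbed "student" view
    return F.kl_div(log_pred, target, reduction="batchmean")
```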
- IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose an Informed Domain Adaptation (IDA) model, a self-training framework that mixes the data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS) and we then use a dynamic schedule to determine the mixing ratio for data in different domains.
Our proposed method is able to outperform the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and of 0.9 mIoU in the adaptation of SYNTHIA to Cityscapes.
arXiv Detail & Related papers (2023-03-05T18:16:34Z)
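The summary names two concrete ingredients: an expected confidence score (ECS) per class and a dynamic mixing ratio. A minimal sketch follows, with an EMA momentum and a ratio schedule that are illustrative assumptions rather than the paper's formulas:

```python
import torch

class ECSTracker:
    """Track a per-class expected confidence score (ECS) with an EMA and
    turn it into a mixing ratio that favours poorly-performing classes.
    Momentum and schedule are illustrative assumptions, not the paper's.
    """

    def __init__(self, num_classes, momentum=0.99):
        self.ecs = torch.ones(num_classes)
        self.momentum = momentum

    def update(self, probs, preds):
        # probs: (B, C, H, W) softmax outputs; preds: (B, H, W) argmax labels.
        conf = probs.max(dim=1).values
        for c in range(len(self.ecs)):
            mask = preds == c
            if mask.any():
                self.ecs[c] = self.momentum * self.ecs[c] + \
                    (1 - self.momentum) * conf[mask].mean()

    def mixing_ratio(self):
        # Classes with low ECS are sampled into cross-domain mixing more often.
        ratio = 1.0 - self.ecs
        return ratio / ratio.sum()
```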
- Improved Robustness Against Adaptive Attacks With Ensembles and Error-Correcting Output Codes [0.0]
We investigate the robustness of Error-Correcting Output Codes (ECOC) ensembles through architectural improvements and ensemble diversity promotion.
We perform a comprehensive robustness assessment against adaptive attacks and investigate the relationship between ensemble diversity and robustness.
arXiv Detail & Related papers (2023-03-04T05:05:17Z)
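For context on the ECOC mechanism itself (the standard scheme, not this paper's specific improvements): each ensemble member predicts one bit of a class codeword, and decoding picks the class whose codeword is nearest to the observed bit scores. A toy NumPy sketch with an assumed 4-class, 6-bit codebook:

```python
import numpy as np

def ecoc_predict(bit_scores, codebook):
    """Decode an ECOC ensemble by nearest codeword.

    bit_scores: (B, n_bits) member outputs in [0, 1].
    codebook:   (n_classes, n_bits) binary codewords (assumed given).
    """
    # L1 distance between soft bit scores and each class codeword.
    dists = np.abs(bit_scores[:, None, :] - codebook[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)  # (B,) predicted class indices

# Toy usage: 4 classes encoded with 6-bit codewords.
codebook = np.array([[0, 0, 0, 1, 1, 1],
                     [0, 1, 1, 0, 0, 1],
                     [1, 0, 1, 0, 1, 0],
                     [1, 1, 0, 1, 0, 0]])
scores = np.array([[0.1, 0.2, 0.1, 0.9, 0.8, 0.7]])  # near class 0's codeword
print(ecoc_predict(scores, codebook))                 # -> [0]
```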
- Revisiting GANs by Best-Response Constraint: Perspective, Methodology, and Application [49.66088514485446]
Best-Response Constraint (BRC) is a general learning framework to explicitly formulate the potential dependency of the generator on the discriminator.
We show that even with different motivations and formulations, a variety of existing GANs can all be uniformly improved by our flexible BRC methodology.
arXiv Detail & Related papers (2022-05-20T12:42:41Z)
- A Regularized Implicit Policy for Offline Reinforcement Learning [54.7427227775581]
Offline reinforcement learning enables learning from a fixed dataset, without further interactions with the environment.
We propose a framework that supports learning a flexible yet well-regularized fully-implicit policy.
Experiments and ablation study on the D4RL dataset validate our framework and the effectiveness of our algorithmic designs.
arXiv Detail & Related papers (2022-02-19T20:22:04Z)
- Holistic Deep Learning [3.718942345103135]
This paper presents a novel holistic deep learning framework that addresses the challenges of vulnerability to input perturbations, overparametrization, and performance instability.
The proposed framework holistically improves accuracy, robustness, sparsity, and stability over standard deep learning models.
arXiv Detail & Related papers (2021-10-29T14:46:32Z)