LatentDR: Improving Model Generalization Through Sample-Aware Latent
Degradation and Restoration
- URL: http://arxiv.org/abs/2308.14596v1
- Date: Mon, 28 Aug 2023 14:08:42 GMT
- Title: LatentDR: Improving Model Generalization Through Sample-Aware Latent
Degradation and Restoration
- Authors: Ran Liu, Sahil Khose, Jingyun Xiao, Lakshmi Sathidevi, Keerthan
Ramnath, Zsolt Kira, Eva L. Dyer
- Abstract summary: We propose a novel approach for distribution-aware latent augmentation.
Our approach first degrades the samples in the latent space, mapping them to augmented labels, and then restores the samples during training.
We show that our method can be flexibly adapted to long-tail recognition tasks, demonstrating its versatility in building more generalizable models.
- Score: 22.871920291497094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite significant advances in deep learning, models often struggle to
generalize well to new, unseen domains, especially when training data is
limited. To address this challenge, we propose a novel approach for
distribution-aware latent augmentation that leverages the relationships across
samples to guide the augmentation procedure. Our approach first degrades the
samples stochastically in the latent space, mapping them to augmented labels,
and then restores the samples from their corrupted versions during training.
This process confuses the classifier in the degradation step and restores the
overall class distribution of the original samples, promoting diverse
intra-class/cross-domain variability. We extensively evaluate our approach on a
diverse set of datasets and tasks, including domain generalization benchmarks
and medical imaging datasets with strong domain shift, where we show our
approach achieves significant improvements over existing methods for latent
space augmentation. We further show that our method can be flexibly adapted to
long-tail recognition tasks, demonstrating its versatility in building more
generalizable models. Code is available at
https://github.com/nerdslab/LatentDR.
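As a rough illustration of the degrade-then-restore recipe described in the abstract, the sketch below mixes each latent with that of a randomly paired sample as a stand-in for the paper's sample-aware degradation, trains the classifier on correspondingly mixed labels, and trains a small restoration head to pull degraded latents back toward the originals. The encoder, restorer, mixing scheme, and loss weighting are all illustrative assumptions; the authors' actual procedure is in the linked repository.
```python
# Minimal sketch of degrade-then-restore latent augmentation.
# The mixing-based degradation, module shapes, and loss weights are
# illustrative assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
restorer = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
classifier = nn.Linear(256, 10)

def latent_dr_step(x, y, alpha=0.4):
    z = encoder(x)                                   # clean latents
    perm = torch.randperm(z.size(0))                 # pair each sample with another
    lam = torch.distributions.Beta(alpha, alpha).sample((z.size(0), 1))
    z_deg = lam * z + (1.0 - lam) * z[perm]          # stochastic cross-sample degradation
    w = lam.squeeze(1)

    # Degraded latents get augmented (mixed) labels, confusing the classifier.
    logits_deg = classifier(z_deg)
    loss_deg = (w * F.cross_entropy(logits_deg, y, reduction="none")
                + (1.0 - w) * F.cross_entropy(logits_deg, y[perm], reduction="none")).mean()

    # Restoration pulls degraded latents back toward the originals;
    # restored latents keep their original labels.
    z_res = restorer(z_deg)
    loss_res = F.mse_loss(z_res, z.detach())
    loss_cls = F.cross_entropy(classifier(z_res), y)
    return loss_cls + loss_deg + loss_res

# Example: x = torch.randn(16, 3, 32, 32); y = torch.randint(0, 10, (16,))
# loss = latent_dr_step(x, y); loss.backward()
```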
Related papers
- Domain Expansion and Boundary Growth for Open-Set Single-Source Domain Generalization [70.02187124865627]
Open-set single-source domain generalization aims to use a single source domain to learn a robust model that can generalize to unknown target domains.
We propose a novel learning approach based on domain expansion and boundary growth to expand the scarce source samples.
Our approach can achieve significant improvements and reach state-of-the-art performance on several cross-domain image classification datasets.
arXiv Detail & Related papers (2024-11-05T09:08:46Z)
- FDS: Feedback-guided Domain Synthesis with Multi-Source Conditional Diffusion Models for Domain Generalization [19.0284321951354]
Domain Generalization techniques aim to enhance model robustness by simulating novel data distributions during training.
We propose FDS, Feedback-guided Domain Synthesis, a novel strategy that employs diffusion models to synthesize novel pseudo-domains.
Our evaluations demonstrate that this methodology sets new benchmarks in domain generalization performance across a range of challenging datasets.
arXiv Detail & Related papers (2024-07-04T02:45:29Z)
- DiffClass: Diffusion-Based Class Incremental Learning [30.514281721324853]
Class Incremental Learning (CIL) is challenging due to catastrophic forgetting.
Recent exemplar-free CIL methods attempt to mitigate catastrophic forgetting by synthesizing previous task data.
We propose a novel exemplar-free CIL method to overcome these issues.
arXiv Detail & Related papers (2024-03-08T03:34:18Z)
- Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method (see the sketch after this entry).
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
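The consistency-regularization principle referenced in the entry above is generic enough to sketch: predictions on a target sample and on a perturbed view of it should agree. The model and perturbation below are hypothetical placeholders; this is not the paper's specific framework.
```python
# Generic consistency regularization: penalize disagreement between the
# model's predictions on a sample and on a perturbed view of it.
import torch
import torch.nn.functional as F

def consistency_loss(model, x, perturb):
    with torch.no_grad():
        p_clean = F.softmax(model(x), dim=1)             # fixed "teacher" view
    log_p_pert = F.log_softmax(model(perturb(x)), dim=1)
    return F.kl_div(log_p_pert, p_clean, reduction="batchmean")
```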
- NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for deep learning.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance (a generic illustration follows this entry).
arXiv Detail & Related papers (2023-07-25T13:35:45Z)
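In its simplest generic form, the test-stage ensembling mentioned above reduces to averaging the predictions of the main and auxiliary paths; the two heads below are hypothetical stand-ins for NormAUG's normalization-guided paths.
```python
# Generic test-time ensemble over a main and an auxiliary prediction path.
import torch

@torch.no_grad()
def ensemble_predict(main_head, aux_head, features):
    probs = (torch.softmax(main_head(features), dim=1)
             + torch.softmax(aux_head(features), dim=1)) / 2.0  # average the two paths
    return probs.argmax(dim=1)
```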
- Target-Aware Generative Augmentations for Single-Shot Adaptation [21.840653627684855]
We propose a new approach to adapting models from a source domain to a target domain.
SiSTA fine-tunes a generative model from the source domain using a single-shot target, and then employs novel sampling strategies for curating synthetic target data.
We find that SiSTA produces significantly improved generalization over existing baselines in face detection and multi-class object recognition.
arXiv Detail & Related papers (2023-05-22T17:46:26Z)
- Feature Diversity Learning with Sample Dropout for Unsupervised Domain Adaptive Person Re-identification [0.0]
This paper proposes a new approach to learning feature representations with better generalization ability by limiting noisy pseudo-labels.
We put forward a new method, referred to as Feature Diversity Learning (FDL), under the classic mutual-teaching architecture.
Experimental results show that our proposed FDL-SD achieves state-of-the-art performance on multiple benchmark datasets.
arXiv Detail & Related papers (2022-01-25T10:10:48Z)
- Style Curriculum Learning for Robust Medical Image Segmentation [62.02435329931057]
Deep segmentation models often degrade due to distribution shifts in image intensities between the training and test data sets.
We propose a novel framework to ensure robust segmentation in the presence of such distribution shifts.
arXiv Detail & Related papers (2021-08-01T08:56:24Z)
- Distance-based Hyperspherical Classification for Multi-source Open-Set Domain Adaptation [34.97934677830779]
Vision systems trained in closed-world scenarios will inevitably fail when presented with new environmental conditions.
How to move towards open-world learning is a long-standing research question.
In this work we tackle multi-source Open-Set domain adaptation by introducing HyMOS.
arXiv Detail & Related papers (2021-07-05T14:56:57Z)
- Semi-Supervised Domain Generalization with Stochastic StyleMatch [90.98288822165482]
In real-world applications, we might have only a few labels available from each source domain due to high annotation cost.
In this work, we investigate semi-supervised domain generalization, a more realistic and practical setting.
Our proposed approach, StyleMatch, is inspired by FixMatch, a state-of-the-art semi-supervised learning method based on pseudo-labeling (sketched after this entry).
arXiv Detail & Related papers (2021-06-01T16:00:08Z)
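FixMatch's core recipe, which StyleMatch builds on, is well known: pseudo-label a weakly augmented view, keep the label only when the model is confident, and train the strongly augmented view against it. The sketch below shows that recipe with hypothetical model and augmentation callables; StyleMatch's style-based extensions are not reproduced here.
```python
# FixMatch-style pseudo-labeling on unlabeled data: the weak view produces a
# pseudo-label, kept only above a confidence threshold; the strong view trains on it.
import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_unlabeled, weak_aug, strong_aug, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x_unlabeled)), dim=1)
        conf, pseudo = probs.max(dim=1)          # confidence and pseudo-label
        mask = (conf >= threshold).float()       # keep only confident samples
    logits = model(strong_aug(x_unlabeled))
    per_sample = F.cross_entropy(logits, pseudo, reduction="none")
    return (mask * per_sample).mean()
```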