On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization
- URL: http://arxiv.org/abs/2307.08551v1
- Date: Mon, 17 Jul 2023 15:31:58 GMT
- Title: On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization
- Authors: Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, and Jihun Hamm
- Abstract summary: State-of-the-art domain generalization (DG) classifiers have shown impressive performance across various tasks.
But they have shown a bias towards domain-dependent information, such as image styles, rather than domain-invariant information, such as image content.
This bias renders them unreliable for deployment in risk-sensitive scenarios such as autonomous driving.
We propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.
- Score: 25.618051317035164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Achieving high accuracy on data from domains unseen during training is a
fundamental challenge in domain generalization (DG). While state-of-the-art DG
classifiers have demonstrated impressive performance across various tasks, they
have shown a bias towards domain-dependent information, such as image styles,
rather than domain-invariant information, such as image content. This bias
renders them unreliable for deployment in risk-sensitive scenarios such as
autonomous driving where a misclassification could lead to catastrophic
consequences. To enable risk-averse predictions from a DG classifier, we
propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS),
that uses a "style-smoothed" version of the DG classifier for prediction at
test time. Specifically, the style-smoothed classifier classifies a test image
as the most probable class predicted by the DG classifier on random
re-stylizations of the test image. TT-NSS uses a neural style transfer module
to stylize a test image on the fly, requires only black-box access to the DG
classifier, and crucially, abstains when predictions of the DG classifier on
the stylized test images lack consensus. Additionally, we propose a neural
style smoothing (NSS) based training procedure that can be seamlessly
integrated with existing DG methods. This procedure enhances prediction
consistency, improving the performance of TT-NSS on non-abstained samples. Our
empirical results demonstrate the effectiveness of TT-NSS and NSS at producing
and improving risk-averse predictions on unseen domains from DG classifiers
trained with SOTA training methods on various benchmark datasets and their
variations.
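The TT-NSS inference rule described in the abstract (predict the most probable class over random re-stylizations of the test image, abstaining when those predictions lack consensus) can be sketched as follows. The `classifier` and `stylizer` callables, the style list, and the consensus `threshold` are illustrative assumptions for this sketch, not the paper's actual interfaces:

```python
def tt_nss_predict(classifier, stylizer, image, styles, threshold=0.6):
    """Sketch of Test-Time Neural Style Smoothing (TT-NSS) inference.

    `classifier` is treated as a black box mapping an image to a label;
    `stylizer` stands in for the neural style transfer module that
    re-stylizes the test image on the fly. Both are hypothetical
    interfaces assumed for illustration.
    """
    votes = {}
    for style in styles:
        stylized = stylizer(image, style)   # random re-stylization
        label = classifier(stylized)        # black-box DG prediction
        votes[label] = votes.get(label, 0) + 1

    # Style-smoothed prediction: the most frequently predicted class.
    top_label, top_count = max(votes.items(), key=lambda kv: kv[1])

    # Abstain when predictions on the stylized copies lack consensus.
    if top_count / len(styles) < threshold:
        return None  # abstain (risk-averse: defer instead of guessing)
    return top_label
```

The abstention branch is what makes the prediction risk-averse: rather than committing to a low-consensus label, the procedure defers, which matters in the safety-critical deployments the abstract mentions.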
Related papers
- Test-Time Domain Generalization for Face Anti-Spoofing [60.94384914275116]
Face Anti-Spoofing (FAS) is pivotal in safeguarding facial recognition systems against presentation attacks.
We introduce a novel Test-Time Domain Generalization framework for FAS, which leverages the testing data to boost the model's generalizability.
Our method, consisting of Test-Time Style Projection (TTSP) and Diverse Style Shifts Simulation (DSSS), effectively projects the unseen data to the seen domain space.
arXiv Detail & Related papers (2024-03-28T11:50:23Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study the practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Continual Test-time Domain Adaptation via Dynamic Sample Selection [38.82346845855512]
This paper proposes a Dynamic Sample Selection (DSS) method for Continual Test-time Domain Adaptation (CTDA).
We apply joint positive and negative learning on both high- and low-quality samples to reduce the risk of using wrong information.
Our approach is also evaluated in the 3D point cloud domain, showcasing its versatility and potential for broader applicability.
arXiv Detail & Related papers (2023-10-05T06:35:21Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP).
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, and makes the model learn against being misled by the unexpected styles observed in unseen target domains.
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially in large-scale benchmarks.
arXiv Detail & Related papers (2023-04-04T17:07:06Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Invariant Content Synergistic Learning for Domain Generalization of Medical Image Segmentation [13.708239594165061]
Deep convolutional neural networks (DCNNs) often fail to maintain their robustness when confronted with test data from novel distributions.
In this paper, we propose a method, named Invariant Content Synergistic Learning (ICSL), to improve the generalization ability of DCNNs.
arXiv Detail & Related papers (2022-05-05T08:13:17Z)
- General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model like gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.