Practical Assessment of Generalization Performance Robustness for Deep
Networks via Contrastive Examples
- URL: http://arxiv.org/abs/2106.10653v1
- Date: Sun, 20 Jun 2021 08:46:01 GMT
- Title: Practical Assessment of Generalization Performance Robustness for Deep
Networks via Contrastive Examples
- Authors: Xuanyu Wu, Xuhong Li, Haoyi Xiong, Xiao Zhang, Siyu Huang, Dejing Dou
- Abstract summary: Training images with data transformations have been suggested as contrastive examples to complement the testing set for generalization performance evaluation of deep neural networks (DNNs).
In this work, we propose a practical framework ContRE that uses Contrastive examples for DNN geneRalization performance Estimation.
- Score: 36.50563671470897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training images with data transformations have been suggested as contrastive
examples to complement the testing set for generalization performance
evaluation of deep neural networks (DNNs). In this work, we propose a practical
framework ContRE (The word "contre" means "against" or "versus" in French.)
that uses Contrastive examples for DNN geneRalization performance Estimation.
Specifically, ContRE follows the assumption in contrastive learning that robust
DNN models with good generalization performance are capable of extracting a
consistent set of features and making consistent predictions from the same
image under varying data transformations. Incorporating a set of randomized
strategies for well-designed data transformations over the training set, ContRE
adopts classification errors and Fisher ratios on the generated contrastive
examples to assess and analyze the generalization performance of deep models in
complement with a testing set. To show the effectiveness and efficiency of
ContRE, extensive experiments have been done using various DNN models on three
open-source benchmark datasets with thorough ablation studies and applicability
analyses. Our experimental results confirm that (1) the behaviors of deep
models on contrastive examples are strongly correlated with those on the
testing set, and (2) ContRE is a robust measure of generalization performance
that complements the testing set in various settings.
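The two quantities ContRE measures, classification error and Fisher ratio on transformed training images, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `fisher_ratio` uses the standard between-class/within-class scatter ratio, and `contrastive_error` assumes a hypothetical randomized `transform` (e.g. a RandAugment-style augmentation) supplied by the caller.

```python
import numpy as np

def fisher_ratio(features, labels):
    """Between-class scatter divided by within-class scatter of features."""
    global_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        fc = features[labels == c]
        mc = fc.mean(axis=0)
        between += len(fc) * np.sum((mc - global_mean) ** 2)
        within += np.sum((fc - mc) ** 2)
    return between / within

def contrastive_error(model, images, labels, transform, n_rounds=5, rng=None):
    """Mean classification error over randomized contrastive copies of the
    training images; `transform` is a caller-supplied randomized augmentation."""
    rng = rng or np.random.default_rng(0)
    errors = []
    for _ in range(n_rounds):
        contrastive = transform(images, rng)        # generate contrastive examples
        preds = model(contrastive).argmax(axis=1)   # model returns class logits
        errors.append(np.mean(preds != labels))
    return float(np.mean(errors))
```

A higher Fisher ratio and a lower contrastive error on the transformed training set would, under ContRE's assumption, indicate better generalization on the held-out testing set.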
Related papers
- Towards Robust Out-of-Distribution Generalization: Data Augmentation and Neural Architecture Search Approaches [4.577842191730992]
We study ways toward robust OoD generalization for deep learning.
We first propose a novel and effective approach to disentangle the spurious correlation between features that are not essential for recognition.
We then study the problem of strengthening neural architecture search in OoD scenarios.
arXiv Detail & Related papers (2024-10-25T20:50:32Z)
- Cross-functional Analysis of Generalisation in Behavioural Learning [4.0810783261728565]
We introduce BeLUGA, an analysis method for evaluating behavioural learning considering generalisation across dimensions of different levels.
An aggregate score measures generalisation to unseen functionalities (or overfitting).
arXiv Detail & Related papers (2023-05-22T11:54:19Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Contrastive variational information bottleneck for aspect-based sentiment analysis [36.83876224466177]
We propose to reduce spurious correlations for aspect-based sentiment analysis (ABSA) via a novel Contrastive Variational Information Bottleneck framework (called CVIB).
The proposed CVIB framework is composed of an original network and a self-pruned network, and these two networks are optimized simultaneously via contrastive learning.
Our approach achieves better performance than the strong competitors in terms of overall prediction performance, robustness, and generalization.
arXiv Detail & Related papers (2023-03-06T02:52:37Z)
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC).
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
- Rethinking Prototypical Contrastive Learning through Alignment, Uniformity and Correlation [24.794022951873156]
We propose to learn Prototypical representation through Alignment, Uniformity and Correlation (PAUC).
Specifically, the ordinary ProtoNCE loss is revised with: (1) an alignment loss that pulls embeddings from positive prototypes together; (2) a loss that distributes the prototypical level features uniformly; (3) a correlation loss that increases the diversity and discriminability between prototypical level features.
arXiv Detail & Related papers (2022-10-18T22:33:12Z)
- A Cognitive Study on Semantic Similarity Analysis of Large Corpora: A Transformer-based Approach [0.0]
We perform semantic similarity analysis and modeling on the U.S. Patent Phrase to Phrase Matching dataset using both traditional and transformer-based techniques.
The experimental results demonstrate our methodology's enhanced performance compared to traditional techniques, with an average Pearson correlation score of 0.79.
arXiv Detail & Related papers (2022-07-24T11:06:56Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Leveraging Expert Guided Adversarial Augmentation For Improving Generalization in Named Entity Recognition [50.85774164546487]
Named Entity Recognition (NER) systems often demonstrate great performance on in-distribution data, but perform poorly on examples drawn from a shifted distribution.
We propose leveraging expert-guided heuristics to change the entity tokens and their surrounding contexts, thereby altering their entity types, as adversarial attacks.
We found that state-of-the-art NER systems trained on CoNLL 2003 training data drop performance dramatically on our challenging set.
arXiv Detail & Related papers (2022-03-21T01:21:12Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable method for semi-implicit variational inference (SIVI).
Our method optimizes a rigorous lower bound on the model evidence.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all listed summaries) and is not responsible for any consequences of its use.