On the Impact of Interpretability Methods in Active Image Augmentation Method
- URL: http://arxiv.org/abs/2102.12354v1
- Date: Wed, 24 Feb 2021 15:40:54 GMT
- Title: On the Impact of Interpretability Methods in Active Image Augmentation Method
- Authors: Flavio Santos, Cleber Zanchettin, Leonardo Matos, and Paulo Novais
- Abstract summary: We propose an experimental analysis of the interpretability methods' impact on ADA.
We use five interpretability methods: Vanilla Backpropagation, Guided Backpropagation, GradCam, Guided GradCam, and InputXGradient.
The results show that all methods achieve similar performance at the end of training, but combining ADA with GradCam gives the U-Net model impressively fast convergence.
- Score: 2.740398518066079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robustness is a significant constraint in machine learning models. The performance of an algorithm must not deteriorate when it is trained and tested on slightly different data. Deep neural network models achieve impressive results in a wide range of computer vision applications. Still, in the presence of noise or region occlusion, some models perform poorly even on data seen during training. Moreover, some experiments suggest that deep learning models sometimes use the wrong parts of the input to perform inference. Active Image Augmentation (ADA) is an augmentation method that uses interpretability methods to augment the training data and improve model robustness against the problems described above. Although ADA showed interesting results, its original version only used Vanilla Backpropagation as the interpretability method to train the U-Net model. In this work, we propose an extensive experimental analysis of the interpretability methods' impact on ADA. We use five interpretability methods: Vanilla Backpropagation, Guided Backpropagation, GradCam, Guided GradCam, and InputXGradient. The results show that all methods achieve similar performance at the end of training, but combining ADA with GradCam gives the U-Net model impressively fast convergence.
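To make the idea concrete, below is a minimal, hypothetical sketch of an ADA-style augmentation step in plain PyTorch. It computes a Vanilla Backpropagation saliency map and occludes the most salient pixels to produce a harder training sample. The function names, the occlusion-by-quantile rule, and the training-loop variables (`unet`, `seg_loss`, `images`, `masks`) are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical ADA-style augmentation step (plain PyTorch sketch, not the
# authors' implementation). Any interpretability method could supply the
# saliency map; Vanilla Backpropagation is shown because it is the simplest.
import torch

def vanilla_saliency(model, images, targets, loss_fn):
    """Per-pixel importance via the gradient of the loss w.r.t. the input."""
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), targets)
    loss.backward()
    model.zero_grad(set_to_none=True)  # discard parameter gradients from this pass
    return images.grad.detach().abs().sum(dim=1, keepdim=True)  # shape (B, 1, H, W)

def ada_augment(images, saliency, quantile=0.9):
    """Occlude the most salient pixels to build a harder training sample."""
    threshold = torch.quantile(saliency.flatten(1), quantile, dim=1)
    keep = saliency <= threshold.view(-1, 1, 1, 1)  # True for low-saliency pixels
    return images * keep                            # zero out the salient regions

# Illustrative use inside a training loop (variable names are assumptions):
# sal = vanilla_saliency(unet, images, masks, seg_loss)
# augmented = ada_augment(images, sal)
# train on torch.cat([images, augmented]) with the original targets duplicated
```

Swapping `vanilla_saliency` for a GradCam attribution would correspond to the configuration the paper reports as converging fastest.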
Related papers
- Make Prompts Adaptable: Bayesian Modeling for Vision-Language Prompt Learning with Data-Dependent Prior [14.232144691524528]
Recent Vision-Language Pretrained models have become the backbone for many downstream tasks.
MLE training can lead the context vector to over-fit dominant image features in the training data.
This paper presents a Bayesian framework for prompt learning, which can alleviate overfitting in few-shot learning applications.
arXiv Detail & Related papers (2024-01-09T10:15:59Z)
- Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF), which learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z)
- Texture-Based Input Feature Selection for Action Recognition [3.9596068699962323]
We propose a novel method to identify the task-irrelevant content in inputs that increases the domain discrepancy.
We show that our proposed model is superior to existing models for action recognition on the HMDB-51 dataset and the Penn Action dataset.
arXiv Detail & Related papers (2023-02-28T23:56:31Z)
- Effective Robustness against Natural Distribution Shifts for Models with Different Training Data [113.21868839569]
"Effective robustness" measures the extra out-of-distribution robustness beyond what can be predicted from the in-distribution (ID) performance.
We propose a new evaluation metric to evaluate and compare the effective robustness of models trained on different data.
arXiv Detail & Related papers (2023-02-02T19:28:41Z)
- Leveraging Angular Information Between Feature and Classifier for Long-tailed Learning: A Prediction Reformulation Approach [90.77858044524544]
We reformulate the recognition probabilities through included angles without re-balancing the classifier weights.
Inspired by the performance improvement of the predictive form reformulation, we explore the different properties of this angular prediction.
Our method is able to obtain the best performance among peer methods without pretraining on CIFAR10/100-LT and ImageNet-LT.
arXiv Detail & Related papers (2022-12-03T07:52:48Z)
- CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z)
- Learning to Reweight with Deep Interactions [104.68509759134878]
We propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model.
Experiments on image classification with clean/noisy labels and on neural machine translation empirically demonstrate that our algorithm significantly improves over previous methods.
arXiv Detail & Related papers (2020-07-09T09:06:31Z)
- Auto-Rectify Network for Unsupervised Indoor Depth Estimation [119.82412041164372]
We establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth.
We propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning.
Our method outperforms the previous unsupervised SOTA by a large margin on the challenging NYUv2 dataset.
arXiv Detail & Related papers (2020-06-04T08:59:17Z)
- PrIU: A Provenance-Based Approach for Incrementally Updating Regression Models [9.496524884855559]
This paper presents an efficient provenance-based approach, PrIU, for incrementally updating model parameters without sacrificing prediction accuracy.
We prove the correctness and convergence of the incrementally updated model parameters, and validate it experimentally.
Experimental results show that PrIU-opt achieves speed-ups of up to two orders of magnitude compared to simply retraining the model from scratch, while producing highly similar models.
arXiv Detail & Related papers (2020-02-26T21:04:06Z)