Entropy Guided Adversarial Model for Weakly Supervised Object
Localization
- URL: http://arxiv.org/abs/2008.01786v1
- Date: Tue, 4 Aug 2020 19:39:12 GMT
- Title: Entropy Guided Adversarial Model for Weakly Supervised Object
Localization
- Authors: Sabrina Narimene Benassou, Wuzhen Shi, Feng Jiang
- Abstract summary: We propose to apply the Shannon entropy to the CAMs generated by the network to guide it during training.
Our method does not erase any part of the image, nor does it change the network architecture.
Our Entropy Guided Adversarial model (EGA model) improves performance on state-of-the-art benchmarks for both localization and classification accuracy.
- Score: 11.77745060973134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Weakly Supervised Object Localization is challenging because of the lack of
bounding box annotations. Previous works tend to generate a class activation
map, i.e., a CAM, to localize the object. Unfortunately, the network activates only
the features that discriminate the object and does not activate the whole
object. Some methods remove parts of the object to force the CNN
to detect other features, whereas others change the network structure to
generate multiple CAMs from different levels of the model. In this
article, we propose to take advantage of the generalization ability of the
article, we propose to take advantage of the generalization ability of the
network and train the model using clean examples and adversarial examples to
localize the whole object. Adversarial examples are typically used to train
robust models and are images where a perturbation is added. To get a good
classification accuracy, the CNN trained with adversarial examples is forced to
detect more features that discriminate the object. We further propose to apply
the Shannon entropy to the CAMs generated by the network to guide it during
training. Our method does not erase any part of the image, nor does it
change the network architecture, and extensive experiments show that our Entropy
Guided Adversarial model (EGA model) improves performance on state-of-the-art
benchmarks for both localization and classification accuracy.
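The two ideas above, scoring a CAM with Shannon entropy and training on perturbed inputs, can be sketched numerically. This is a minimal NumPy illustration, not the authors' released code; the function names, the non-negative normalization of the CAM, and the FGSM-style perturbation step are assumptions made for illustration only.

```python
import numpy as np

def cam_entropy(cam, eps=1e-8):
    """Shannon entropy of a CAM treated as a spatial probability map."""
    p = np.maximum(cam, 0.0)     # keep non-negative activations (assumption)
    p = p / (p.sum() + eps)      # normalize to a distribution over positions
    return float(-(p * np.log(p + eps)).sum())

def fgsm_perturb(image, grad, epsilon=0.03):
    """FGSM-style adversarial example: step along the sign of the loss gradient."""
    return np.clip(image + epsilon * np.sign(grad), 0.0, 1.0)

# A CAM concentrated on a few pixels has lower entropy than one spread
# over the whole object, so an entropy term can penalize maps that
# activate only the most discriminative part.
focused = np.zeros((7, 7)); focused[3, 3] = 1.0
spread = np.ones((7, 7))
print(cam_entropy(focused) < cam_entropy(spread))
```

In a real training loop the gradient passed to `fgsm_perturb` would come from backpropagating the classification loss to the input, and the entropy term would enter the objective alongside that loss.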
Related papers
- Robust Change Detection Based on Neural Descriptor Fields [53.111397800478294]
We develop an object-level online change detection approach that is robust to partially overlapping observations and noisy localization results.
By associating objects via shape code similarity and comparing local object-neighbor spatial layout, our proposed approach demonstrates robustness to low observation overlap and localization noises.
arXiv Detail & Related papers (2022-08-01T17:45:36Z) - Experience feedback using Representation Learning for Few-Shot Object
Detection on Aerial Images [2.8560476609689185]
The performance of our method is assessed on DOTA, a large-scale remote sensing images dataset.
In particular, it highlights some intrinsic weaknesses of the few-shot object detection task.
arXiv Detail & Related papers (2021-09-27T13:04:53Z) - Object-aware Contrastive Learning for Debiased Scene Representation [74.30741492814327]
We develop a novel object-aware contrastive learning framework that localizes objects in a self-supervised manner.
We also introduce two data augmentations based on ContraCAM, object-aware random crop and background mixup, which reduce contextual and background biases during contrastive self-supervised learning.
arXiv Detail & Related papers (2021-07-30T19:24:07Z) - Instance Localization for Self-supervised Detection Pretraining [68.24102560821623]
We propose a new self-supervised pretext task, called instance localization.
We show that integration of bounding boxes into pretraining promotes better task alignment and architecture alignment for transfer learning.
Experimental results demonstrate that our approach yields state-of-the-art transfer learning results for object detection.
arXiv Detail & Related papers (2021-02-16T17:58:57Z) - Hierarchical Complementary Learning for Weakly Supervised Object
Localization [12.104019927107517]
Weakly supervised object localization (WSOL) is a challenging problem which aims to localize objects with only image-level labels.
This paper proposes a Hierarchical Complementary Learning Network method (HCLNet) that helps the CNN to perform better classification and localization of objects on the images.
arXiv Detail & Related papers (2020-11-16T14:58:51Z) - Synthesizing the Unseen for Zero-shot Object Detection [72.38031440014463]
We propose to synthesize visual features for unseen classes, so that the model learns both seen and unseen objects in the visual domain.
We use a novel generative model that uses class-semantics to not only generate the features but also to discriminatively separate them.
arXiv Detail & Related papers (2020-10-19T12:36:11Z) - Understanding the Role of Individual Units in a Deep Neural Network [85.23117441162772]
We present an analytic framework to systematically identify hidden units within image classification and image generation networks.
First, we analyze a convolutional neural network (CNN) trained on scene classification and discover units that match a diverse set of object concepts.
Second, we use a similar analytic method to analyze a generative adversarial network (GAN) model trained to generate scenes.
arXiv Detail & Related papers (2020-09-10T17:59:10Z) - Eigen-CAM: Class Activation Map using Principal Components [1.2691047660244335]
This paper builds on previous ideas to cope with the increasing demand for interpretable, robust, and transparent models.
The proposed Eigen-CAM computes and visualizes the principal components of the learned features/representations from the convolutional layers.
arXiv Detail & Related papers (2020-08-01T17:14:13Z) - Synthesizing Unrestricted False Positive Adversarial Objects Using
Generative Models [0.0]
Adversarial examples are data points misclassified by neural networks.
Recent work introduced the concept of unrestricted adversarial examples.
We introduce a new category of attacks that create unrestricted adversarial examples for object detection.
arXiv Detail & Related papers (2020-05-19T08:58:58Z) - Improving Few-shot Learning by Spatially-aware Matching and
CrossTransformer [116.46533207849619]
We study the impact of scale and location mismatch in the few-shot learning scenario.
We propose a novel Spatially-aware Matching scheme to effectively perform matching across multiple scales and locations.
arXiv Detail & Related papers (2020-01-06T14:10:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.