Are there any 'object detectors' in the hidden layers of CNNs trained to
identify objects or scenes?
- URL: http://arxiv.org/abs/2007.01062v1
- Date: Thu, 2 Jul 2020 12:33:37 GMT
- Title: Are there any 'object detectors' in the hidden layers of CNNs trained to
identify objects or scenes?
- Authors: Ella M. Gale and Nicholas Martin and Ryan Blything and Anh Nguyen and
Jeffrey S. Bowers
- Abstract summary: We compare various measures on a large set of units in AlexNet.
We find that the different measures provide different estimates of object selectivity.
We fail to find any units that are even remotely as selective as the 'grandmother cell' units reported in recurrent neural networks.
- Score: 5.718442081858377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various methods of measuring unit selectivity have been developed with the
aim of better understanding how neural networks work. But the different
measures provide divergent estimates of selectivity, and this has led to
different conclusions regarding the conditions in which selective object
representations are learned and the functional relevance of these
representations. In an attempt to better characterize object selectivity, we
undertake a comparison of various selectivity measures on a large set of units
in AlexNet, including localist selectivity, precision, class-conditional mean
activity selectivity (CCMAS), network dissection, the human interpretation of
activation maximization (AM) images, and standard signal-detection measures. We
find that the different measures provide different estimates of object
selectivity, with precision and CCMAS measures providing misleadingly high
estimates. Indeed, the most selective units had a poor hit-rate or a high
false-alarm rate (or both) in object classification, making them poor object
detectors. We fail to find any units that are even remotely as selective as the
'grandmother cell' units reported in recurrent neural networks. In order to
generalize these results, we compared selectivity measures on units in VGG-16
and GoogLeNet trained on the ImageNet or Places-365 datasets that have been
described as 'object detectors'. Again, we find poor hit-rates and high
false-alarm rates for object classification. We conclude that signal-detection
measures provide a better assessment of single-unit selectivity compared to
common alternative approaches, and that deep convolutional networks trained for
image classification do not learn object detectors in their hidden layers.
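The signal-detection measures the abstract favors reduce, for a single unit, to computing a hit rate and a false-alarm rate when the unit is treated as a thresholded detector for one class. A minimal sketch of that computation (the threshold, the toy activation distributions, and the function name are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def signal_detection_scores(activations, is_target, threshold):
    """Hit rate and false-alarm rate for a single unit treated as a
    binary detector for one class at a fixed activation threshold.

    activations: 1-D array of unit activations, one per image.
    is_target:   boolean array, True where the image shows the class.
    threshold:   activation above which the unit is said to 'respond'.
    """
    responds = activations > threshold
    hits = np.logical_and(responds, is_target).sum()
    false_alarms = np.logical_and(responds, ~is_target).sum()
    hit_rate = hits / is_target.sum()
    false_alarm_rate = false_alarms / (~is_target).sum()
    return hit_rate, false_alarm_rate

# Toy example: a unit whose activations on class images overlap heavily
# with its activations on other images -- a poor object detector.
rng = np.random.default_rng(0)
target = rng.normal(2.0, 1.0, 100)      # activations on class images
nontarget = rng.normal(1.5, 1.0, 900)   # activations on other images
acts = np.concatenate([target, nontarget])
labels = np.concatenate([np.ones(100, bool), np.zeros(900, bool)])

hr, far = signal_detection_scores(acts, labels, threshold=2.0)
```

A unit counts as a good detector only when the hit rate is high *and* the false-alarm rate is low; the abstract's finding is that the most "selective" units under precision or CCMAS fail one or both of these criteria.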
Related papers
- Few-Shot Object Detection with Sparse Context Transformers [37.106378859592965]
Few-shot detection is a major task in pattern recognition which seeks to localize objects using models trained with only a few labeled samples.
We propose a novel sparse context transformer (SCT) that effectively leverages object knowledge in the source domain, and automatically learns a sparse context from only few training images in the target domain.
We evaluate the proposed method on two challenging few-shot object detection benchmarks, and empirical results show that the proposed method obtains competitive performance compared to the related state-of-the-art.
arXiv Detail & Related papers (2024-02-14T17:10:01Z) - Classification Committee for Active Deep Object Detection [24.74839931613233]
We propose a classification committee method for active deep object detection.
The committee selects the most informative images according to their uncertainty values.
We show that our method outperforms the state-of-the-art active learning methods.
arXiv Detail & Related papers (2023-08-16T16:31:36Z) - Much Easier Said Than Done: Falsifying the Causal Relevance of Linear
Decoding Methods [1.3999481573773074]
Linear classifier probes identify highly selective units as the most important for network function.
In spite of the absence of ablation effects for selective neurons, linear decoding methods can be effectively used to interpret network function.
More specifically, we find that an interaction between selectivity and the average activity of the unit better predicts ablation performance deficits for groups of units in AlexNet, VGG16, MobileNetV2, and ResNet101.
arXiv Detail & Related papers (2022-11-08T16:43:02Z) - R(Det)^2: Randomized Decision Routing for Object Detection [64.48369663018376]
We propose a novel approach to combine decision trees and deep neural networks in an end-to-end learning manner for object detection.
To facilitate effective learning, we propose randomized decision routing with node selective and associative losses.
We name this approach randomized decision routing for object detection, abbreviated as R(Det)$^2$.
arXiv Detail & Related papers (2022-04-02T07:54:58Z) - Uncertainty Aware Proposal Segmentation for Unknown Object Detection [13.249453757295083]
This paper proposes to exploit additional predictions of semantic segmentation models and to quantify their confidences.
We use object proposals generated by a Region Proposal Network (RPN) and adapt distance-aware uncertainty estimation from semantic segmentation.
The augmented object proposals are then used to train a classifier for known vs. unknown objects categories.
arXiv Detail & Related papers (2021-11-25T01:53:05Z) - Discovery-and-Selection: Towards Optimal Multiple Instance Learning for
Weakly Supervised Object Detection [86.86602297364826]
We propose a discovery-and-selection approach fused with multiple instance learning (DS-MIL).
Our proposed DS-MIL approach can consistently improve the baselines, reporting state-of-the-art performance.
arXiv Detail & Related papers (2021-10-18T07:06:57Z) - Multi-Source Domain Adaptation for Object Detection [52.87890831055648]
We propose a unified Faster R-CNN based framework, termed Divide-and-Merge Spindle Network (DMSN)
DMSN can simultaneously enhance domain invariance and preserve discriminative power.
We develop a novel pseudo learning algorithm to approximate optimal parameters of pseudo target subset.
arXiv Detail & Related papers (2021-06-30T03:17:20Z) - Aligning Pretraining for Detection via Object-Level Contrastive Learning [57.845286545603415]
Image-level contrastive representation learning has proven to be highly effective as a generic model for transfer learning.
We argue that this could be sub-optimal and thus advocate a design principle which encourages alignment between the self-supervised pretext task and the downstream task.
Our method, called Selective Object COntrastive learning (SoCo), achieves state-of-the-art results for transfer performance on COCO detection.
arXiv Detail & Related papers (2021-06-04T17:59:52Z) - Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z) - On the relationship between class selectivity, dimensionality, and
robustness [25.48362370177062]
We investigate whether class selectivity confers robustness (or vulnerability) to perturbations of input data.
We found that mean class selectivity predicts vulnerability to naturalistic corruptions.
We found that class selectivity increases robustness to multiple types of gradient-based adversarial attacks.
arXiv Detail & Related papers (2020-07-08T21:24:45Z) - Scope Head for Accurate Localization in Object Detection [135.9979405835606]
We propose a novel detector, coined ScopeNet, which models the anchors at each location as mutually dependent.
With our concise and effective design, the proposed ScopeNet achieves state-of-the-art results on COCO.
arXiv Detail & Related papers (2020-05-11T04:00:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.