Cascade Learning Localises Discriminant Features in Visual Scene Classification
- URL: http://arxiv.org/abs/2311.12704v2
- Date: Thu, 30 Nov 2023 12:07:23 GMT
- Title: Cascade Learning Localises Discriminant Features in Visual Scene Classification
- Authors: Junwen Wang and Katayoun Farrahi
- Abstract summary: We show that a layer-wise learning strategy, namely cascade learning (CL), results in more localised features.
Considering localisation accuracy, we show not only that CL outperforms E2E but also that it is a promising method for predicting regions of interest.
- Score: 2.8852708354106507
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lack of interpretability of deep convolutional neural networks (DCNN) is a
well-known problem, particularly in the medical domain, where clinicians want
trustworthy automated decisions. One way to improve trust is to demonstrate the
localisation of feature representations with respect to expert-labelled regions
of interest. In this work, we investigate the localisation of features learned
via two different learning paradigms and demonstrate the superiority of one
learning approach with respect to localisation. Our analysis of medical and
natural image datasets shows that the traditional end-to-end (E2E) learning
strategy has a limited ability to localise discriminative features across
multiple network layers. We show that a layer-wise learning strategy, namely
cascade learning (CL), results in more localised features. Considering
localisation accuracy, we show not only that CL outperforms E2E but also that
it is a promising method for predicting regions of interest. On the YOLO object
detection framework, our best result shows that CL outperforms the E2E scheme
by $2\%$ in mAP.
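To make the distinction between the two training strategies concrete, the sketch below contrasts cascade (layer-wise) learning, where each new convolutional block is trained through a temporary auxiliary classifier while all earlier blocks stay frozen, with standard end-to-end training, where every layer is updated jointly from a single loss. This is a minimal illustration under assumed settings: the block design, auxiliary heads, channel widths, and optimiser hyperparameters are placeholders, not the configuration used by the authors.

```python
# Minimal sketch of cascade (layer-wise) learning vs. end-to-end training.
# Architecture, auxiliary heads, and hyperparameters are illustrative
# assumptions, not the configuration reported in the paper.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )


def aux_head(channels, num_classes):
    # Temporary classifier attached to the block currently being trained.
    return nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(channels, num_classes))


def train_cascade(loader, channels=(3, 16, 32, 64), num_classes=10, epochs=1):
    blocks = nn.ModuleList()
    criterion = nn.CrossEntropyLoss()
    for i in range(len(channels) - 1):
        block = conv_block(channels[i], channels[i + 1])
        head = aux_head(channels[i + 1], num_classes)
        # Only the new block and its auxiliary head receive gradients.
        opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()),
                               lr=1e-3)
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():        # earlier blocks are frozen
                    for b in blocks:
                        x = b(x)
                loss = criterion(head(block(x)), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        for p in block.parameters():         # freeze before moving on
            p.requires_grad_(False)
        block.eval()
        blocks.append(block)
    return blocks


def train_end_to_end(loader, channels=(3, 16, 32, 64), num_classes=10, epochs=3):
    # Baseline: all layers are updated jointly from a single loss.
    model = nn.Sequential(*[conv_block(channels[i], channels[i + 1])
                            for i in range(len(channels) - 1)],
                          aux_head(channels[-1], num_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            loss = criterion(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

In the cascade loop, gradients never reach previously trained blocks, so each new block must extract class-discriminative evidence from whatever the frozen stack below it provides; this is the behaviour the paper associates with more localised intermediate features.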
Related papers
- Exploiting CLIP for Zero-shot HOI Detection Requires Knowledge Distillation at Multiple Levels [52.50670006414656]
We employ CLIP, a large-scale pre-trained vision-language model, for knowledge distillation on multiple levels.
To train our model, CLIP is utilized to generate HOI scores for both global images and local union regions.
The model achieves strong performance, which is even comparable to some fully-supervised and weakly-supervised methods.
arXiv Detail & Related papers (2023-09-10T16:27:54Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective (a minimal Grad-CAM localisation sketch is given after this list).
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Learning Consistency from High-quality Pseudo-labels for Weakly Supervised Object Localization [7.602783618330373]
We propose a two-stage approach to learn more consistent localization.
In the first stage, we propose a mask-based pseudo-label generation algorithm and use the pseudo-supervised learning method to initialize an object localization network.
In the second stage, we propose a simple and effective method for evaluating the confidence of pseudo-labels based on classification discrimination.
arXiv Detail & Related papers (2022-03-18T09:05:51Z)
- Region-Based Semantic Factorization in GANs [67.90498535507106]
We present a highly efficient algorithm to factorize the latent semantics learned by Generative Adversarial Networks (GANs) with respect to an arbitrary image region.
Through an appropriately defined generalized Rayleigh quotient, we solve such a problem without any annotations or training.
Experimental results on various state-of-the-art GAN models demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-02-19T17:46:02Z)
- GCA-Net: Utilizing Gated Context Attention for Improving Image Forgery Localization and Detection [0.9883261192383611]
We propose a novel Gated Context Attention Network (GCA-Net) that utilizes a non-local attention block for global context learning.
We show that our method outperforms state-of-the-art networks by an average of 4.2%-5.4% AUC on multiple benchmark datasets.
arXiv Detail & Related papers (2021-12-08T14:13:14Z)
- Enhancing Prototypical Few-Shot Learning by Leveraging the Local-Level Strategy [75.63022284445945]
We find that existing works often build their few-shot model on an image-level feature obtained by mixing all local-level features.
We present (a) a local-agnostic training strategy to avoid the discriminative location bias between the base and novel categories, and (b) a novel local-level similarity measure to enable accurate comparison between local-level features.
arXiv Detail & Related papers (2021-11-08T08:45:15Z)
- PGL: Prior-Guided Local Self-supervised Learning for 3D Medical Image Segmentation [87.50205728818601]
We propose a Prior-Guided Local (PGL) self-supervised model that learns region-wise local consistency in the latent feature space.
Our PGL model learns distinctive representations of local regions and is hence able to retain structural information.
arXiv Detail & Related papers (2020-11-25T11:03:11Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This is particularly problematic for clinically relevant data, such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
- Contrastive learning of global and local features for medical image segmentation with limited annotations [10.238403787504756]
A key requirement for the success of supervised deep learning is a large labeled dataset.
We propose strategies for extending the contrastive learning framework for segmentation of medical images in the semi-supervised setting.
In the limited-annotation setting, the proposed method yields substantial improvements compared to other self-supervised and semi-supervised learning techniques.
arXiv Detail & Related papers (2020-06-18T13:31:26Z)
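As referenced in the common-rationale entry above, Grad-CAM-style maps are a standard way to probe where a classifier's evidence sits. The main paper's abstract reports localisation accuracy against expert-labelled regions of interest without detailing the metric; the sketch below shows one common proxy, a Grad-CAM "pointing game" in which a prediction counts as a hit when the heatmap peak falls inside the ground-truth region. The hooked layer, the hit criterion, and all function names are illustrative assumptions rather than the paper's evaluation protocol.

```python
# Hedged sketch: Grad-CAM heatmap plus a "pointing game" localisation check.
# The hooked layer and the hit criterion are illustrative assumptions, not the
# evaluation protocol used in the paper.
import torch
import torch.nn.functional as F


def grad_cam(model, layer, image, class_idx):
    """Return a normalised Grad-CAM heatmap (H, W) for one image and one class.

    `layer` is assumed to be a convolutional module inside `model`,
    e.g. the last convolutional block.
    """
    feats, grads = {}, {}
    h_fwd = layer.register_forward_hook(lambda m, inp, out: feats.update(a=out))
    h_bwd = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    logits = model(image.unsqueeze(0))          # add batch dimension
    logits[0, class_idx].backward()             # gradients of the class score
    h_fwd.remove()
    h_bwd.remove()
    a = feats["a"][0].detach()                  # activations, (C, h, w)
    g = grads["g"][0]                           # gradients,   (C, h, w)
    weights = g.mean(dim=(1, 2), keepdim=True)  # global-average-pooled gradients
    cam = F.relu((weights * a).sum(dim=0))      # weighted sum over channels
    cam = F.interpolate(cam[None, None], size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return cam / (cam.max() + 1e-8)


def pointing_game_hit(cam, roi_box):
    """True if the heatmap peak lies inside the expert ROI (x0, y0, x1, y1)."""
    y, x = divmod(int(cam.argmax()), cam.shape[1])
    x0, y0, x1, y1 = roi_box
    return x0 <= x <= x1 and y0 <= y <= y1
```

Averaging such hits over a labelled test set gives a single localisation score that can be compared between CL-trained and E2E-trained networks, layer by layer.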
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.