A prototype-based model for set classification
- URL: http://arxiv.org/abs/2408.13720v1
- Date: Sun, 25 Aug 2024 04:29:18 GMT
- Title: A prototype-based model for set classification
- Authors: Mohammad Mohammadi, Sreejita Ghosh,
- Abstract summary: A common way to represent a set of vectors is to model them as linear subspaces.
We present a prototype-based approach for learning on the manifold formed from such linear subspaces, the Grassmann manifold.
- Score: 2.0564549686015594
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Classification of sets of inputs (e.g., images and texts) is an active area of research within both computer vision (CV) and natural language processing (NLP). A common way to represent a set of vectors is to model them as linear subspaces. In this contribution, we present a prototype-based approach for learning on the manifold formed from such linear subspaces, the Grassmann manifold. Our proposed method learns a set of subspace prototypes capturing the representative characteristics of classes and a set of relevance factors automating the selection of the dimensionality of the subspaces. This leads to a transparent classifier model which presents the computed impact of each input vector on its decision. Through experiments on benchmark image and text datasets, we demonstrate the efficiency of our proposed classifier compared to transformer-based models, not only in terms of performance and explainability but also in computational resource requirements.
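To make the abstract's idea concrete, here is a minimal Python/NumPy sketch of classifying a set of vectors by its nearest subspace prototype under a (relevance-weighted) chordal distance on the Grassmann manifold. The prototypes below are fixed subspaces built from random data and all names are illustrative; the paper's actual model learns the prototypes and relevance factors, which this sketch omits.

```python
import numpy as np

def subspace_basis(X, d):
    """Orthonormal basis (n_features x d) spanning a set of vectors X (n_vectors x n_features)."""
    # Leading right singular vectors of the data matrix span the set.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:d].T

def grassmann_distance(U, V, relevance=None):
    """Squared chordal distance between the subspaces spanned by U and V,
    optionally weighted by per-dimension relevance factors."""
    # Singular values of U^T V are the cosines of the principal angles.
    cosines = np.clip(np.linalg.svd(U.T @ V, compute_uv=False), 0.0, 1.0)
    weights = np.ones_like(cosines) if relevance is None else relevance
    return float(np.sum(weights * (1.0 - cosines ** 2)))

def classify_set(X, prototypes, d=2, relevance=None):
    """Assign a set of vectors X to the class of the nearest subspace prototype."""
    U = subspace_basis(X, d)
    dists = {label: grassmann_distance(U, P, relevance) for label, P in prototypes.items()}
    return min(dists, key=dists.get)

# Toy usage: two classes, each prototype a 2-D subspace of R^5 (hypothetical data).
rng = np.random.default_rng(0)
protos = {"class_a": subspace_basis(rng.normal(size=(20, 5)), 2),
          "class_b": subspace_basis(rng.normal(size=(20, 5)), 2)}
print(classify_set(rng.normal(size=(15, 5)), protos, d=2))
```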
Related papers
- Analyzing the Benefits of Prototypes for Semi-Supervised Category Learning [3.5595258376041814]
We study the benefits of prototype-based representations in semi-supervised learning.
We show that forming prototypes can improve semi-supervised category learning.
arXiv Detail & Related papers (2024-06-04T12:47:11Z)
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
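A rough sketch of such an alternating scheme, assuming a plain k-means-style loop over pixel features with a momentum update; the names and the update rule are illustrative stand-ins, not the FEC implementation:

```python
import numpy as np

def fec_like_step(pixel_feats, n_clusters=8, n_iters=10, momentum=0.5, seed=0):
    """Alternate between (1) grouping pixels into clusters to obtain representatives
    and (2) nudging each pixel feature toward its current representative."""
    rng = np.random.default_rng(seed)
    feats = pixel_feats.copy()                                   # (n_pixels, dim)
    centers = feats[rng.choice(len(feats), n_clusters, replace=False)]
    for _ in range(n_iters):
        # Step 1: assign each pixel to its nearest representative.
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Recompute representatives as cluster means (keep old center if a cluster empties).
        for k in range(n_clusters):
            if np.any(assign == k):
                centers[k] = feats[assign == k].mean(axis=0)
        # Step 2: update pixel features with their current representatives.
        feats = (1 - momentum) * feats + momentum * centers[assign]
    return feats, assign, centers

# Toy usage on random "pixel" features.
feats, assign, centers = fec_like_step(np.random.default_rng(1).normal(size=(500, 16)))
print(assign[:10], centers.shape)
```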
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
- Deciphering 'What' and 'Where' Visual Pathways from Spectral Clustering of Layer-Distributed Neural Representations [15.59251297818324]
We present an approach for analyzing grouping information contained within a neural network's activations.
We exploit features from all layers, obviating the need to guess which part of the model contains relevant information.
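A minimal sketch of that idea, assuming per-location activations from every layer are concatenated and then spectrally clustered (here with scikit-learn); the feature dictionary and cluster count are placeholders rather than the paper's pipeline:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_layer_distributed_features(layer_feats, n_groups=5):
    """layer_feats: dict mapping layer name -> (n_locations, dim) activations,
    all resampled to a common set of spatial locations."""
    # Use every layer rather than guessing which one carries the grouping information.
    stacked = np.concatenate([f / (np.linalg.norm(f, axis=1, keepdims=True) + 1e-8)
                              for f in layer_feats.values()], axis=1)
    return SpectralClustering(n_clusters=n_groups, affinity="nearest_neighbors",
                              n_neighbors=10, random_state=0).fit_predict(stacked)

# Toy usage with random activations standing in for three layers.
rng = np.random.default_rng(0)
feats = {f"layer{i}": rng.normal(size=(200, 32)) for i in range(3)}
print(np.bincount(cluster_layer_distributed_features(feats)))
```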
arXiv Detail & Related papers (2023-12-11T01:20:34Z)
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
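As an illustration of one-class classification with a convolutional autoencoder, here is a hedged PyTorch sketch that trains on target-class data only and scores test inputs by reconstruction error; the tiny architecture, single-channel inputs, and thresholding rule are assumptions, and the paper's multimodal aspect is not modeled:

```python
import torch
import torch.nn as nn

class TinyConvAE(nn.Module):
    """A small convolutional autoencoder; one-class scores are reconstruction errors."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

def fit_one_class(model, target_class_batches, epochs=5, lr=1e-3):
    """Train using data from the single available class only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x in target_class_batches:
            opt.zero_grad()
            loss_fn(model(x), x).backward()
            opt.step()
    return model

def one_class_score(model, x):
    """Higher score = further from the training class (per-sample reconstruction error)."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=(1, 2, 3))

# Toy usage: 28x28 single-channel inputs; a rejection threshold would be chosen on held-out target-class data.
model = fit_one_class(TinyConvAE(), [torch.randn(16, 1, 28, 28) for _ in range(4)], epochs=1)
print(one_class_score(model, torch.randn(8, 1, 28, 28)).shape)
```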
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- Rethinking Person Re-identification from a Projection-on-Prototypes Perspective [84.24742313520811]
Person Re-IDentification (Re-ID), as a retrieval task, has achieved tremendous development over the past decade.
We propose a new baseline ProNet, which innovatively reserves the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly beats previous baselines.
arXiv Detail & Related papers (2023-08-21T13:38:10Z)
- Learning Support and Trivial Prototypes for Interpretable Image Classification [19.00622056840535]
Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification.
We aim to improve the classification of ProtoPNet with a new method to learn support prototypes that lie near the classification boundary in the feature space.
arXiv Detail & Related papers (2023-01-08T09:27:41Z)
- Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling [7.376829794171344]
We propose a Self-Explaining Selective Model (SESM) that uses a linear combination of prototypical concepts to explain its own predictions.
For better interpretability, we design multiple constraints including diversity, stability, and locality as training objectives.
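A toy sketch of a prediction formed as a linear combination of prototypical-concept activations, plus one constraint-style term (a diversity penalty); the shapes, the max-pooling over parts, and the penalty are illustrative assumptions, not the SESM architecture:

```python
import numpy as np

def prototype_explanation(x_parts, prototypes, class_weights):
    """Score classes as a linear combination of part-to-prototype similarities.

    x_parts:       (n_parts, dim) representations of the parts of one input sequence
    prototypes:    (n_protos, dim) prototypical concepts
    class_weights: (n_classes, n_protos) linear layer tying concepts to classes
    """
    sims = x_parts @ prototypes.T                       # (n_parts, n_protos)
    concept_activations = sims.max(axis=0)              # best-matching part per concept
    logits = class_weights @ concept_activations        # (n_classes,)
    # The explanation is each concept's contribution to the winning class.
    winner = int(np.argmax(logits))
    contributions = class_weights[winner] * concept_activations
    return logits, contributions

def diversity_penalty(prototypes):
    """One constraint-style objective: discourage near-duplicate prototypes."""
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return float(np.sum(np.triu(P @ P.T, k=1) ** 2))

# Toy usage with random parts, concepts, and class weights.
rng = np.random.default_rng(0)
logits, contrib = prototype_explanation(rng.normal(size=(6, 8)),
                                         rng.normal(size=(4, 8)),
                                         rng.normal(size=(3, 4)))
print(logits.shape, contrib, diversity_penalty(rng.normal(size=(4, 8))))
```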
arXiv Detail & Related papers (2022-12-07T01:42:47Z)
- Automatically Discovering Novel Visual Categories with Self-supervised Prototype Learning [68.63910949916209]
This paper tackles the problem of novel category discovery (NCD), which aims to discriminate unknown categories in large-scale image collections.
We propose a novel adaptive prototype learning method consisting of two main stages: prototypical representation learning and prototypical self-training.
We conduct extensive experiments on four benchmark datasets and demonstrate the effectiveness and robustness of the proposed method with state-of-the-art performance.
arXiv Detail & Related papers (2022-08-01T16:34:33Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
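A minimal sketch of the nonparametric decision rule such a model implies: label each pixel embedding by its nearest class prototype, where the prototypes are non-learnable statistics (here, plain class means of training embeddings as a stand-in for the paper's prototype construction):

```python
import numpy as np

def build_prototypes(train_embeds, train_labels, n_classes):
    """Non-learnable prototypes: per-class means of training pixel embeddings."""
    return np.stack([train_embeds[train_labels == c].mean(axis=0) for c in range(n_classes)])

def segment(pixel_embeds, prototypes):
    """Assign every pixel embedding to its nearest prototype (cosine similarity)."""
    E = pixel_embeds / (np.linalg.norm(pixel_embeds, axis=1, keepdims=True) + 1e-8)
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    return (E @ P.T).argmax(axis=1)

# Toy usage: 3 classes, 64-d pixel embeddings from a hypothetical backbone.
rng = np.random.default_rng(0)
tr_e, tr_y = rng.normal(size=(300, 64)), rng.integers(0, 3, size=300)
protos = build_prototypes(tr_e, tr_y, n_classes=3)
print(segment(rng.normal(size=(100, 64)), protos)[:10])
```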
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
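A generic sketch of a prototype-level contrastive objective of this kind: features are pulled toward their own class prototype and prototypes of different classes are pushed apart; the exact losses below are illustrative, not the paper's dual formulation:

```python
import numpy as np

def prototype_contrastive_loss(feats, labels, prototypes, temperature=0.1):
    """Cross-entropy over feature-to-prototype similarities: reduces intra-class
    distance (feature vs. own prototype) while increasing inter-class distance."""
    F = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8)
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    logits = (F @ P.T) / temperature                     # (n_samples, n_classes)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[np.arange(len(labels)), labels].mean())

def prototype_separation_loss(prototypes, margin=0.5):
    """Penalize pairs of class prototypes closer than a margin in cosine similarity."""
    P = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sims = np.triu(P @ P.T, k=1)
    return float(np.maximum(sims - margin, 0.0).sum())

# Toy usage: 5 classes, 32-d features.
rng = np.random.default_rng(0)
feats, labels, protos = rng.normal(size=(40, 32)), rng.integers(0, 5, 40), rng.normal(size=(5, 32))
print(prototype_contrastive_loss(feats, labels, protos), prototype_separation_loss(protos))
```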
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Linear Classifier Combination via Multiple Potential Functions [0.6091702876917279]
We propose a novel concept of calculating a scoring function based on the distance of the object from the decision boundary and its distance to the class centroid.
An important property is that the proposed score function has the same nature for all linear base classifiers.
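A small sketch of that scoring idea: combine a linear base classifier's signed distance to its decision boundary with the distance to the class centroid, then sum the scores across base classifiers; the particular potential function and ensemble rule here are assumptions, not the paper's definitions:

```python
import numpy as np

def combined_score(x, w, b, centroid):
    """Score an object for one class of a linear base classifier (w, b): a larger signed
    distance to the decision boundary and a smaller distance to the class centroid both
    raise the score. The same form applies to any linear base classifier."""
    boundary_dist = (w @ x + b) / (np.linalg.norm(w) + 1e-12)   # signed distance to hyperplane
    centroid_dist = np.linalg.norm(x - centroid)
    return boundary_dist - centroid_dist                        # one possible potential function

def combine_classifiers(x, classifiers, centroids):
    """Ensemble decision: sum per-class scores over several linear base classifiers."""
    scores = np.zeros(len(centroids))
    for W, b in classifiers:                                    # one (W, b) pair per base classifier
        for c in range(len(centroids)):
            scores[c] += combined_score(x, W[c], b[c], centroids[c])
    return int(np.argmax(scores))

# Toy usage: 2 base classifiers, 3 classes, 4-d inputs.
rng = np.random.default_rng(0)
clfs = [(rng.normal(size=(3, 4)), rng.normal(size=3)) for _ in range(2)]
cents = rng.normal(size=(3, 4))
print(combine_classifiers(rng.normal(size=4), clfs, cents))
```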
arXiv Detail & Related papers (2020-10-02T08:11:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.