MultiCaM-Vis: Visual Exploration of Multi-Classification Model with High
Number of Classes
- URL: http://arxiv.org/abs/2309.05676v1
- Date: Sat, 9 Sep 2023 08:55:22 GMT
- Authors: Syed Ahsan Ali Dilawer, Shah Rukh Humayoun
- Abstract summary: We present our interactive visual analytics tool, called MultiCaM-Vis.
It provides overview+detail style parallel coordinate views and a Chord diagram for exploring and inspecting class-level misclassification of instances.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual exploration of multi-classification models with a large number of
classes would help machine learning experts identify the root cause of
problems that occur during the learning phase, such as misclassification of
instances. Most previous visual analytics solutions targeted only a few
classes. In this paper, we present our interactive visual analytics tool,
called MultiCaM-Vis, that provides overview+detail style parallel
coordinate views and a Chord diagram for exploration and inspection of
class-level misclassification of instances. We also present the results of a
preliminary user study with 12 participants.
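The class-level misclassification that MultiCaM-Vis visualizes boils down to counting how often instances of one class are predicted as another. A minimal, hypothetical sketch of that counting step is shown below; the function name and toy labels are ours for illustration and are not part of the paper's tool, which renders such counts as parallel-coordinate and Chord-diagram views.

```python
from collections import Counter

def misclassification_pairs(y_true, y_pred):
    """Count (true, predicted) class pairs for misclassified instances.

    Tools like MultiCaM-Vis visualize exactly these class-level
    confusion counts, e.g. as arcs in a Chord diagram.
    """
    pairs = Counter(
        (t, p) for t, p in zip(y_true, y_pred) if t != p
    )
    # Sort so the most frequently confused class pairs come first.
    return pairs.most_common()

# Toy example with three classes.
y_true = ["cat", "cat", "dog", "dog", "bird", "cat"]
y_pred = ["cat", "dog", "dog", "cat", "cat", "dog"]
print(misclassification_pairs(y_true, y_pred))
# → [(('cat', 'dog'), 2), (('dog', 'cat'), 1), (('bird', 'cat'), 1)]
```

With thousands of classes, the full confusion matrix is too large to inspect directly, which is why an overview+detail interaction over these counts becomes useful.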
Related papers
- Investigating Self-Supervised Methods for Label-Efficient Learning [27.029542823306866]
We study different self-supervised pretext tasks, namely contrastive learning, clustering, and masked image modelling, for their low-shot capabilities.
We introduce a framework involving both masked image modelling and clustering as pretext tasks, which performs better across all low-shot downstream tasks.
When testing the model on full scale datasets, we show performance gains in multi-class classification, multi-label classification and semantic segmentation.
arXiv Detail & Related papers (2024-06-25T10:56:03Z) - Toward Multi-class Anomaly Detection: Exploring Class-aware Unified Model against Inter-class Interference [67.36605226797887]
We introduce a Multi-class Implicit Neural representation Transformer for unified Anomaly Detection (MINT-AD)
By learning the multi-class distributions, the model generates class-aware query embeddings for the transformer decoder.
MINT-AD can project category and position information into a feature embedding space, further supervised by classification and prior probability loss functions.
arXiv Detail & Related papers (2024-03-21T08:08:31Z) - Circles: Inter-Model Comparison of Multi-Classification Problems with
High Number of Classes [0.24554686192257422]
We present our interactive visual analytics tool, called Circles, that allows a visual inter-model comparison of numerous classification models with 1K classes in one view.
Our prototype shows the results of 9 models with 1K classes.
arXiv Detail & Related papers (2023-09-08T19:39:46Z) - Cross-view Graph Contrastive Representation Learning on Partially
Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - Elimination of Non-Novel Segments at Multi-Scale for Few-Shot
Segmentation [0.0]
Few-shot segmentation aims to devise a generalizing model that segments query images from unseen classes during training.
We simultaneously address two vital problems for the first time and achieve state-of-the-art performance on both PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2022-11-04T07:52:54Z) - Multi-Modal Few-Shot Object Detection with Meta-Learning-Based
Cross-Modal Prompting [77.69172089359606]
We study multi-modal few-shot object detection (FSOD) in this paper, using both few-shot visual examples and class semantic information for detection.
Our approach is motivated by the high-level conceptual similarity of (metric-based) meta-learning and prompt-based learning.
We comprehensively evaluate the proposed multi-modal FSOD models on multiple few-shot object detection benchmarks, achieving promising results.
arXiv Detail & Related papers (2022-04-16T16:45:06Z) - A Few-Shot Sequential Approach for Object Counting [63.82757025821265]
We introduce a class attention mechanism that sequentially attends to objects in the image and extracts their relevant features.
The proposed technique is trained on point-level annotations and uses a novel loss function that disentangles class-dependent and class-agnostic aspects of the model.
We present our results on a variety of object-counting/detection datasets, including FSOD and MS COCO.
arXiv Detail & Related papers (2020-07-03T18:23:39Z) - Generalized Multi-view Shared Subspace Learning using View Bootstrapping [43.027427742165095]
A key objective in multi-view learning is to model the information common to multiple parallel views of a class of objects/events to improve downstream learning tasks.
We present a neural method based on multi-view correlation to capture the information shared across a large number of views by subsampling them in a view-agnostic manner during training.
Experiments on spoken word recognition, 3D object classification and pose-invariant face recognition demonstrate the robustness of view bootstrapping to model a large number of views.
arXiv Detail & Related papers (2020-05-12T20:35:14Z) - DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning [83.48587570246231]
Visual similarity plays an important role in many computer vision applications.
Deep metric learning (DML) is a powerful framework for learning such similarities.
We propose and study multiple complementary learning tasks, targeting conceptually different data relationships.
We learn a single model to aggregate their training signals, resulting in strong generalization and state-of-the-art performance.
arXiv Detail & Related papers (2020-04-28T12:26:50Z) - Exploit Clues from Views: Self-Supervised and Regularized Learning for
Multiview Object Recognition [66.87417785210772]
This work investigates the problem of multiview self-supervised learning (MV-SSL)
A novel surrogate task for self-supervised learning is proposed by pursuing "object invariant" representation.
Experiments shows that the recognition and retrieval results using view invariant prototype embedding (VISPE) outperform other self-supervised learning methods.
arXiv Detail & Related papers (2020-03-28T07:06:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.