Neo: Generalizing Confusion Matrix Visualization to Hierarchical and
Multi-Output Labels
- URL: http://arxiv.org/abs/2110.12536v1
- Date: Sun, 24 Oct 2021 21:55:20 GMT
- Title: Neo: Generalizing Confusion Matrix Visualization to Hierarchical and
Multi-Output Labels
- Authors: Jochen Görtler, Fred Hohman, Dominik Moritz, Kanit Wongsuphasawat,
Donghao Ren, Rahul Nair, Marc Kirchner, Kayur Patel
- Abstract summary: The confusion matrix is a ubiquitous visualization for helping people evaluate machine learning models.
We find that conventional confusion matrices do not support more complex data structures, such as hierarchical and multi-output labels.
We develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices.
- Score: 25.336125962529692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The confusion matrix, a ubiquitous visualization for helping people evaluate
machine learning models, is a tabular layout that compares predicted class
labels against actual class labels over all data instances. We conduct
formative research with machine learning practitioners at a large technology
company and find that conventional confusion matrices do not support more
complex data structures found in modern-day applications, such as hierarchical
and multi-output labels. To express such variations of confusion matrices, we
design an algebra that models confusion matrices as probability distributions.
Based on this algebra, we develop Neo, a visual analytics system that enables
practitioners to flexibly author and interact with hierarchical and
multi-output confusion matrices, visualize derived metrics, renormalize
confusions, and share matrix specifications. Finally, we demonstrate Neo's
utility with three case studies that help people better understand model
performance and reveal hidden confusions.
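Since the abstract only names the algebra, a minimal sketch may help make it concrete: a confusion matrix can be read as an empirical joint distribution over (actual, predicted) labels, and renormalizing confusions then corresponds to conditioning on one of the variables. The class names and counts below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative labels and data; not from the paper.
classes = ["cat", "dog", "bird"]
actual    = np.array([0, 0, 1, 1, 1, 2, 2, 0, 1, 2])
predicted = np.array([0, 1, 1, 1, 0, 2, 2, 0, 1, 1])

# Raw confusion counts: rows = actual class, columns = predicted class.
counts = np.zeros((len(classes), len(classes)), dtype=int)
np.add.at(counts, (actual, predicted), 1)

# Treat the matrix as a joint distribution P(actual, predicted).
joint = counts / counts.sum()

# "Renormalizing" corresponds to conditioning:
# P(predicted | actual): each row sums to 1 (per-class recall view).
row_normalized = counts / counts.sum(axis=1, keepdims=True)
# P(actual | predicted): each column sums to 1 (per-class precision view).
col_normalized = counts / counts.sum(axis=0, keepdims=True)

# Derived metrics fall out of the distribution, e.g. recall of "dog".
dog = classes.index("dog")
print(counts)
print(row_normalized.round(2))
print(f"recall(dog) = {row_normalized[dog, dog]:.2f}")
```

Aggregating the rows and columns of `counts` that share a parent class (for example, summing all subclasses of "animal") yields the confusion matrix one level up the hierarchy, which is one way hierarchical labels fit the same distributional view.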
Related papers
- CoLA: Exploiting Compositional Structure for Automatic and Efficient
Numerical Linear Algebra [62.37017125812101]
We propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA.
By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms.
We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
arXiv Detail & Related papers (2023-09-06T14:59:38Z)
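CoLA's actual API is not reproduced here; the sketch below only illustrates the general pattern the summary describes, a linear operator abstraction whose sums and products dispatch to the matvecs of their parts instead of materializing dense matrices. All names are illustrative.

```python
import numpy as np

class LinearOperator:
    """A matrix known only through its action on vectors."""
    def __init__(self, shape, matvec):
        self.shape, self._matvec = shape, matvec

    def __matmul__(self, v):
        return self._matvec(v)

    def __add__(self, other):
        # Compositional rule: (A + B) @ v = A @ v + B @ v
        return LinearOperator(self.shape, lambda v: self @ v + other @ v)

    def __mul__(self, other):
        # Compositional rule: (A * B) @ v = A @ (B @ v)
        return LinearOperator((self.shape[0], other.shape[1]),
                              lambda v: self @ (other @ v))

def diagonal(d):
    # O(n) storage instead of O(n^2) for a dense diagonal matrix.
    return LinearOperator((len(d), len(d)), lambda v: d * v)

def identity(n):
    return LinearOperator((n, n), lambda v: v)

n = 5
d = np.arange(1.0, n + 1)
A = diagonal(d) + identity(n)      # never materialized as a dense matrix
x = np.ones(n)
print(A @ x)                       # [2. 3. 4. 5. 6.]
print((diagonal(d) * A) @ x)       # composed operator: d * (A @ x)
```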
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z)
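The algorithm itself is not specified in the summary above; as a small illustration of the object being sought, the sketch below checks how far a candidate node permutation is from being an (approximate) automorphism of a weighted graph, i.e. how much it perturbs the adjacency matrix.

```python
import numpy as np

def automorphism_residual(A, perm):
    """Distance of the permutation `perm` from being an automorphism of the
    weighted graph with adjacency matrix A, measured as ||P A P^T - A||_F."""
    P = np.eye(len(A))[perm]
    return np.linalg.norm(P @ A @ P.T - A)

# A 4-cycle: rotating the nodes by one position is an exact automorphism.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

print(automorphism_residual(A, [1, 2, 3, 0]))   # 0.0  (exact automorphism)
print(automorphism_residual(A, [1, 0, 2, 3]))   # > 0  (swapping only nodes 0 and 1)
```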
- Multi-Layer Attention-Based Explainability via Transformers for Tabular Data [11.866061471514582]
We propose a graph-oriented attention-based explainability method for tabular data.
We take into account the attention matrices of all heads and layers as a whole.
To assess the quality of multi-layer attention-based explanations, we compare them with popular attention-, gradient-, and perturbation-based explainability methods.
arXiv Detail & Related papers (2023-02-28T03:28:18Z)
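The paper's exact aggregation is not given in the summary; one widely used way to treat the attention matrices of all heads and layers as a whole is attention rollout, sketched below with random matrices standing in for a trained transformer. This is a stand-in technique, not the paper's method.

```python
import numpy as np

def attention_rollout(attentions):
    """attentions: array of shape (layers, heads, tokens, tokens) with rows
    summing to 1. Averages over heads, accounts for the residual path, and
    multiplies layer by layer to get token-to-token influence scores."""
    n = attentions.shape[-1]
    rollout = np.eye(n)
    for layer in attentions:                      # iterate over layers
        A = layer.mean(axis=0)                    # average the heads
        A = 0.5 * A + 0.5 * np.eye(n)             # include the residual connection
        A = A / A.sum(axis=-1, keepdims=True)     # keep rows stochastic
        rollout = A @ rollout                     # compose with earlier layers
    return rollout

rng = np.random.default_rng(0)
layers, heads, tokens = 3, 4, 5
raw = rng.random((layers, heads, tokens, tokens))
attn = raw / raw.sum(axis=-1, keepdims=True)      # row-stochastic attention
print(attention_rollout(attn).round(3))           # influence of each input token
```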
- Exploring ordered patterns in the adjacency matrix for improving machine learning on complex networks [0.0]
The proposed methodology employs a sorting algorithm to rearrange the elements of the adjacency matrix of a complex graph in a specific order.
The resulting sorted adjacency matrix is then used as input for feature extraction and machine learning algorithms to classify the networks.
arXiv Detail & Related papers (2023-01-20T00:01:23Z)
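The particular sort order is not stated in the summary; the sketch below uses descending node degree as an illustrative choice and shows the overall recipe of reordering the adjacency matrix and flattening it into a feature vector for a standard classifier.

```python
import numpy as np

def sort_adjacency(A):
    """Reorder rows/columns of adjacency matrix A by descending node degree
    so that structurally similar graphs produce similar matrices."""
    order = np.argsort(-A.sum(axis=1))     # node degrees, largest first
    return A[np.ix_(order, order)]

def graph_features(A):
    """Flatten the sorted adjacency matrix into a fixed-length feature vector
    for a downstream machine learning classifier."""
    return sort_adjacency(A).ravel()

# A small star graph: node 0 is connected to every other node.
A = np.zeros((5, 5), dtype=int)
A[0, 1:] = A[1:, 0] = 1
print(sort_adjacency(A))
print(graph_features(A))
```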
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- A Tutorial on the Spectral Theory of Markov Chains [0.0]
This tutorial provides an in-depth introduction to Markov chains.
We utilize tools from linear algebra and graph theory to describe the transition matrices of different types of Markov chains.
The results presented are relevant to a number of methods in machine learning and data mining.
arXiv Detail & Related papers (2022-07-05T20:43:40Z)
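As a concrete example of the linear-algebra tools such a tutorial relies on (not code from the tutorial itself), the stationary distribution of a Markov chain is a left eigenvector of its transition matrix for eigenvalue 1:

```python
import numpy as np

# Row-stochastic transition matrix: P[i, j] = probability of moving i -> j.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

# The stationary distribution pi satisfies pi P = pi, i.e. it is a left
# eigenvector of P (a right eigenvector of P.T) for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                       # normalize to a probability vector

print(pi)                                # stationary distribution
print(pi @ P)                            # equals pi, up to rounding
```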
- Graph Attention Transformer Network for Multi-Label Image Classification [50.0297353509294]
We propose a general framework for multi-label image classification that can effectively mine complex inter-label relationships.
Our proposed methods can achieve state-of-the-art performance on three datasets.
arXiv Detail & Related papers (2022-03-08T12:39:05Z)
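The summary does not detail the architecture; inter-label relationships in multi-label classification are commonly seeded from a label co-occurrence matrix, sketched below on toy multi-hot annotations. The labels and statistics are illustrative only and unrelated to the paper's datasets.

```python
import numpy as np

labels = ["person", "dog", "leash", "car"]
# Multi-hot annotations: each row is an image, each column a label.
Y = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 1],
              [0, 1, 1, 0],
              [1, 0, 0, 1]])

# Co-occurrence counts between label pairs.
co = Y.T @ Y

# Conditional probabilities P(label_j present | label_i present); such a
# matrix is a common starting point for a label-relationship graph.
cond = co / np.diag(co)[:, None]

print(cond.round(2))
print(f"P(leash | dog) = {cond[labels.index('dog'), labels.index('leash')]:.2f}")
```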
- Matrix Completion with Hierarchical Graph Side Information [39.00971122472004]
We consider a matrix completion problem that exploits social or item similarity graphs as side information.
We develop a universal, parameter-free, and computationally efficient algorithm that starts with hierarchical graph clustering.
We conduct extensive experiments on synthetic and real-world datasets to corroborate our theoretical results.
arXiv Detail & Related papers (2022-01-02T03:47:41Z)
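The universal algorithm is only named in the summary; a heavily simplified sketch of the core idea, clustering users first and then filling missing ratings with within-cluster item means, is shown below. The cluster assignment is assumed to come from hierarchical graph clustering and is hard-coded here.

```python
import numpy as np

def complete_with_clusters(R, clusters):
    """Fill missing entries (NaN) of rating matrix R with the mean rating of
    each (user-cluster, item) pair, a crude use of graph side information."""
    R_hat = R.copy()
    for c in np.unique(clusters):
        rows = clusters == c
        col_means = np.nanmean(R[rows], axis=0)            # per-item cluster mean
        block = R_hat[rows]
        R_hat[rows] = np.where(np.isnan(block), col_means, block)
    return R_hat

# Users 0-1 behave alike, users 2-3 behave alike (two graph clusters).
R = np.array([[5.0, np.nan, 1.0],
              [5.0, 4.0,    np.nan],
              [1.0, np.nan, 5.0],
              [np.nan, 2.0, 5.0]])
clusters = np.array([0, 0, 1, 1])     # assumed output of hierarchical clustering
print(complete_with_clusters(R, clusters))
```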
- A Deep Generative Model for Matrix Reordering [26.86727566323601]
We develop a generative model that learns a latent space of diverse matrix reorderings of a graph.
We construct an intuitive user interface from the learned latent space by creating a map of various matrix reorderings.
This paper introduces a fundamentally new approach to matrix visualization of a graph, where a machine learning model learns to generate diverse matrix reorderings of a graph.
arXiv Detail & Related papers (2021-10-11T02:55:24Z)
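For reference, a single classical (non-learned) matrix reordering can be obtained by ordering nodes along the Fiedler vector of the graph Laplacian; the paper instead learns a whole latent space of such reorderings. The sketch below is this baseline, not the paper's model.

```python
import numpy as np

def spectral_order(A):
    """Order nodes by the Fiedler vector (eigenvector of the second-smallest
    Laplacian eigenvalue), a classical way to reveal block structure."""
    L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    fiedler = eigvecs[:, 1]                       # second-smallest eigenvalue
    return np.argsort(fiedler)

# Two cliques {0, 2, 4} and {1, 3, 5}, listed in a scrambled node order.
A = np.zeros((6, 6))
for group in ([0, 2, 4], [1, 3, 5]):
    for i in group:
        for j in group:
            if i != j:
                A[i, j] = 1
A[4, 5] = A[5, 4] = 1                             # weak bridge between the cliques

order = spectral_order(A)
print(order)                                      # the two cliques are grouped
print(A[np.ix_(order, order)])                    # reordered matrix shows two blocks
```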
- Vector-Matrix-Vector Queries for Solving Linear Algebra, Statistics, and Graph Problems [58.83118651518438]
We consider the general problem of learning about a matrix through vector-matrix-vector queries.
These queries provide the value of $\boldsymbol{u}^{\mathrm{T}} \boldsymbol{M} \boldsymbol{v}$ over a fixed field.
We provide new upper and lower bounds for a wide variety of problems, spanning linear algebra, statistics, and graphs.
arXiv Detail & Related papers (2020-06-24T19:33:49Z)
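To make the query model concrete, the sketch below works over the reals (the paper considers a fixed field) and accesses the hidden matrix only through u^T M v queries; a single entry, the trace, or a quadratic form each cost a small number of queries.

```python
import numpy as np

rng = np.random.default_rng(1)
M_hidden = rng.integers(-3, 4, size=(4, 4)).astype(float)   # unknown to the learner

def query(u, v):
    """The only allowed access to the hidden matrix: returns u^T M v."""
    return u @ M_hidden @ v

n = 4
e = np.eye(n)

# Recovering entry M[i, j] takes one query with standard basis vectors.
i, j = 1, 3
print(query(e[i], e[j]), M_hidden[i, j])

# The trace needs n queries: sum of e_k^T M e_k.
print(sum(query(e[k], e[k]) for k in range(n)), np.trace(M_hidden))

# A quadratic form x^T M x (useful e.g. for testing definiteness) is one query.
x = rng.standard_normal(n)
print(query(x, x))
```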
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote RF model interpretability.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
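As a rough illustration of the visual metaphor (not the ExMatrix implementation), the sketch below builds a rules-by-features table from a few hand-written rules; in real usage the rules would be extracted from root-to-leaf paths of the random forest's trees.

```python
import numpy as np

features = ["petal length", "petal width", "sepal length"]

# Each rule maps feature -> (low, high) interval predicate, plus the class it
# votes for; in practice these come from the paths of a random forest's trees.
rules = [
    {"predicates": {"petal length": (0.0, 2.5)}, "class": "setosa"},
    {"predicates": {"petal length": (2.5, 5.0), "petal width": (0.0, 1.7)},
     "class": "versicolor"},
    {"predicates": {"petal length": (5.0, np.inf), "sepal length": (6.0, np.inf)},
     "class": "virginica"},
]

# Rules-by-features matrix: each cell is the interval the rule requires,
# or None when the rule does not test that feature.
matrix = [[rule["predicates"].get(f) for f in features] for rule in rules]

for rule, row in zip(rules, matrix):
    cells = ["-" if c is None else f"[{c[0]}, {c[1]})" for c in row]
    print(f"{rule['class']:>10} | " + " | ".join(cells))
```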