Multivariate Data Explanation by Jumping Emerging Patterns Visualization
- URL: http://arxiv.org/abs/2106.11112v1
- Date: Mon, 21 Jun 2021 13:49:44 GMT
- Title: Multivariate Data Explanation by Jumping Emerging Patterns Visualization
- Authors: Mário Popolin Neto and Fernando V. Paulovich
- Abstract summary: We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike the existing similar approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logic combinations of data variables.
- Score: 78.6363825307044
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Visual Analytics (VA) tools and techniques have proven instrumental in
supporting users to build better classification models, interpret model
decisions, and audit results. In a different direction, VA has recently been
applied to transform classification models into descriptive mechanisms rather
than predictive ones. The idea is to use such models as surrogates for data patterns,
visualizing the model to understand the phenomenon represented by the data.
Although very useful and inspiring, the few proposed approaches have opted to
use low-complexity classification models to promote straightforward
interpretation, which limits their ability to capture intricate data patterns. In
this paper, we present VAX (multiVariate dAta eXplanation), a new VA method to
support the identification and visual interpretation of patterns in
multivariate data sets. Unlike the existing similar approaches, VAX uses the
concept of Jumping Emerging Patterns to identify and aggregate several
diversified patterns, producing explanations through logic combinations of data
variables. The potential of VAX to interpret complex multivariate datasets is
demonstrated through case studies using two real-world data sets covering
different scenarios.
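The Jumping Emerging Patterns at the core of VAX are itemsets of variable conditions whose support is positive in exactly one class and zero in all others. The abstract gives no code, so the following is a minimal brute-force sketch of that definition only; the function name and toy data are illustrative, not from the paper:

```python
from itertools import combinations

def jumping_emerging_patterns(rows, labels, max_len=2):
    """Find itemsets (sets of variable=value conditions) that occur in
    exactly one class and never in the others -- the defining property
    of a Jumping Emerging Pattern (support > 0 in one class, 0 elsewhere).
    rows: list of dicts mapping variable name -> discretized value.
    """
    # Collect every candidate itemset together with the classes it appears in.
    seen = {}  # frozenset of (variable, value) pairs -> set of class labels
    for row, label in zip(rows, labels):
        items = tuple(sorted(row.items()))
        for k in range(1, max_len + 1):
            for combo in combinations(items, k):
                seen.setdefault(frozenset(combo), set()).add(label)
    # JEPs are exactly the itemsets seen in a single class.
    return {pattern: cls.pop() for pattern, cls in seen.items() if len(cls) == 1}

rows = [
    {"size": "small", "color": "red"},
    {"size": "small", "color": "blue"},
    {"size": "large", "color": "red"},
    {"size": "large", "color": "blue"},
]
labels = ["A", "A", "B", "B"]
jeps = jumping_emerging_patterns(rows, labels)
# size=small jumps for class A and size=large for class B, while any
# color condition alone appears in both classes and is therefore not a JEP.
```

A practical implementation would first discretize continuous variables and use a specialized miner rather than this exhaustive enumeration, which is exponential in `max_len`.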
Related papers
- iGAiVA: Integrated Generative AI and Visual Analytics in a Machine Learning Workflow for Text Classification [2.0094862015890245]
We present a solution for using visual analytics (VA) to guide the generation of synthetic data using large language models.
We discuss different types of data deficiency, describe different VA techniques for supporting their identification, and demonstrate the effectiveness of targeted data synthesis.
arXiv Detail & Related papers (2024-09-24T08:19:45Z)
- Self Supervised Correlation-based Permutations for Multi-View Clustering [7.972599673048582]
We propose an end-to-end deep learning-based MVC framework for general data.
Our approach involves learning meaningful fused data representations with a novel permutation-based canonical correlation objective.
We demonstrate the effectiveness of our model using ten MVC benchmark datasets.
arXiv Detail & Related papers (2024-02-26T08:08:30Z)
- Revealing Multimodal Contrastive Representation Learning through Latent Partial Causal Models [85.67870425656368]
We introduce a unified causal model specifically designed for multimodal data.
We show that multimodal contrastive representation learning excels at identifying latent coupled variables.
Experiments demonstrate the robustness of our findings, even when the assumptions are violated.
arXiv Detail & Related papers (2024-02-09T07:18:06Z)
- Learning to Select Prototypical Parts for Interpretable Sequential Data Modeling [7.376829794171344]
We propose a Self-Explaining Selective Model (SESM) that uses a linear combination of prototypical concepts to explain its own predictions.
For better interpretability, we design multiple constraints including diversity, stability, and locality as training objectives.
arXiv Detail & Related papers (2022-12-07T01:42:47Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, showing better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Variational Interpretable Learning from Multi-view Data [2.687817337319978]
DICCA is designed to disentangle both the shared and view-specific variations for multi-view data.
Empirical results on real-world datasets show that our methods are competitive across domains.
arXiv Detail & Related papers (2022-02-28T01:56:44Z)
- IMACS: Image Model Attribution Comparison Summaries [16.80986701058596]
We introduce IMACS, a method that combines gradient-based model attributions with aggregation and visualization techniques.
IMACS extracts salient input features from an evaluation dataset, clusters them based on similarity, then visualizes differences in model attributions for similar input features.
We show how our technique can uncover behavioral differences caused by domain shift between two models trained on satellite images.
arXiv Detail & Related papers (2022-01-26T21:35:14Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a novel multi-stage modeling approach where the disentangled factors are first learned using a penalty-based disentangled representation learning method.
Then, the low-quality reconstruction is improved with another deep generative model that is trained to model the missing correlated latent variables.
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
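The matrix metaphor described in the ExMatrix entry above (rows as rules, columns as features, cells as predicates) can be sketched in a few lines of Python. The rules and feature names below are hypothetical, not taken from the paper:

```python
# One row per decision rule, one column per feature; each cell holds the
# rule's predicate on that feature, or "" when the rule does not test it.
rules = [
    {"petal_length": "<= 2.45"},                           # rule 1
    {"petal_length": "> 2.45", "petal_width": "<= 1.75"},  # rule 2
]
features = ["petal_length", "petal_width"]

# Build the rules-by-features grid underlying the visual layout.
matrix = [[rule.get(f, "") for f in features] for rule in rules]
for row in matrix:
    print(row)
```

The actual ExMatrix visualization additionally encodes rule metrics such as coverage and certainty; this sketch only captures the structural rows/columns/cells mapping.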
This list is automatically generated from the titles and abstracts of the papers in this site.