Attribute Selection using Contranominal Scales
- URL: http://arxiv.org/abs/2106.10978v1
- Date: Mon, 21 Jun 2021 10:53:50 GMT
- Title: Attribute Selection using Contranominal Scales
- Authors: Dominik Dürrschnabel, Maren Koyda, Gerd Stumme
- Abstract summary: Formal Concept Analysis (FCA) makes it possible to analyze binary data by deriving concepts and ordering them in lattices.
The size of such a lattice depends on the number of subcontexts in the corresponding formal context that are isomorphic to a contranominal scale of high dimension.
We propose the algorithm ContraFinder, which enables the computation of all contranominal scales of a given formal context.
- Score: 0.09668407688201358
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Formal Concept Analysis (FCA) makes it possible to analyze binary data by
deriving concepts and ordering them in lattices. One of the main goals of FCA is to
enable humans to comprehend the information that is encapsulated in the data;
however, the large size of concept lattices is a limiting factor for the
feasibility of understanding the underlying structural properties. The size of
such a lattice depends on the number of subcontexts in the corresponding formal
context that are isomorphic to a contranominal scale of high dimension. In this
work, we propose the algorithm ContraFinder, which enables the computation of all
contranominal scales of a given formal context. Leveraging this algorithm, we
introduce delta-adjusting, a novel approach to decrease the number of
contranominal scales in a formal context by selecting an appropriate attribute
subset. We demonstrate that delta-adjusting a context reduces the size of the
resulting sub-semilattice and restricts the implication set to meaningful
implications. The knowledge associated with these implications is evaluated by
means of a classification task. Hence, our proposed technique strongly improves
understandability while preserving important conceptual structures.
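The abstract does not describe ContraFinder's internals, but the structure it searches for is easy to state in code. As a minimal illustrative sketch (not the paper's algorithm): a subcontext is isomorphic to a contranominal scale exactly when its incidence submatrix is the complement of a permutation matrix, i.e. every row and every column lacks exactly one incidence. A naive, exponential enumeration then looks like this:

```python
from itertools import combinations

def is_contranominal(ctx, objs, attrs):
    """A subcontext is isomorphic to a contranominal scale iff its
    incidence submatrix is the complement of a permutation matrix:
    every row and every column misses exactly one incidence."""
    return (all(sum(1 for a in attrs if not ctx[o][a]) == 1 for o in objs)
            and all(sum(1 for o in objs if not ctx[o][a]) == 1 for a in attrs))

def contranominal_scales(ctx, k):
    """Naive enumeration of all contranominal scales of dimension k in a
    formal context given as a 0/1 matrix (rows = objects, columns =
    attributes). Exponential -- for illustration only; ContraFinder is
    the paper's efficient replacement for exactly this computation."""
    for objs in combinations(range(len(ctx)), k):
        for attrs in combinations(range(len(ctx[0])), k):
            if is_contranominal(ctx, objs, attrs):
                yield objs, attrs

# A small context with exactly one contranominal scale of dimension 2:
ctx = [[1, 0, 1],
       [0, 1, 1],
       [1, 1, 1]]
print(list(contranominal_scales(ctx, 2)))  # [((0, 1), (0, 1))]
```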
Related papers
- Semantic Loss Functions for Neuro-Symbolic Structured Prediction [74.18322585177832]
We discuss the semantic loss, which injects knowledge about such structure, defined symbolically, into training.
It is agnostic to the arrangement of the symbols, and depends only on the semantics expressed thereby.
It can be combined with both discriminative and generative neural models.
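For a concrete toy instance (a sketch under the standard definition of semantic loss, not the paper's implementation): for an "exactly one output is true" constraint, the loss is the negative log of the probability mass the model assigns to satisfying assignments, treating outputs as independent Bernoulli variables.

```python
import numpy as np

def semantic_loss_exactly_one(p, eps=1e-12):
    """Semantic loss for the constraint 'exactly one output is true':
    -log of the total probability of satisfying assignments, where
    assignment i sets variable i true and all others false."""
    p = np.asarray(p, dtype=float)
    sat = np.array([p[i] * np.prod(np.delete(1.0 - p, i)) for i in range(len(p))])
    return -np.log(sat.sum() + eps)

print(semantic_loss_exactly_one([0.9, 0.05, 0.05]))  # ~0.20: nearly satisfied
print(semantic_loss_exactly_one([0.5, 0.5, 0.5]))    # ~0.98: far from satisfied
```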
arXiv Detail & Related papers (2024-05-12T22:18:25Z)
- Quantization of Large Language Models with an Overdetermined Basis [73.79368761182998]
We introduce an algorithm for data quantization based on the principles of Kashin representation.
Our findings demonstrate that Kashin Quantization achieves competitive or superior model performance.
arXiv Detail & Related papers (2024-04-15T12:38:46Z)
- Counterfactual Explanations for Graph Classification Through the Lenses of Density [19.53018353016675]
We define a general density-based counterfactual search framework to generate instance-level counterfactual explanations for graph classifiers.
We show two specific instantiations of this general framework: a method that searches for counterfactual graphs by opening or closing triangles, and a method driven by maximal cliques.
We evaluate the effectiveness of our approaches in 7 brain network datasets and compare the counterfactual statements generated according to several widely-used metrics.
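A minimal sketch of such a density-driven search, assuming a hypothetical `classify` oracle that maps an edge set to a binary label: restrict candidate edits to edge flips that open or close a triangle, and stop as soon as the predicted label flips.

```python
from itertools import combinations

def neighbours(edges, u):
    return {w for e in edges if u in e for w in e} - {u}

def triangle_counterfactual(nodes, edges, classify, max_steps=50):
    """Greedy counterfactual search restricted to edge flips that open
    or close a triangle (u and v share a neighbour). `classify` is a
    hypothetical black-box mapping a set of frozenset edges to 0/1."""
    edges = set(edges)
    target = 1 - classify(edges)
    for _ in range(max_steps):
        candidates = [frozenset((u, v)) for u, v in combinations(nodes, 2)
                      if neighbours(edges, u) & neighbours(edges, v)]
        for e in candidates:
            flipped = edges ^ {e}          # flip: add or remove the edge
            if classify(flipped) == target:
                return flipped             # counterfactual graph found
        if not candidates:
            return None
        edges ^= {candidates[0]}           # take one step and keep searching
    return None
```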
arXiv Detail & Related papers (2023-07-27T13:28:18Z)
- From Robustness to Explainability and Back Again [0.685316573653194]
The paper addresses the scalability limitations of formal explainability and proposes novel algorithms for computing formal explanations.
The proposed algorithm computes explanations by instead answering a number of robustness queries, such that the number of queries is at most linear in the number of features.
The experiments validate the practical efficiency of the proposed approach.
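A hedged sketch of the query pattern described above (a deletion-style loop; the function names are hypothetical): each feature is freed once, and a single robustness query decides whether it can be dropped from the explanation, giving at most one query per feature.

```python
def explanation_via_robustness(features, is_robust):
    """Deletion-based sketch: one robustness query per feature, so at
    most |features| oracle calls in total. `is_robust(fixed)` is a
    hypothetical oracle returning True iff fixing exactly the features
    in `fixed` guarantees the prediction cannot change."""
    fixed = set(features)
    for f in list(features):
        if is_robust(fixed - {f}):   # still robust with f freed?
            fixed.remove(f)          # then f is redundant for the explanation
    return fixed                     # a subset-minimal explanation
```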
arXiv Detail & Related papers (2023-06-05T17:21:05Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
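The underlying trick can be shown on a scalar toy problem (a sketch, not the paper's object-centric setup): once the iteration converges to a fixed point z* = f(z*, x), the implicit function theorem gives dz*/dx = (∂f/∂x) / (1 - ∂f/∂z) at (z*, x), so gradients never flow through the refinement loop itself.

```python
def fixed_point(f, x, z0=0.0, tol=1e-10, max_iter=10_000):
    """Iterate z <- f(z, x) until (approximate) convergence."""
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

def d_fixed_point_dx(df_dz, df_dx, z_star, x):
    """Implicit differentiation at z* = f(z*, x):
    dz*/dx = (df/dx) / (1 - df/dz); no backprop through the loop."""
    return df_dx(z_star, x) / (1.0 - df_dz(z_star, x))

# Example: f(z, x) = 0.5 * z + x has fixed point z* = 2x, so dz*/dx = 2.
f = lambda z, x: 0.5 * z + x
z_star = fixed_point(f, x=3.0)   # -> ~6.0
print(z_star, d_fixed_point_dx(lambda z, x: 0.5, lambda z, x: 1.0, z_star, 3.0))
```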
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Quantification and Aggregation over Concepts of the Ontology [0.0]
We argue that in some KR applications, we want to quantify over sets of concepts formally represented by symbols in the vocabulary.
We present an extension of first-order logic to support such abstractions, and show that it allows writing expressions of knowledge that are elaboration tolerant.
arXiv Detail & Related papers (2022-02-02T07:49:23Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
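In the spirit of that closed-form idea (a sketch under the assumption that the relevant weight is a single matrix A acting on the latent code), candidate semantic directions are the top right-singular vectors of A, i.e. the top eigenvectors of AᵀA:

```python
import numpy as np

def latent_directions(weight, k=5):
    """Closed-form candidate semantic directions: the right-singular
    vectors of a pre-trained weight matrix A that maps latent codes
    forward, which are the eigenvectors of A^T A."""
    _, _, vt = np.linalg.svd(weight, full_matrices=False)
    return vt[:k]                         # k directions in latent space

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 128))       # stand-in for a generator's first layer
print(latent_directions(A, k=3).shape)    # (3, 128)
```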
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
- Generalising Recursive Neural Models by Tensor Decomposition [12.069862650316262]
We introduce a general approach to model aggregation of structural context leveraging a tensor-based formulation.
We show how the exponential growth in the size of the parameter space can be controlled through an approximation based on the Tucker decomposition.
By this means, we can effectively regulate the trade-off between the expressivity of the encoding (controlled by the hidden size), computational complexity, and model generalisation.
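A small NumPy sketch of that parameter-space control (illustrative shapes, not the paper's exact model): aggregating two child states through a Tucker-factorised third-order tensor costs three rank-by-hidden factor matrices plus a rank³ core, instead of a dense hidden³ tensor.

```python
import numpy as np

def tucker_aggregate(core, U1, U2, U3, h1, h2):
    """Aggregate two child states through a Tucker-factorised weight
    tensor W ~ core x1 U1 x2 U2 x3 U3, never materialising W."""
    a = U1 @ h1                            # project child 1 into core space
    b = U2 @ h2                            # project child 2
    return U3.T @ np.einsum('ijk,i,j->k', core, a, b)   # back to hidden size

hidden, rank = 64, 8
rng = np.random.default_rng(1)
core = rng.standard_normal((rank, rank, rank))
U1, U2, U3 = (rng.standard_normal((rank, hidden)) for _ in range(3))
h = tucker_aggregate(core, U1, U2, U3,
                     rng.standard_normal(hidden), rng.standard_normal(hidden))
print(h.shape)   # (64,); parameters: 3*rank*hidden + rank**3, not hidden**3
```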
arXiv Detail & Related papers (2020-06-17T17:28:19Z)
- Learning Discrete Structured Representations by Adversarially Maximizing Mutual Information [39.87273353895564]
We propose learning discrete structured representations from unlabeled data by maximizing the mutual information between a structured latent variable and a target variable.
Our key technical contribution is an adversarial objective that can be used to tractably estimate mutual information assuming only the feasibility of cross entropy calculation.
We apply our model on document hashing and show that it outperforms current best baselines based on discrete and vector quantized variational autoencoders.
arXiv Detail & Related papers (2020-04-08T13:31:53Z)
- A General Framework for Consistent Structured Prediction with Implicit Loss Embeddings [113.15416137912399]
We propose and analyze a novel theoretical and algorithmic framework for structured prediction.
We study a large class of loss functions that implicitly defines a suitable geometry on the problem.
When dealing with output spaces with infinite cardinality, a suitable implicit formulation of the estimator is shown to be crucial.
arXiv Detail & Related papers (2020-02-13T10:30:04Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
Since parameterising every interaction explicitly quickly becomes intractable, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
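A minimal sketch of the CP idea (shapes are illustrative): with the interaction tensor W factorised as a sum of rank-1 terms, W ≈ Σ_r u_r⁽¹⁾ ⊗ u_r⁽²⁾ ⊗ u_r⁽³⁾, scoring a feature vector x reduces to projecting x through each mode's factor matrix and multiplying across modes, so W is never materialised.

```python
import numpy as np

def cp_interaction_score(factors, x):
    """Score x under a CP-factorised interaction tensor: the inner
    product <W, x (outer) x (outer) ...> equals sum_r prod_m (U_m x)_r,
    where `factors` holds one (rank x n_features) matrix per mode."""
    projections = np.stack([U @ x for U in factors])   # (n_modes, rank)
    return projections.prod(axis=0).sum()

rng = np.random.default_rng(2)
n_features, rank, order = 10, 4, 3
factors = [rng.standard_normal((rank, n_features)) * 0.1 for _ in range(order)]
print(cp_interaction_score(factors, rng.standard_normal(n_features)))
```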
arXiv Detail & Related papers (2020-01-27T22:38:40Z)