Stepwise functional refoundation of relational concept analysis
- URL: http://arxiv.org/abs/2310.06441v3
- Date: Tue, 9 Jan 2024 12:41:53 GMT
- Title: Stepwise functional refoundation of relational concept analysis
- Authors: Jérôme Euzenat (MOEX)
- Abstract summary: Relational concept analysis (RCA) is an extension of formal concept analysis that deals with several related contexts simultaneously.
RCA returns a single family of concept lattices, but when the data feature circular dependencies, other solutions may be considered acceptable.
We show that RCA returns the least element of the set of acceptable solutions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Relational concept analysis (RCA) is an extension of formal concept
analysis that makes it possible to deal with several related contexts simultaneously. It has been
designed for learning description logic theories from data and used within
various applications. A puzzling observation about RCA is that it returns a
single family of concept lattices although, when the data feature circular
dependencies, other solutions may be considered acceptable. The semantics of
RCA, provided in an operational way, does not shed light on this issue. In this
report, we define these acceptable solutions as those families of concept
lattices which belong to the space determined by the initial contexts
(well-formed), cannot scale new attributes (saturated), and refer only to
concepts of the family (self-supported). We adopt a functional view on the RCA
process by defining the space of well-formed solutions and two functions on
that space: one expansive and the other contractive. We show that the
acceptable solutions are the common fixed points of both functions. This is
achieved step by step, starting from a minimal version of RCA that considers
a single context, defined on a space of contexts and a space of lattices.
These spaces are then joined into a single space of context-lattice pairs,
which is further extended to a space of indexed families of context-lattice
pairs representing the objects manipulated by RCA. We show that RCA returns
the least element of the set of acceptable solutions. In addition, it is
possible to build dually an operation that generates its greatest element. The
set of acceptable solutions is a complete sublattice of the interval between
these two elements. Its structure and how the defined functions traverse it are
studied in detail.
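To make this functional view concrete, the following is a minimal, hypothetical sketch in Python: a toy powerset lattice equipped with an expansive and a contractive function, iterated to fixed points. The carrier set, `expand`, and `contract` are illustrative assumptions standing in for RCA's context-lattice spaces and its scaling and purging operations; this is not an implementation of RCA.

```python
# A toy analogue of the fixed-point view (assumptions throughout):
# `expand` and `contract` stand in for RCA's expansive and contractive
# functions; the powerset of {0,...,5} stands in for the solution space.

def iterate_to_fixed_point(f, start):
    """Iterate f from `start` until the value stops changing.

    On a finite lattice, iterating an expansive monotone function climbs
    to the least fixed point above `start`; dually, a contractive one
    descends to the greatest fixed point below `start`.
    """
    current = start
    while (nxt := f(current)) != current:
        current = nxt
    return current

TOP = frozenset(range(6))  # toy carrier: subsets of {0,...,5}

def expand(s):
    # Expansive: close the set upward under successor (it can only grow).
    return s | {x + 1 for x in s if x + 1 < 6}

def contract(s):
    # Contractive: keep an element only if its successor is kept or it is
    # maximal (it can only shrink), mimicking the removal of concepts that
    # are not self-supported.
    return frozenset(x for x in s if x == 5 or x + 1 in s)

seed = frozenset({2})  # stands in for the family built from the initial contexts
least = iterate_to_fixed_point(expand, seed)      # analogue of the RCA result
greatest = iterate_to_fixed_point(contract, TOP)  # analogue of the dual operation
print(sorted(least), sorted(greatest))            # [2, 3, 4, 5] [0, 1, 2, 3, 4, 5]
```

In this toy, the common fixed points of `expand` and `contract` form the chain ∅ ⊂ {5} ⊂ {4,5} ⊂ … ⊂ {0,…,5}, a small analogue of the complete sublattice of acceptable solutions between the least and greatest elements.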
Related papers
- CORG: Generating Answers from Complex, Interrelated Contexts [57.213304718157985]
In a real-world corpus, knowledge frequently recurs across documents but often contains inconsistencies due to ambiguous naming, outdated information, or errors.
Previous research has shown that language models struggle with these complexities, but has typically focused on single factors in isolation.
We introduce Context Organizer (CORG), a framework that organizes multiple contexts into independently processed groups.
arXiv Detail & Related papers (2025-04-25T02:40:48Z)
- Structure-Aware Correspondence Learning for Relative Pose Estimation [65.44234975976451]
Relative pose estimation provides a promising way for achieving object-agnostic pose estimation.
Existing 3D correspondence-based methods suffer from small overlaps in visible regions and unreliable feature estimation for invisible regions.
We propose a novel Structure-Aware Correspondence Learning method for Relative Pose Estimation, which consists of two key modules.
arXiv Detail & Related papers (2025-03-24T13:43:44Z)
- Clone-Resistant Weights in Metric Spaces: A Framework for Handling Redundancy Bias [23.27199615640474]
We are given a set of elements in a metric space.
The distribution of the elements is arbitrary, possibly adversarial.
Can we weigh the elements in a way that is resistant to such (adversarial) manipulations?
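A toy illustration of that question: under uniform weights, an adversary can inflate a region's total mass by cloning one element, whereas splitting weight within a metric ball keeps the mass bounded. The inverse-ball-size rule below is a simple assumption for exposition, not the framework proposed in the paper.

```python
# Toy demonstration of redundancy bias (the weighting rule is an
# illustrative assumption, not the paper's scheme).
import math

def ball_weights(points, radius):
    """Weight each point by 1/|r-ball around it|, so near-duplicates
    share a single unit of mass instead of each counting fully."""
    weights = []
    for p in points:
        ball = sum(1 for q in points if math.dist(p, q) <= radius)
        weights.append(1.0 / ball)
    return weights

base = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
cloned = base + [(0.01, 0.0)] * 10        # adversarially clone one element
print(sum(ball_weights(base, 1.0)))       # 3.0: each point stands alone
print(sum(ball_weights(cloned, 1.0)))     # 3.0: the 11 near-duplicates share one unit
```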
arXiv Detail & Related papers (2025-02-05T19:50:51Z)
- Unifying Attribution-Based Explanations Using Functional Decomposition [1.8216507818880976]
We propose a unifying framework of attribution-based explanation methods.
It provides a step towards a rigorous study of the similarities and differences of explanations.
arXiv Detail & Related papers (2024-12-18T09:04:07Z)
- Hierarchical Context Alignment with Disentangled Geometric and Temporal Modeling for Semantic Occupancy Prediction [61.484280369655536]
Camera-based 3D Semantic Occupancy Prediction (SOP) is crucial for understanding complex 3D scenes from limited 2D image observations.
Existing SOP methods typically aggregate contextual features to assist occupancy representation learning.
We introduce Hi-SOP, a new hierarchical context alignment paradigm for more accurate SOP.
arXiv Detail & Related papers (2024-12-11T09:53:10Z)
- Beyond Scalars: Concept-Based Alignment Analysis in Vision Transformers [10.400355814467401]
Vision transformers (ViTs) can be trained using various learning paradigms, from fully supervised to self-supervised.
We propose a concept-based alignment analysis of representations from four different ViTs.
This analysis reveals that increased supervision correlates with a reduction in the semantic structure of the learned representations.
arXiv Detail & Related papers (2024-12-09T16:33:28Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Spatial Semantic Recurrent Mining for Referring Image Segmentation [63.34997546393106]
We propose S²RM to achieve high-quality cross-modality fusion.
It follows a three-part working strategy: language feature distribution, spatial semantic recurrent coparsing, and parsed-semantic balancing.
Our proposed method performs favorably against other state-of-the-art algorithms.
arXiv Detail & Related papers (2024-05-15T00:17:48Z)
- Geometry-aware Reconstruction and Fusion-refined Rendering for Generalizable Neural Radiance Fields [18.474371929572918]
Generalizable NeRF aims to synthesize novel views for unseen scenes.
We introduce an Adaptive Cost Aggregation (ACA) approach to amplify the contribution of consistent pixel pairs.
We observe that two existing decoding strategies excel in different areas and are complementary.
arXiv Detail & Related papers (2024-04-26T16:46:28Z)
- A Geometric Notion of Causal Probing [91.14470073637236]
In a language model's representation space, all information about a concept such as verbal number is encoded in a linear subspace.
We give a set of intrinsic criteria which characterize an ideal linear concept subspace.
We find that LEACE returns a one-dimensional subspace containing roughly half of total concept information.
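As a rough illustration of the linear-subspace premise (a mean-difference direction on synthetic data; LEACE's actual least-squares-optimal construction is different and is not reproduced here): erasing a one-dimensional subspace can take a linear probe for the concept from high accuracy to near chance.

```python
# Erasing a one-dimensional linear concept subspace (illustrative sketch;
# the direction estimator and data are assumptions, not LEACE itself).
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16
labels = rng.integers(0, 2, n)              # binary concept, e.g. verbal number
reps = rng.normal(size=(n, d))
reps[:, 0] += 4.0 * labels                  # concept linearly encoded along axis 0

u = reps[labels == 1].mean(0) - reps[labels == 0].mean(0)
u /= np.linalg.norm(u)                      # estimated 1-D concept direction
erased = reps - np.outer(reps @ u, u)       # project onto its orthogonal complement

def probe_accuracy(x):
    """Nearest-class-mean linear probe for the concept."""
    w = x[labels == 1].mean(0) - x[labels == 0].mean(0)
    midpoint = (x[labels == 1].mean(0) + x[labels == 0].mean(0)) / 2
    return ((((x - midpoint) @ w) > 0).astype(int) == labels).mean()

print(probe_accuracy(reps), probe_accuracy(erased))  # high, then near chance
```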
arXiv Detail & Related papers (2023-07-27T17:57:57Z)
- Multi-dimensional concept discovery (MCD): A unifying framework with completeness guarantees [1.9465727478912072]
We propose Multi-dimensional Concept Discovery (MCD) as an extension of previous approaches that fulfills a completeness relation on the level of concepts.
We empirically demonstrate the superiority of MCD against more constrained concept definitions.
arXiv Detail & Related papers (2023-01-27T18:53:19Z)
- Concept Activation Regions: A Generalized Framework For Concept-Based Explanations [95.94432031144716]
Existing methods assume that the examples illustrating a concept are mapped in a fixed direction of the deep neural network's latent space.
In this work, we propose allowing concept examples to be scattered across different clusters in the DNN's latent space.
This concept activation region (CAR) formalism yields global concept-based explanations and local concept-based feature importance.
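On synthetic latents, the region view can be sketched as a nonlinear classifier: concept examples that fall in two separate clusters are captured by an RBF support-vector classifier, whereas no single linear direction could separate them. The data and classifier below are illustrative assumptions; the paper's global and local scoring is omitted.

```python
# A concept activation region as a nonlinear region in latent space
# (synthetic data; illustrative of the region-not-direction idea).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Concept examples form two separate clusters; negatives sit elsewhere.
pos = np.vstack([rng.normal((+3.0, 0.0), 0.5, (50, 2)),
                 rng.normal((-3.0, 0.0), 0.5, (50, 2))])
neg = rng.normal((0.0, +3.0), 0.5, (100, 2))
X = np.vstack([pos, neg])
y = np.r_[np.ones(100), np.zeros(100)]

car = SVC(kernel="rbf").fit(X, y)               # the concept's activation region
print(car.predict([[2.9, 0.1], [0.0, 3.0]]))    # [1. 0.]: inside vs. outside
```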
arXiv Detail & Related papers (2022-09-22T17:59:03Z)
- When are Post-hoc Conceptual Explanations Identifiable? [18.85180188353977]
When no human concept labels are available, concept discovery methods search trained embedding spaces for interpretable concepts.
We argue that concept discovery should be identifiable, meaning that a number of known concepts can be provably recovered to guarantee reliability of the explanations.
Our results highlight the strict conditions under which reliable concept discovery without human labels can be guaranteed.
arXiv Detail & Related papers (2022-06-28T10:21:17Z)
- Barlow constrained optimization for Visual Question Answering [105.3372546782068]
We propose a novel regularization for VQA models: Constrained Optimization using Barlow's theory (COB).
Our model also aligns the joint space with the answer embedding space, treating the answer and the image+question as two different 'views' of what is, in essence, the same semantic information.
When built on the state-of-the-art GGE model, the resulting model improves VQA accuracy by 1.4% and 4% on the VQA-CP v2 and VQA v2 datasets respectively.
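The summary leaves the constrained formulation implicit; for background, here is a sketch of the standard Barlow redundancy-reduction objective between two embedding 'views', which the regularizer builds on. The random embeddings and the weighting constant are stand-ins, not the paper's VQA-specific setup.

```python
# Standard Barlow redundancy-reduction loss between two 'views'
# (background sketch; COB's constrained formulation is not reproduced).
import numpy as np

def barlow_loss(z_a, z_b, lam=5e-3):
    """Push the cross-correlation of normalized embeddings toward the
    identity: diagonal terms to 1 (alignment), off-diagonal terms to 0
    (redundancy reduction)."""
    n = z_a.shape[0]
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = z_a.T @ z_b / n                       # cross-correlation matrix
    on_diag = ((np.diagonal(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diagonal(c) ** 2).sum()
    return on_diag + lam * off_diag

rng = np.random.default_rng(0)
joint = rng.normal(size=(256, 32))                 # stand-in: image+question embedding
answer = joint + 0.1 * rng.normal(size=(256, 32))  # stand-in: answer embedding
print(barlow_loss(joint, answer))                  # small: the two views nearly agree
```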
arXiv Detail & Related papers (2022-03-07T21:27:40Z)
- Implicit Bias of Projected Subgradient Method Gives Provable Robust Recovery of Subspaces of Unknown Codimension [12.354076490479514]
We show that Dual Principal Component Pursuit (DPCP) can provably solve problems in the unknown subspace dimension regime.
We propose a very simple algorithm based on running multiple instances of a projected subgradient method (PSGM).
In particular, we show that 1) all of the problem instances will converge to a vector in the nullspace of the subspace and 2) the ensemble of problem instance solutions will be sufficiently diverse to fully span the nullspace of the subspace.
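A single PSGM instance for the DPCP objective, minimizing ||X^T b||_1 over unit vectors b, can be sketched as follows; the geometrically decaying step size and the synthetic data are assumptions, and the paper's actual algorithm runs many such instances from random initializations so that their solutions span the nullspace.

```python
# One projected subgradient instance for DPCP: min ||X^T b||_1 s.t. ||b|| = 1
# (sketch with an assumed step-size schedule; the paper runs an ensemble).
import numpy as np

def psgm_dpcp(X, steps=500, mu=1e-2, beta=0.95, seed=0):
    rng = np.random.default_rng(seed)
    b = rng.normal(size=X.shape[0])
    b /= np.linalg.norm(b)
    for _ in range(steps):
        g = X @ np.sign(X.T @ b)     # subgradient of ||X^T b||_1
        b = b - mu * g
        b /= np.linalg.norm(b)       # project back onto the unit sphere
        mu *= beta                   # geometrically decaying step size
    return b

rng = np.random.default_rng(1)
inliers = rng.normal(size=(3, 200))
inliers[2] = 0.0                     # inliers span the plane z = 0 in R^3
outliers = rng.normal(size=(3, 20))
X = np.hstack([inliers, outliers])
print(psgm_dpcp(X))                  # close to +/- e_3, the plane's normal
```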
arXiv Detail & Related papers (2022-01-22T15:36:03Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
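The closed-form idea can be sketched in a few lines: the directions n maximizing ||A n|| under ||n|| = 1, for a pre-trained projection weight A, are the top eigenvectors of A^T A. The random matrix below is a stand-in for a real generator's first-layer weight.

```python
# Closed-form direction discovery from a projection weight A: the top
# eigenvectors of A^T A (random A as a stand-in for pre-trained weights).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1024, 512))        # stand-in: latent-to-feature weight
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
directions = eigvecs[:, ::-1][:, :5]    # top-5 candidate semantic directions
print(eigvals[::-1][:5])                # strengths of the dominant factors
```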
arXiv Detail & Related papers (2020-07-13T18:05:36Z)