Space of Reasons and Mathematical Model
- URL: http://arxiv.org/abs/2007.02489v1
- Date: Mon, 6 Jul 2020 01:13:43 GMT
- Title: Space of Reasons and Mathematical Model
- Authors: Florian Richter
- Abstract summary: Inferential relations govern our concept use.
In order to understand a concept it has to be located in a space of implications.
The crucial question is: How can the conditionality of language use be represented?
- Score: 8.475081627511166
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inferential relations govern our concept use. In order to understand a
concept it has to be located in a space of implications. There are different
kinds of conditions for statements, i.e., the conditions represent different
kinds of explanations, e.g., causal or conceptual explanations. The crucial
question is: How can the conditionality of language use be represented? The
conceptual background of representation in models is discussed
and in the end I propose how implications of propositional logic and conceptual
determinations can be represented in a model of a neural network.
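The abstract proposes representing implications of propositional logic in a neural network but does not include the model itself. A minimal sketch of the general idea, assuming a single threshold unit encoding material implication (the function name `implies_unit` and the specific weights are illustrative, not taken from the paper):

```python
def implies_unit(p: int, q: int) -> int:
    """Single threshold unit computing material implication p -> q.

    Weights: w_p = -1, w_q = +1, bias = 0.5. The unit fires (outputs 1)
    for every truth assignment except p = 1, q = 0, matching the truth
    table of material implication.
    """
    activation = -1 * p + 1 * q + 0.5
    return 1 if activation >= 0 else 0

# Full truth table of p -> q
for p in (0, 1):
    for q in (0, 1):
        print(f"{p} -> {q}: {implies_unit(p, q)}")
```

Because material implication is linearly separable, one unit suffices; connectives such as XOR would require a hidden layer.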
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- The Geometry of Categorical and Hierarchical Concepts in Large Language Models [15.126806053878855]
We show how to extend the formalization of the linear representation hypothesis to represent features (e.g., is_animal) as vectors.
We use the formalization to prove a relationship between the hierarchical structure of concepts and the geometry of their representations.
We validate these theoretical results on the Gemma and LLaMA-3 large language models, estimating representations for 900+ hierarchically related concepts using data from WordNet.
arXiv Detail & Related papers (2024-06-03T16:34:01Z)
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z)
- DiConStruct: Causal Concept-based Explanations through Black-Box Distillation [9.735426765564474]
We present DiConStruct, an explanation method that is both concept-based and causal.
Our explainer works as a distillation model to any black-box machine learning model by approximating its predictions while producing the respective explanations.
arXiv Detail & Related papers (2024-01-16T17:54:02Z)
- An Axiomatic Approach to Model-Agnostic Concept Explanations [67.84000759813435]
We propose an approach to concept explanations that satisfy three natural axioms: linearity, recursivity, and similarity.
We then establish connections with previous concept explanation methods, offering insight into their varying semantic meanings.
arXiv Detail & Related papers (2024-01-12T20:53:35Z)
- Semantics, Ontology and Explanation [0.0]
We discuss the relation between ontological unpacking and other forms of explanation in philosophy and science.
We also discuss the relation between ontological unpacking and other forms of explanation in the area of Artificial Intelligence.
arXiv Detail & Related papers (2023-04-21T16:54:34Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Explanation from Specification [3.04585143845864]
We formulate an approach where the type of explanation produced is guided by a specification.
Two examples are discussed: explanations for Bayesian networks using the theory of argumentation, and explanations for graph neural networks.
The approach is motivated by a theory of explanation in the philosophy of science, and it is related to current questions in the philosophy of science on the role of machine learning.
arXiv Detail & Related papers (2020-12-13T23:27:48Z)
- Thinking About Causation: A Causal Language with Epistemic Operators [58.720142291102135]
We extend the notion of a causal model with a representation of the state of an agent.
On the side of the object language, we add operators to express knowledge and the act of observing new information.
We provide a sound and complete axiomatization of the logic, and discuss the relation of this framework to causal team semantics.
arXiv Detail & Related papers (2020-10-30T12:16:45Z)
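The last entry extends causal models with an agent's epistemic state. The core notion of a causal model with interventions can be sketched minimally as follows, assuming a toy rain/sprinkler example (the variable names and structural equations are illustrative, not taken from the paper; the epistemic operators themselves are omitted):

```python
def causal_model(exogenous, interventions=None):
    """Tiny structural causal model evaluated in topological order.

    exogenous: values of variables with no causal parents (here: rain).
    interventions: dict implementing the do-operator; an intervened
    variable is set to the forced value and its structural equation
    is ignored.
    """
    interventions = interventions or {}
    v = {}
    # rain is exogenous unless intervened upon
    v["rain"] = interventions.get("rain", exogenous["rain"])
    # structural equation: the sprinkler runs only when it does not rain
    v["sprinkler"] = interventions.get("sprinkler", not v["rain"])
    # structural equation: the ground is wet if it rains or the sprinkler runs
    v["wet"] = interventions.get("wet", v["rain"] or v["sprinkler"])
    return v

# Observation: it rains, so the sprinkler is off and the ground is wet
observed = causal_model({"rain": True})
# Intervention do(sprinkler=True): override the sprinkler's equation
intervened = causal_model({"rain": False}, {"sprinkler": True})
```

Adding epistemic operators, as the paper does, would amount to tracking which of these variable values the agent has observed, on top of this intervention machinery.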
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.