Algebraic Models for Qualified Aggregation in General Rough Sets, and Reasoning Bias Discovery
- URL: http://arxiv.org/abs/2309.03217v2
- Date: Fri, 22 Sep 2023 21:36:57 GMT
- Title: Algebraic Models for Qualified Aggregation in General Rough Sets, and Reasoning Bias Discovery
- Authors: A Mani
- Abstract summary: The research is motivated by the desire to model skeptical or pessimistic, and optimistic or possibilistic aggregation in human reasoning.
The model is suitable for the study of discriminatory/toxic behavior in human reasoning, and of ML algorithms learning such behavior.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the context of general rough sets, the act of combining two things to
form another is not straightforward. The situation is similar for other theories that
concern uncertainty and vagueness. Such acts can be endowed with additional meanings
that go beyond structural conjunction and disjunction, as in the theory of $*$-norms
and associated implications over $L$-fuzzy sets. In the present research, algebraic
models of acts of combining things in generalized rough sets over lattices with
approximation operators (called rough convenience lattices) are invented. The
investigation is strongly motivated by the desire to model skeptical or pessimistic,
and optimistic or possibilistic, aggregation in human reasoning, and the choice of
operations is constrained by this perspective. Fundamental results on the weak
negations and implications afforded by the minimal models are proved. In addition,
the model is suitable for the study of discriminatory/toxic behavior in human
reasoning, and of ML algorithms learning such behavior.
Related papers
- Hard to Explain: On the Computational Hardness of In-Distribution Model Interpretation [0.9558392439655016]
The ability to interpret Machine Learning (ML) models is becoming increasingly essential.
Recent work has demonstrated that it is possible to formally assess interpretability by studying the computational complexity of explaining the decisions of various models.
arXiv Detail & Related papers (2024-08-07T17:20:52Z)
- Prediction Instability in Machine Learning Ensembles [0.0]
We prove a theorem that shows that any ensemble will exhibit at least one of the following forms of prediction instability.
It will either ignore agreement among all underlying models, change its mind when none of the underlying models have done so, or be manipulable through inclusion or exclusion of options it would never actually predict.
This analysis also sheds light on what specific forms of prediction instability to expect from particular ensemble algorithms; a toy illustration of the third form appears after this entry.
arXiv Detail & Related papers (2024-07-03T15:26:02Z)
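The following toy example, not taken from the paper, illustrates that third failure mode: a Borda-style score ensemble whose prediction flips when a class that no base model would ever predict is removed from the candidate set.

```python
# Toy illustration (not from the paper): a Borda-style score ensemble that is
# manipulable through inclusion/exclusion of an option it would never predict.

def borda_ensemble(rankings, classes):
    """Each ranking lists classes from most to least preferred; a class at
    position i among k candidates scores k - 1 - i points."""
    scores = {c: 0 for c in classes}
    for ranking in rankings:
        restricted = [c for c in ranking if c in classes]
        k = len(restricted)
        for i, c in enumerate(restricted):
            scores[c] += k - 1 - i
    return max(classes, key=lambda c: scores[c])


# Five base models; none ranks "C" first, so none of them would ever predict it.
rankings = [
    ["A", "C", "B"], ["A", "C", "B"],
    ["B", "A", "C"], ["B", "A", "C"], ["B", "A", "C"],
]

print(borda_ensemble(rankings, ["A", "B", "C"]))  # -> "A"
print(borda_ensemble(rankings, ["A", "B"]))       # -> "B" (dropping "C" flips the prediction)
```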
- Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z)
- Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification [72.08225446179783]
Inverse reinforcement learning aims to infer an agent's preferences from their behaviour.
To do this, we need a behavioural model of how the policy $\pi$ relates to the reward function $R$.
We analyse how sensitive the IRL problem is to misspecification of this behavioural model; a sketch of one commonly studied behavioural model appears after this entry.
arXiv Detail & Related papers (2024-03-11T16:09:39Z)
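As a concrete, hedged example of "a behavioural model of how $\pi$ relates to $R$", the sketch below implements Boltzmann rationality, one behavioural model commonly studied in this literature, in which action probabilities are proportional to $\exp(\beta \, Q_R(s, a))$; the function name and numbers are illustrative, and the models actually analysed in the paper may differ.

```python
# Boltzmann-rational behavioural model: pi(a | s) is proportional to
# exp(beta * Q_R(s, a)), where Q_R is the action-value function under
# reward R and beta controls how close the agent is to optimal.
import math


def boltzmann_policy(q_values: dict, beta: float = 1.0) -> dict:
    """Map action values for a single state to choice probabilities."""
    weights = {a: math.exp(beta * q) for a, q in q_values.items()}
    total = sum(weights.values())
    return {a: w / total for a, w in weights.items()}


# Hypothetical action values for one state under some reward function R.
q_values = {"left": 1.0, "right": 2.0, "stay": 0.5}
print(boltzmann_policy(q_values, beta=2.0))
# As beta grows the policy concentrates on the argmax action;
# as beta approaches 0 it approaches uniform random behaviour.
```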
- Invariant Causal Set Covering Machines [64.86459157191346]
Rule-based models, such as decision trees, appeal to practitioners due to their interpretable nature.
However, the learning algorithms that produce such models are often vulnerable to spurious associations and thus, they are not guaranteed to extract causally-relevant insights.
We propose Invariant Causal Set Covering Machines, an extension of the classical Set Covering Machine algorithm for conjunctions/disjunctions of binary-valued rules that provably avoids spurious associations.
arXiv Detail & Related papers (2023-06-07T20:52:01Z)
- Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations [62.65877150123775]
Causal abstraction is a promising theoretical framework for explainable artificial intelligence.
Existing causal abstraction methods require a brute-force search over alignments between the high-level model and the low-level one.
We present distributed alignment search (DAS), which overcomes these limitations.
arXiv Detail & Related papers (2023-03-05T00:57:49Z)
- Unifying Causal Inference and Reinforcement Learning using Higher-Order Category Theory [4.119151469153588]
We present a unified formalism for structure discovery of causal models and predictive state representation models in reinforcement learning.
Specifically, we model structure discovery in both settings using simplicial objects.
arXiv Detail & Related papers (2022-09-13T19:04:18Z)
- Rationales for Sequential Predictions [117.93025782838123]
Sequence models are a critical component of modern NLP systems, but their predictions are difficult to explain.
We consider model explanations through rationales, subsets of context that can explain individual model predictions.
We propose an efficient greedy algorithm to approximate this objective.
arXiv Detail & Related papers (2021-09-14T01:25:15Z)
- Understanding Double Descent Requires a Fine-Grained Bias-Variance Decomposition [34.235007566913396]
We describe an interpretable, symmetric decomposition of the variance into terms associated with the labels; the classical identity that such finer decompositions refine is recalled after this entry.
We find that the bias decreases monotonically with the network width, but the variance terms exhibit non-monotonic behavior.
We also analyze the strikingly rich phenomenology that arises.
arXiv Detail & Related papers (2020-11-04T21:04:02Z)
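For orientation, the textbook decomposition that finer-grained analyses of this kind start from can be written as below; this is the standard identity for squared error, not the paper's symmetric decomposition of the variance.

```latex
% Standard bias-variance decomposition for targets y = f(x) + \varepsilon,
% with E[\varepsilon] = 0, Var(\varepsilon) = \sigma^2, and a predictor
% \hat{f}(x; D) trained on a random dataset D (amsmath notation).
\mathbb{E}_{D,\varepsilon}\!\left[\bigl(y - \hat{f}(x; D)\bigr)^{2}\right]
  = \underbrace{\bigl(f(x) - \mathbb{E}_{D}[\hat{f}(x; D)]\bigr)^{2}}_{\text{bias}^{2}}
  + \underbrace{\mathbb{E}_{D}\!\left[\bigl(\hat{f}(x; D) - \mathbb{E}_{D}[\hat{f}(x; D)]\bigr)^{2}\right]}_{\text{variance}}
  + \sigma^{2}
```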
- Model Interpretability through the Lens of Computational Complexity [1.6631602844999724]
We study whether folklore interpretability claims have a correlate in terms of computational complexity theory.
We show that both linear and tree-based models are strictly more interpretable than neural networks.
arXiv Detail & Related papers (2020-10-23T09:50:40Z)
- Pairwise Supervision Can Provably Elicit a Decision Boundary [84.58020117487898]
Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary; a toy illustration appears after this entry.
arXiv Detail & Related papers (2020-06-11T05:35:16Z)
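The sketch below is a toy illustration, not the paper's algorithm, of why pairwise "same class / different class" supervision can pin down a binary classification: such constraints determine the labels, and hence the decision boundary, up to a global label flip. The function name and the connectivity/consistency assumptions are introduced here for illustration.

```python
# Toy illustration (not the paper's method): propagate pairwise constraints
# (+1 = same class, -1 = different class) to recover binary labels up to a
# global sign flip, assuming the constraint graph is connected and consistent.

def labels_from_pairs(n, pairs):
    """pairs: iterable of (i, j, s) with s = +1 (same class) or -1 (different).
    Returns one consistent labelling in {+1, -1}; its global flip is equally
    consistent with the pairwise supervision."""
    labels = {0: +1}  # fix the label of item 0; this is the unavoidable ambiguity
    changed = True
    while changed:
        changed = False
        for i, j, s in pairs:
            if i in labels and j not in labels:
                labels[j] = labels[i] * s
                changed = True
            elif j in labels and i not in labels:
                labels[i] = labels[j] * s
                changed = True
    return [labels[k] for k in range(n)]


print(labels_from_pairs(4, [(0, 1, +1), (1, 2, -1), (2, 3, +1)]))
# -> [1, 1, -1, -1]; flipping every sign is an equally valid solution.
```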
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.