Algebraic Semantics of Generalized RIFs
- URL: http://arxiv.org/abs/2109.12998v1
- Date: Sun, 12 Sep 2021 20:06:07 GMT
- Title: Algebraic Semantics of Generalized RIFs
- Authors: A Mani
- Abstract summary: Weak quasi rough inclusion functions (wqRIFs) are generalized to general granular operator spaces with scope for limiting contamination.
This potentially contributes to improving the (possibly automatic) selection of such functions and training methods, and to reducing contamination (and data intrusion) in applications.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A number of numeric measures like rough inclusion functions (RIFs) are used
in general rough sets and soft computing. But these are often intrusive by
definition, and amount to making unjustified assumptions about the data. The
contamination problem concerns recognizing the domains of discourse involved,
specifying errors, and reducing data intrusion relative to them. In this
research, weak quasi rough inclusion functions (wqRIFs) are generalized to
general granular operator spaces with scope for limiting contamination. New
algebraic operations are defined over collections of such functions and
studied. It is shown that the algebras formed by the generalized wqRIFs are
ordered hemirings with additional operators, whereas the generalized rough
inclusion functions lack comparable structure. This potentially contributes to
improving the (possibly automatic) selection of such functions and training
methods, and to reducing contamination (and data intrusion) in applications.
The underlying framework and associated concepts are explained in some detail,
as they are relatively new.
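For readers new to these measures, a brief illustration may help. A standard rough inclusion function on a finite universe assigns to a pair of sets the degree to which the first is included in the second, e.g. kappa(A, B) = |A ∩ B| / |A| for nonempty A (and 1 otherwise). The sketch below is a hypothetical illustration, not the construction from the paper: it shows one plausible way to equip a collection of [0,1]-valued inclusion functions with pointwise operations that do satisfy the hemiring axioms (pointwise max as addition, pointwise product as multiplication); the paper's actual operations and the granular operator space framework are more involved.

```python
from fractions import Fraction

def kappa(a: frozenset, b: frozenset) -> Fraction:
    """Standard rough inclusion function: the degree to which a is included in b."""
    if not a:
        return Fraction(1)
    return Fraction(len(a & b), len(a))

# Hypothetical pointwise operations over [0,1]-valued inclusion functions.
# Pointwise max is the "addition" (the constant-zero function is its identity)
# and pointwise product the "multiplication"; together they form an ordered
# commutative hemiring. This is an illustration only, not the operations
# defined in the paper.
def oplus(f, g):
    return lambda a, b: max(f(a, b), g(a, b))

def otimes(f, g):
    return lambda a, b: f(a, b) * g(a, b)

A, B = frozenset({1, 2, 3}), frozenset({2, 3, 4})
kappa_rev = lambda a, b: kappa(b, a)       # inclusion in the other direction
print(kappa(A, B))                         # 2/3
print(oplus(kappa, kappa_rev)(A, B))       # max(2/3, 2/3) = 2/3
print(otimes(kappa, kappa_rev)(A, B))      # 4/9
```

Distributivity of these pointwise operations reduces to a * max(b, c) = max(a*b, a*c), which holds for nonnegative values, so the hemiring axioms can be checked directly.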
Related papers
- MathGAP: Out-of-Distribution Evaluation on Problems with Arbitrarily Complex Proofs [80.96119560172224]
Large language models (LLMs) can solve arithmetic word problems with high accuracy, but little is known about how well they generalize to problems that are more complex than the ones on which they have been trained.
We present a framework for evaluating LLMs on problems with arbitrarily complex arithmetic proofs, called MathGAP.
arXiv Detail & Related papers (2024-10-17T12:48:14Z)
- Information-Theoretic Measures on Lattices for High-Order Interactions [0.7373617024876725]
We present a systematic framework that derives higher-order information-theoretic measures using lattice and operator function pairs.
We show that many commonly used measures can be derived within this framework; however, they are often restricted to sublattices of the partition lattice.
To fully characterise all interactions among $d$ variables, we introduce the Streitberg Information, using generalisations of KL divergence as an operator function.
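As a point of reference, the classical three-variable interaction information is among the simplest higher-order measures of this kind, obtained by inclusion-exclusion over the subset lattice of the variables. The sketch below computes it from empirical entropies; it is an illustration only and does not implement the Streitberg Information or the paper's lattice/operator-function machinery.

```python
import numpy as np
from collections import Counter

def entropy(*cols):
    """Empirical joint Shannon entropy (in bits) of the given columns."""
    counts = Counter(zip(*cols))
    n = sum(counts.values())
    p = np.array([c / n for c in counts.values()])
    return float(-(p * np.log2(p)).sum())

def interaction_information(x, y, z):
    """Three-way interaction information via inclusion-exclusion over the
    subset lattice of {X, Y, Z}."""
    return (entropy(x) + entropy(y) + entropy(z)
            - entropy(x, y) - entropy(x, z) - entropy(y, z)
            + entropy(x, y, z))

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100_000)
y = rng.integers(0, 2, 100_000)
z = x ^ y  # XOR: a purely synergistic interaction, about -1 bit
print(interaction_information(x, y, z))
```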
arXiv Detail & Related papers (2024-08-14T13:04:34Z)
- What's in a Prior? Learned Proximal Networks for Inverse Problems [9.934876060237345]
Proximal operators are ubiquitous in inverse problems, commonly appearing as part of strategies to regularize problems that are otherwise ill-posed.
Modern deep learning models have been brought to bear for these tasks too, as in the framework of plug-and-play or deep unrolling.
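For context, the proximal operator of a function f maps a point v to argmin_x f(x) + (1/2)||x - v||^2. The sketch below shows the textbook closed form for f = lam * ||.||_1 (soft-thresholding), the kind of hand-derived operator that a learned proximal network would replace with a trained model; it is a standard illustration, not code from the paper.

```python
import numpy as np

def prox_l1(v: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of f(x) = lam * ||x||_1:
    prox_f(v) = argmin_x lam * ||x||_1 + 0.5 * ||x - v||^2,
    whose closed-form solution is elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([-2.0, -0.3, 0.1, 1.5])
print(prox_l1(v, lam=0.5))  # [-1.5 -0.   0.   1. ]
```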
arXiv Detail & Related papers (2023-10-22T16:31:01Z)
- iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models [48.33685559041322]
This paper focuses on identifying the causal mechanism shifts in two or more related datasets over the same set of variables.
Code implementing the proposed method is open-source and publicly available at https://github.com/kevinsbello/iSCAN.
arXiv Detail & Related papers (2023-06-30T01:48:11Z)
- On Computing Probabilistic Abductive Explanations [30.325691263226968]
The most widely studied explainable AI (XAI) approaches are unsound.
PI-explanations also exhibit important drawbacks, the most visible of which is arguably their size.
This paper investigates practical approaches for computing relevant sets for a number of widely used classifiers.
arXiv Detail & Related papers (2022-12-12T15:47:10Z)
- PatchMix Augmentation to Identify Causal Features in Few-shot Learning [55.64873998196191]
Few-shot learning aims to transfer knowledge learned from base categories with sufficient labelled data to novel categories with scarce known information.
We propose a novel data augmentation strategy dubbed PatchMix that can break this spurious dependency.
We show that such an augmentation mechanism, different from existing ones, is able to identify the causal features.
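The paper's exact procedure is not reproduced here; as a rough illustration of patch-level mixing, a CutMix-style operation that pastes a random rectangular patch from one image into another conveys the general idea (the patch selection and label handling differ in the actual method).

```python
import numpy as np

def patch_mix(img_a: np.ndarray, img_b: np.ndarray, rng) -> np.ndarray:
    """Paste a random rectangular patch of img_b into a copy of img_a.
    A rough CutMix-style illustration, not the paper's exact PatchMix."""
    h, w = img_a.shape[:2]
    ph = rng.integers(1, h // 2 + 1)
    pw = rng.integers(1, w // 2 + 1)
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    out = img_a.copy()
    out[top:top + ph, left:left + pw] = img_b[top:top + ph, left:left + pw]
    return out

rng = np.random.default_rng(42)
a, b = np.zeros((32, 32, 3)), np.ones((32, 32, 3))
mixed = patch_mix(a, b, rng)
print(int(mixed[..., 0].sum()))  # number of pixels taken from b
```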
arXiv Detail & Related papers (2022-11-29T08:41:29Z)
- Learning Aggregation Functions [78.47770735205134]
We introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality.
We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- (max-) decomposition architectures.
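As a hedged sketch of the general idea (not LAF's actual parameterization): a single scalar parameter p can interpolate a family of aggregators between the plain sum (p = 1) and the max (p large); in a learnable aggregator such parameters would be trained by gradient descent with the rest of the network.

```python
import numpy as np

def power_aggregate(x: np.ndarray, p: float) -> float:
    """Parametrized aggregator for a set of nonnegative values:
    (sum of p-th powers)^(1/p). p = 1 gives the plain sum; as p
    grows the result approaches the max. Illustration only, not LAF."""
    return float(np.sum(x ** p) ** (1.0 / p))

x = np.array([0.5, 1.0, 3.0])
print(power_aggregate(x, 1.0))   # 4.5  (sum)
print(power_aggregate(x, 50.0))  # ~3.0 (approximately max)
```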
arXiv Detail & Related papers (2020-12-15T18:28:53Z)
- Importance Weight Estimation and Generalization in Domain Adaptation under Label Shift [8.10196482629998]
We study generalization under label shift in domain adaptation where the learner has access to labeled samples from the source domain but unlabeled samples from the target domain.
We introduce a new operator learning approach between Hilbert spaces defined on labels.
We show the generalization property of the importance weighted empirical risk minimization on the unseen target domain.
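To make the key quantity concrete: under label shift the importance weight is w(y) = p_T(y) / p_S(y), and the importance weighted empirical risk on labeled source data is (1/n) * sum_i w(y_i) * loss(f(x_i), y_i). The sketch below assumes the label marginals are known; estimating them from unlabeled target data is the hard part the paper addresses, and is not shown.

```python
import numpy as np

def importance_weighted_risk(losses, labels, p_source, p_target):
    """Importance-weighted empirical risk under label shift:
    (1/n) * sum_i w(y_i) * loss_i with w(y) = p_T(y) / p_S(y).
    Assumes known class marginals; in practice they are estimated
    from unlabeled target samples."""
    w = p_target / p_source               # per-class importance weights
    return float(np.mean(w[labels] * losses))

losses = np.array([0.2, 0.9, 0.4, 0.1])   # per-example source losses
labels = np.array([0, 1, 1, 0])           # source labels
p_source = np.array([0.5, 0.5])           # source class marginals
p_target = np.array([0.2, 0.8])           # target class marginals
print(importance_weighted_risk(losses, labels, p_source, p_target))  # 0.55
```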
arXiv Detail & Related papers (2020-11-29T01:37:58Z)
- The Role of Mutual Information in Variational Classifiers [47.10478919049443]
We study the generalization error of classifiers relying on encodings trained on the cross-entropy loss.
We derive bounds to the generalization error showing that there exists a regime where the generalization error is bounded by the mutual information.
arXiv Detail & Related papers (2020-10-22T12:27:57Z)
- Self-training Avoids Using Spurious Features Under Domain Shift [54.794607791641745]
In unsupervised domain adaptation, conditional entropy minimization and pseudo-labeling work even when the domain shifts are much larger than those analyzed by existing theory.
We identify and analyze one particular setting where the domain shift can be large, but certain spurious features correlate with the label in the source domain while being independent of the label in the target domain.
arXiv Detail & Related papers (2020-06-17T17:51:42Z)
- Text Classification with Few Examples using Controlled Generalization [58.971750512415134]
Current practice relies on pre-trained word embeddings to map words unseen in training to similar seen ones.
Our alternative begins with sparse pre-trained representations derived from unlabeled parsed corpora.
We show that a feed-forward network over these vectors is especially effective in low-data scenarios.
arXiv Detail & Related papers (2020-05-18T06:04:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.