Coarse Set Theory: A Mathematical Foundation for Coarse Ethics
- URL: http://arxiv.org/abs/2502.07347v1
- Date: Tue, 11 Feb 2025 08:18:37 GMT
- Title: Coarse Set Theory: A Mathematical Foundation for Coarse Ethics
- Authors: Takashi Izumo
- Abstract summary: This paper introduces Coarse Set Theory (CST) to establish a mathematical framework for Coarse Ethics (CE).
We define coarse sets using totally ordered sets and propose axioms that characterize the hierarchical relationships between elements and their groupings.
We extend this framework by defining coarse mappings, which transform detailed individual data into coarser representations.
- Abstract: In ethical decision-making, individuals are often evaluated based on generalized assessments rather than precise individual performance. This concept, known as Coarse Ethics (CE), has primarily been discussed in natural language without a formal mathematical foundation. This paper introduces Coarse Set Theory (CST) to establish a mathematical framework for CE. We define coarse sets using totally ordered sets and propose axioms that characterize the hierarchical relationships between elements and their groupings. Additionally, we introduce coarse-grained sets, which partition an underlying set into equivalence classes based on predefined criteria. We extend this framework by defining coarse mappings, which transform detailed individual data into coarser representations while maintaining essential structural properties. To measure the information loss, we employ Kullback-Leibler (KL) divergence, demonstrating how different coarse partitions affect the preservation of information. We illustrate how CST can be applied to real-world grading systems through theoretical formulations and empirical analysis. This study provides a rigorous foundation for CE, enabling a more systematic exploration of fairness, interpretability, and decision-making trade-offs.
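To make the abstract's constructions concrete, the following is a minimal Python sketch of a coarse mapping over a totally ordered score set and of measuring the induced information loss with KL divergence. The grade thresholds, the uniform-within-block reconstruction of the fine distribution, and all identifiers below are illustrative assumptions, not definitions taken from the paper.

```python
import math
from collections import Counter

# Hypothetical grade partition of scores 0-100 into equivalence classes.
# The thresholds are illustrative, not taken from the paper.
BINS = [(0, 59, "F"), (60, 69, "C"), (70, 84, "B"), (85, 100, "A")]

def coarse_map(score: int) -> str:
    """Coarse mapping: send a fine-grained score to its grade block."""
    for lo, hi, label in BINS:
        if lo <= score <= hi:
            return label
    raise ValueError(f"score {score} outside 0-100")

def coarsening_kl(scores: list[int]) -> float:
    """KL(p || q) in nats, where p is the empirical score distribution and
    q redistributes each block's mass uniformly over the distinct scores
    observed in that block (one plausible reading of the paper's measure)."""
    n = len(scores)
    p = {s: c / n for s, c in Counter(scores).items()}
    block_mass = Counter(coarse_map(s) for s in scores)   # counts per block
    block_support = Counter(coarse_map(s) for s in p)     # distinct scores per block
    kl = 0.0
    for s, p_s in p.items():
        b = coarse_map(s)
        q_s = (block_mass[b] / n) / block_support[b]      # uniform within block
        kl += p_s * math.log(p_s / q_s)
    return kl

scores = [95, 95, 95, 90, 88, 74, 71, 65, 58, 55]
print(coarsening_kl(scores))  # > 0: the A block hides that 95 dominates
```

Under this construction, refining the partition never increases the divergence, so a coarser grading loses at least as much information as a finer one, mirroring the trade-off the abstract describes.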
Related papers
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite the recognized importance of tokenization as a standard representation method in NLP, its theoretical underpinnings are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Provable Compositional Generalization for Object-Centric Learning [55.658215686626484]
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.
We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally.
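As a rough illustration of the encoder-decoder consistency condition mentioned above, here is a minimal sketch; the toy linear maps and the squared-error form of the penalty are assumptions for illustration, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 64))  # toy linear encoder (not the paper's model)
W_dec = rng.normal(size=(64, 16))  # toy linear decoder

def encode(x: np.ndarray) -> np.ndarray:
    return W_enc @ x

def decode(z: np.ndarray) -> np.ndarray:
    return W_dec @ z

def consistency_penalty(x: np.ndarray) -> float:
    """Encoder-decoder consistency, read as: re-encoding the
    reconstruction should reproduce the original latent code."""
    z = encode(x)
    z_cycle = encode(decode(z))
    return float(np.mean((z - z_cycle) ** 2))

print(consistency_penalty(rng.normal(size=64)))
```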
arXiv Detail & Related papers (2023-10-09T01:18:07Z)
- A Category-theoretical Meta-analysis of Definitions of Disentanglement [97.34033555407403]
Disentangling the factors of variation in data is a fundamental concept in machine learning.
This paper presents a meta-analysis of existing definitions of disentanglement.
arXiv Detail & Related papers (2023-05-11T15:24:20Z)
- Towards Understanding the Mechanism of Contrastive Learning via Similarity Structure: A Theoretical Analysis [10.29814984060018]
We consider a kernel-based contrastive learning framework termed Kernel Contrastive Learning (KCL).
We introduce a formulation of the similarity structure of learned representations by utilizing a statistical dependency viewpoint.
We establish a new upper bound on the classification error of a downstream task, showing that the theory is consistent with the empirical success of contrastive learning.
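For intuition about what a kernel-based contrastive objective can look like, here is a minimal sketch; the InfoNCE-style form and the RBF kernel are illustrative stand-ins, not the paper's exact KCL objective.

```python
import numpy as np

def rbf_kernel(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """RBF kernel as the similarity measure between representations."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def kernel_contrastive_loss(anchor, positive, negatives, gamma=1.0):
    """InfoNCE-style loss with kernel similarities: pull the positive
    pair together, push negatives apart."""
    pos = rbf_kernel(anchor, positive, gamma)
    negs = sum(rbf_kernel(anchor, n, gamma) for n in negatives)
    return -np.log(pos / (pos + negs))

rng = np.random.default_rng(0)
a = rng.normal(size=8)
print(kernel_contrastive_loss(a, a + 0.1 * rng.normal(size=8),
                              [rng.normal(size=8) for _ in range(4)]))
```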
arXiv Detail & Related papers (2023-04-01T21:53:29Z)
- Linear Spaces of Meanings: Compositional Structures in Vision-Language Models [110.00434385712786]
We investigate compositional structures in data embeddings from pre-trained vision-language models (VLMs).
We first present a framework for understanding compositional structures from a geometric perspective.
We then explain what these structures entail probabilistically in the case of VLM embeddings, providing intuitions for why they arise in practice.
arXiv Detail & Related papers (2023-02-28T08:11:56Z)
- Variational Cross-Graph Reasoning and Adaptive Structured Semantics Learning for Compositional Temporal Grounding [143.5927158318524]
Temporal grounding is the task of locating a specific segment from an untrimmed video according to a query sentence.
We introduce a new Compositional Temporal Grounding task and construct two new dataset splits.
We argue that the structured semantics inherent in video and language is the crucial factor in achieving compositional generalization.
arXiv Detail & Related papers (2023-01-22T08:02:23Z)
- Relational Proxies: Emergent Relationships as Fine-Grained Discriminators [52.17542855760418]
We propose a novel approach that leverages relational information between the global and local parts of an object to encode its label.
We design Relational Proxies based on our theoretical findings and evaluate the method on seven challenging fine-grained benchmark datasets.
We also experimentally validate our theory and obtain consistent results across multiple benchmarks.
arXiv Detail & Related papers (2022-10-05T11:08:04Z)
- Granular Directed Rough Sets, Concept Organization and Soft Clustering [0.0]
Up-directed rough sets were introduced and studied by the present author in earlier papers.
That work is extended here in two different granular directions, yielding a surprising algebraic semantics.
This research is expected to have significant theoretical and practical applications in related domains.
arXiv Detail & Related papers (2022-08-13T11:01:05Z)
- CURI: A Benchmark for Productive Concept Learning Under Uncertainty [33.83721664338612]
We introduce a new few-shot, meta-learning benchmark, Compositional Reasoning Under Uncertainty (CURI).
CURI evaluates different aspects of productive and systematic generalization, including abstract reasoning about disentangling, productive generalization, learning operations, and variable binding, among others.
It also defines a model-independent "compositionality gap" to evaluate the difficulty of generalizing out-of-distribution along each of these axes.
arXiv Detail & Related papers (2020-10-06T16:23:17Z)
- Expressiveness and machine processability of Knowledge Organization Systems (KOS): An analysis of concepts and relations [0.0]
Both the expressiveness and the machine processability of a Knowledge Organization System (KOS) are largely determined by its structural rules.
Ontologies explicitly define diverse types of relations, and are by their nature machine-processable.
arXiv Detail & Related papers (2020-03-11T12:35:52Z)
- The Contextuality-by-Default View of the Sheaf-Theoretic Approach to Contextuality [0.0]
Sheaf-Theoretic Contextuality (STC) theory is a very general account of whether structures on multiply overlapping subsets of a set can be viewed as inherited from a global structure imposed on the entire set.
I show that when STC is applied to systems of random variables, it can be recast in the language of the Contextuality-by-Default (CbD) theory.
I show that an apparent difficulty in this recasting can be resolved by considering systems with multiple possible deterministic realizations as quasi-probabilistic systems with Bayesian priors assigned to the realizations.
arXiv Detail & Related papers (2019-06-06T17:38:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.