Modelling Trust and Trusted Systems: A Category Theoretic Approach
- URL: http://arxiv.org/abs/2602.11376v1
- Date: Wed, 11 Feb 2026 21:08:51 GMT
- Title: Modelling Trust and Trusted Systems: A Category Theoretic Approach
- Authors: Ian Oliver, Pekka Kuure
- Abstract summary: We formalize elements, claims, results, and decisions as objects within a category. The framework provides a rigorous approach to understanding trust establishment. We present a number of worked examples including boot-run-shutdown sequences and Evil Maid attacks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a category-theoretic framework for modelling trust as applied to trusted computation systems and remote attestation. By formalizing elements, claims, results, and decisions as objects within a category, and the processes of attestation, verification, and decision-making as morphisms, the framework provides a rigorous approach to understanding trust establishment and a well-defined semantics for terms such as 'trustworthiness' and 'justification'/forensics. The trust decision space is formalized using a Heyting algebra, allowing nuanced trust levels that extend beyond binary trusted/untrusted states. We then present additional structures, in particular utilising exponentiation in the category-theoretic sense to define compositions of attestation operations and to provide the basis of a measure of the expressibility of an attestation environment. We present a number of worked examples, including boot-run-shutdown sequences, Evil Maid attacks, and the specification of an attestation environment based upon this model. We then address challenges in modelling dynamic and larger systems made of multiple compositions.
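The abstract's two key ingredients can be made concrete in a short sketch. The Haskell fragment below is not the authors' formalization: the type names, the four-level chain, and the toy verification policy are illustrative assumptions. It models the trust decision space as a finite Heyting algebra and composes attestation, verification, and decision morphisms into a trust-establishment map.

```haskell
-- A minimal sketch, not the paper's formalization. Any finite total order
-- is a Heyting algebra, so a chain of trust levels supports meet, join,
-- and relative pseudo-complement (implication).

data TrustLevel = Untrusted | Suspect | Provisional | Trusted
  deriving (Eq, Ord, Show, Enum, Bounded)

-- Heyting operations on the chain: meet = min, join = max,
-- and implication a ==> b is Trusted when a <= b, otherwise b.
meet, join, imp :: TrustLevel -> TrustLevel -> TrustLevel
meet = min
join = max
imp a b = if a <= b then maxBound else b

-- Intuitionistic negation: neg a = a ==> Untrusted.
neg :: TrustLevel -> TrustLevel
neg a = imp a minBound

-- Objects of the category: elements, claims, and results.
newtype Element = Element String deriving Show
newtype Claim   = Claim   String deriving Show
newtype Result  = Result  Bool   deriving Show

-- Morphisms: attestation maps elements to claims, verification maps
-- claims to results, and a decision maps results into the algebra.
attest :: Element -> Claim
attest (Element e) = Claim ("measured:" ++ e)

verify :: Claim -> Result
verify (Claim c) = Result (c == "measured:bootloader")  -- toy policy

decide :: Result -> TrustLevel
decide (Result True)  = Trusted
decide (Result False) = Untrusted

-- Trust establishment is the composite morphism decide . verify . attest.
trustOf :: Element -> TrustLevel
trustOf = decide . verify . attest

main :: IO ()
main = do
  print (trustOf (Element "bootloader"))        -- Trusted
  print (trustOf (Element "evil-maid-payload")) -- Untrusted
  print (meet Trusted Suspect)                  -- Suspect
  print (imp Suspect Provisional)               -- Trusted, as Suspect <= Provisional
```

Exponentiation in the paper's sense would plausibly correspond to treating a map such as `Element -> TrustLevel` itself as an object, so that whole attestation pipelines can be composed and compared; that structure is beyond this toy chain.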
Related papers
- Composable Attestation: A Generalized Framework for Continuous and Incremental Trust in AI-Driven Distributed Systems [4.2822349607372265]
This paper presents composable attestation as a generalized cryptographic framework for continuous and incremental trust in distributed systems. We establish a rigorous mathematical foundation defining the core properties of such attestation systems. The framework's utility extends to applications such as secure AI model integrity verification, federated learning, and runtime trust assurance.
arXiv Detail & Related papers (2026-03-02T22:45:26Z) - Trust in One Round: Confidence Estimation for Large Language Models via Structural Signals [13.89434979851652]
Large language models (LLMs) are increasingly deployed in domains where errors carry high social, scientific, or safety costs. We present Structural Confidence, a single-pass, model-agnostic framework that enhances output correctness prediction.
arXiv Detail & Related papers (2026-02-01T02:35:59Z) - Beyond Predictive Uncertainty: Reliable Representation Learning with Structural Constraints [0.3948325938742681]
We argue that reliability should be regarded as a first-class property of learned representations themselves. We propose a principled framework for reliable representation learning that explicitly models representation-level uncertainty. Our approach introduces uncertainty-aware regularization directly in the representation space, encouraging representations that are not only predictive but also stable, well-calibrated, and robust to noise and structural perturbations.
arXiv Detail & Related papers (2026-01-22T18:19:52Z) - Towards the Formalization of a Trustworthy AI for Mining Interpretable Models explOiting Sophisticated Algorithms [4.587316936127635]
Interpretable-by-design models are crucial for fostering trust, accountability, and safe adoption of automated decision-making models in real-world applications. We formalize a comprehensive methodology for generating predictive models that balance interpretability with performance. By evaluating ethical measures during model generation, this framework establishes the theoretical foundations for developing AI systems.
arXiv Detail & Related papers (2025-10-23T14:54:33Z) - Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z) - Security Properties through the Lens of Modal Logic [4.548429316641551]
We introduce a framework for reasoning about the security of computer systems using modal logic.
We show how to use our formalism to represent various variants of confidentiality, integrity, robust declassification and transparent endorsement.
arXiv Detail & Related papers (2023-09-18T07:37:12Z) - Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework (a toy sketch of the conformal quantile step appears after this list).
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z) - Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z) - Structural Causal Models Are (Solvable by) Credal Networks [70.45873402967297]
Causal inferences can be obtained by standard algorithms for the updating of credal nets.
This contribution should be regarded as a systematic approach to represent structural causal models by credal networks.
Experiments show that approximate algorithms for credal networks can immediately be used to do causal inference in real-size problems.
arXiv Detail & Related papers (2020-08-02T11:19:36Z) - A general framework for defining and optimizing robustness [74.67016173858497]
We propose a rigorous and flexible framework for defining different types of robustness properties for classifiers.
Our concept is based on postulates that robustness of a classifier should be considered as a property that is independent of accuracy.
We develop a very general robustness framework that is applicable to any type of classification model.
arXiv Detail & Related papers (2020-06-19T13:24:20Z)
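Of the related papers above, the Federated Conformal Predictors entry describes the most self-contained mechanism, so here is the promised sketch. This Haskell toy is not the paper's code: it pools calibration scores as in the fully exchangeable case and omits the paper's partial-exchangeability correction, and the model, data, and alpha are made up for illustration.

```haskell
-- A toy sketch of split conformal prediction with per-client calibration.
import Data.List (sort)

-- Nonconformity score: absolute residual between label and prediction.
score :: Double -> Double -> Double
score yTrue yPred = abs (yTrue - yPred)

-- Each client scores its local calibration set against the shared model;
-- only these scores (not raw data) would need to be communicated.
clientScores :: (Double -> Double) -> [(Double, Double)] -> [Double]
clientScores model calib = [score y (model x) | (x, y) <- calib]

-- Split-conformal quantile over the pooled scores: the
-- ceil((1 - alpha) * (n + 1))-th smallest calibration score.
conformalQuantile :: Double -> [[Double]] -> Double
conformalQuantile alpha perClient =
  let scores = sort (concat perClient)
      n      = length scores
      k      = ceiling ((1 - alpha) * fromIntegral (n + 1)) :: Int
  in scores !! min (n - 1) (max 0 (k - 1))

-- Prediction interval for a new input: model x +/- qhat.
predictInterval :: (Double -> Double) -> Double -> Double -> (Double, Double)
predictInterval model qhat x = (model x - qhat, model x + qhat)

main :: IO ()
main = do
  let model x = 2 * x                           -- toy shared model
      client1 = [(1, 2.1), (2, 3.8), (3, 6.3)]  -- (x, y) calibration pairs
      client2 = [(1, 1.7), (4, 8.4)]
      pooled  = map (clientScores model) [client1, client2]
      qhat    = conformalQuantile 0.2 pooled
  print qhat                                    -- about 0.4 on this data
  print (predictInterval model qhat 5)          -- about (9.6, 10.4)
```

The coverage guarantee of this pooled variant rests on full exchangeability across clients; the paper's contribution is precisely to relax that assumption for the FL setting.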