ONTrust: A Reference Ontology of Trust
- URL: http://arxiv.org/abs/2602.07662v1
- Date: Sat, 07 Feb 2026 18:47:34 GMT
- Title: ONTrust: A Reference Ontology of Trust
- Authors: Glenda Amaral, Tiago Prince Sales, Riccardo Baratella, Daniele Porello, Renata Guizzardi, Giancarlo Guizzardi
- Abstract summary: This paper is the culmination of a long-term research program aimed at providing a solid ontological foundation for trust. A Reference Ontology of Trust (ONTrust) was developed, grounded in the Unified Foundational Ontology and specified in OntoUML. ONTrust formally characterizes the concept of trust and its different types, describes the factors that can influence trust, and explains how risk emerges from trust relations.
- Score: 1.068745732421904
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Trust has stood out more than ever in light of recent innovations. Examples include advances in artificial intelligence that make machines more and more humanlike, and the introduction of decentralized technologies (e.g. blockchains), which create new forms of (decentralized) trust. These developments have the potential to improve the provision of products and services, as well as to contribute to individual and collective well-being. However, their adoption depends largely on trust. To build trustworthy systems, along with defining laws, regulations, and proper governance models for new forms of trust, it is necessary to properly conceptualize trust so that it can be understood both by humans and machines. This paper is the culmination of a long-term research program aimed at providing a solid ontological foundation for trust by creating reference conceptual models to support information modeling, automated reasoning, information integration, and semantic interoperability tasks. To this end, a Reference Ontology of Trust (ONTrust) was developed, grounded in the Unified Foundational Ontology and specified in OntoUML. ONTrust has been applied in several initiatives that demonstrate, for example, how it can be used for conceptual modeling and enterprise architecture design, for language evaluation and (re)design, for trust management, for requirements engineering, and for trustworthy artificial intelligence (AI) in the context of affective Human-AI teaming. ONTrust formally characterizes the concept of trust and its different types, describes the factors that can influence trust, and explains how risk emerges from trust relations. To illustrate how ONTrust works, the ontology is applied to model two case studies drawn from the literature.
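Read as a specification, the abstract suggests a small conceptual vocabulary: a trustor, a trustee, a trust relation qualified by beliefs (the influencing factors), and risk that emerges from relying on the trustee. The Python sketch below is a loose, hypothetical rendering of that vocabulary; the class names, the social vs. institution-based split, and the numeric belief strengths are our own reading of the abstract, not the OntoUML specification itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class TrustType(Enum):
    """Trust types; the exact split is our assumption, not the OntoUML model."""
    SOCIAL = "social trust"                        # trustee is another agent
    INSTITUTION_BASED = "institution-based trust"  # trust backed by institutions


@dataclass
class Belief:
    """An influencing factor, e.g. a belief about the trustee's capability."""
    description: str
    strength: float  # hypothetical 0..1 degree of conviction


@dataclass
class TrustRelation:
    """A trustor trusting a trustee to achieve an intention."""
    trustor: str
    trustee: str
    trust_type: TrustType
    intention: str                  # what the trustor relies on the trustee for
    beliefs: List[Belief] = field(default_factory=list)

    def emergent_risk(self) -> str:
        # Per the abstract, risk emerges from trust relations: by trusting,
        # the trustor becomes vulnerable to the trustee failing the intention.
        return f"{self.trustor} is exposed to {self.trustee} failing to: {self.intention}"


# Toy instance: a user trusting a blockchain-based payment network.
relation = TrustRelation(
    trustor="Alice",
    trustee="payment network",
    trust_type=TrustType.INSTITUTION_BASED,
    intention="settle the transaction",
    beliefs=[Belief("the protocol behaves as specified", 0.9)],
)
print(relation.emergent_risk())
```

A machine-readable version would live in OntoUML or OWL rather than Python; the sketch only pins down terminology.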
Related papers
- Revisiting Trust in the Era of Generative AI: Factorial Structure and Latent Profiles [5.109743403025609]
Trust is one of the most important factors shaping whether and how people adopt and rely on artificial intelligence (AI). Most existing studies measure trust in terms of functionality, focusing on whether a system is reliable, accurate, or easy to use. In this study, we introduce and validate the Human-AI Trust Scale (HAITS), a new measure designed to capture both the rational and relational aspects of trust in GenAI.
arXiv Detail & Related papers (2025-10-11T12:39:53Z) - Ties of Trust: a bowtie model to uncover trustor-trustee relationships in LLMs [1.1149261035759372]
We introduce a bowtie model for conceptualizing and formulating trust in Large Language Models (LLMs). Its core component comprehensively explores trust by tying together its two sides, namely the trustor and the trustee, along with their intricate relationships. We uncover these relationships within the proposed bowtie model and extend the analysis to its broader sociotechnical ecosystem.
arXiv Detail & Related papers (2025-06-11T11:42:52Z) - Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z) - On the Need to Rethink Trust in AI Assistants for Software Development: A Critical Review [16.774993642353724]
Trust is a fundamental concept in human decision-making and collaboration. Software engineering articles often use the term trust informally. Related disciplines commonly embed their methodology and results in established trust models.
arXiv Detail & Related papers (2025-04-16T19:52:21Z) - On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [377.2483044466149]
Generative Foundation Models (GenFMs) have emerged as transformative tools. Their widespread adoption raises critical concerns regarding trustworthiness across multiple dimensions. This paper presents a comprehensive framework that addresses these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z) - Distributed Trust Through the Lens of Software Architecture [13.732161898452377]
This paper will survey the concept of distributed trust in multiple disciplines.
It will take a system/software architecture point of view to look at trust redistribution/shift and the associated tradeoffs in systems and applications enabled by distributed trust technologies.
arXiv Detail & Related papers (2023-05-25T06:53:18Z) - KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things). Given the associated risks and uncertainty, a crucial and urgent problem is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z) - Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
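Since this entry formalizes trust as anticipation that a contract will hold, a minimal data-structure sketch can make the distinction between manifested and warranted trust concrete. Everything below, including the field names and the warranted-trust condition, is a hypothetical reading of the abstract, not the paper's own formalization.

```python
from dataclasses import dataclass


@dataclass
class Contract:
    """An implicit or explicit expectation the user holds about the AI."""
    description: str  # e.g. "the classifier is accurate on in-domain data"
    explicit: bool


@dataclass
class ContractualTrust:
    """User trust that a specific contract will hold (our reading)."""
    user: str
    ai_system: str
    contract: Contract
    anticipated_to_hold: bool  # the user's anticipation
    actually_holds: bool       # the system's real behavior

    def manifested(self) -> bool:
        # Hypothetical: trust has manifested if the user anticipates
        # that the contract will hold.
        return self.anticipated_to_hold

    def warranted(self) -> bool:
        # Hypothetical: trust is warranted when the anticipation is backed
        # by the system actually upholding the contract.
        return self.anticipated_to_hold and self.actually_holds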
arXiv Detail & Related papers (2020-10-15T03:07:23Z) - Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of a trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
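The trust matrix in the last entry invites a small numeric illustration. The sketch below assumes a simple stand-in for question-answer trust: the model's confidence when the actor (predicted) answer matches the oracle answer, and one minus that confidence otherwise; the paper's actual formulation may differ. Each matrix cell then averages this trust over all samples with a given (oracle answer, actor answer) pair.

```python
import numpy as np


def trust_matrix(oracle, actor, confidence, num_classes):
    """Average question-answer trust per (oracle answer, actor answer) cell.

    Assumed stand-in definition: trust = confidence on a correct answer,
    1 - confidence on an incorrect one (hypothetical, for illustration).
    """
    total = np.zeros((num_classes, num_classes))
    count = np.zeros((num_classes, num_classes))
    for y, y_hat, c in zip(oracle, actor, confidence):
        qa_trust = c if y == y_hat else 1.0 - c
        total[y, y_hat] += qa_trust
        count[y, y_hat] += 1
    # Cells never observed stay NaN; observed cells hold the mean trust.
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)


# Toy example: 3-class task, six (oracle, actor, confidence) triples.
oracle = [0, 0, 1, 1, 2, 2]
actor = [0, 1, 1, 1, 2, 0]
conf = [0.9, 0.8, 0.7, 0.6, 0.95, 0.55]
print(trust_matrix(oracle, actor, conf, num_classes=3))
```

Off-diagonal cells with low trust would then point at the (oracle, actor) confusions where trust breaks down.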