Relativistic Conceptions of Trustworthiness: Implications for the
Trustworthy Status of National Identification Systems
- URL: http://arxiv.org/abs/2112.09674v2
- Date: Thu, 7 Jul 2022 12:14:35 GMT
- Title: Relativistic Conceptions of Trustworthiness: Implications for the
Trustworthy Status of National Identification Systems
- Authors: Paul R. Smart, Wendy Hall, Michael Boniface
- Abstract summary: This article outlines a new account of trustworthiness, dubbed the expectation-oriented account.
To be trustworthy, we suggest, is to minimize the error associated with trustor expectations in situations of social dependency.
In addition to outlining the features of the expectation-oriented account, we describe some of the implications of this account for the design, development, and management of trustworthy NISs.
- Score: 1.4728207711693404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trustworthiness is typically regarded as a desirable feature of national
identification systems (NISs); but the variegated nature of the trustor
communities associated with such systems makes it difficult to see how a single
system could be equally trustworthy to all actual and potential trustors. This
worry is accentuated by common theoretical accounts of trustworthiness.
According to such accounts, trustworthiness is relativized to particular
individuals and particular areas of activity, such that one can be trustworthy
with regard to some individuals in respect of certain matters, but not
trustworthy with regard to all trustors in respect of every matter. The present
article challenges this relativistic approach to trustworthiness by outlining a
new account of trustworthiness, dubbed the expectation-oriented account. This
account allows for the possibility of an absolutist (or one-place) approach to
trustworthiness. Such an account, we suggest, is the approach that best
supports the effort to develop NISs. To be trustworthy, we suggest, is to
minimize the error associated with trustor expectations in situations of social
dependency (commonly referred to as trust situations), and to be trustworthy in
an absolute sense is to assign equal value to all expectation-related errors in
all trust situations. In addition to outlining the features of the
expectation-oriented account, we describe some of the implications of this
account for the design, development, and management of trustworthy NISs.
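The core definition invites a compact formalization. The notation below is ours, offered as an illustrative sketch rather than anything drawn from the paper itself:

```latex
% Illustrative sketch; the notation is ours, not the paper's.
% Let S be the set of trust situations and \epsilon_s \ge 0 the error
% between trustor expectation and trustee performance in situation s.
\[
  L(\mathbf{w}) \;=\; \sum_{s \in S} w_s\, \epsilon_s,
  \qquad w_s \ge 0,\quad \sum_{s \in S} w_s = 1.
\]
% Relativistic accounts let the weights w_s vary across trustors and
% matters; the absolutist (one-place) reading fixes w_s = 1/|S| for all
% s, so being trustworthy simpliciter amounts to minimizing the
% unweighted mean expectation-related error across all trust situations.
```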
Related papers
- When to Trust LLMs: Aligning Confidence with Response Quality [49.371218210305656]
We propose a CONfidence-Quality-ORDer-preserving alignment approach (CONQORD), which integrates quality reward and order-preserving alignment reward functions.
Experiments demonstrate that CONQORD significantly improves the alignment between confidence and response accuracy (a toy illustration of such an alignment check follows this entry).
arXiv Detail & Related papers (2024-04-26T09:42:46Z)
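The alignment CONQORD targets can be checked empirically. The sketch below is illustrative only, not drawn from the paper: it scores how well a model's stated confidences preserve the ordering of measured response quality, using Kendall's tau as the rank-alignment statistic.

```python
# Illustrative check of confidence-quality order preservation (not from
# the CONQORD paper): well-aligned confidences should rank responses in
# the same order as their measured quality.
from scipy.stats import kendalltau

def order_alignment(confidences, qualities):
    """Kendall's tau between stated confidence and response quality.

    +1.0 means confidence perfectly preserves the quality ordering,
    0.0 means no ordinal relationship, -1.0 means fully inverted.
    """
    tau, _p = kendalltau(confidences, qualities)
    return tau

# Hypothetical example: four responses with human quality ratings.
print(order_alignment([0.9, 0.7, 0.6, 0.2], [0.95, 0.80, 0.40, 0.30]))
```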
- U-Trustworthy Models. Reliability, Competence, and Confidence in Decision-Making [0.21756081703275998]
We present a precise mathematical definition of trustworthiness, termed $\mathcal{U}$-trustworthiness.
Within the context of $\mathcal{U}$-trustworthiness, we prove that properly-ranked models are inherently $\mathcal{U}$-trustworthy.
We advocate for the adoption of the AUC metric as the preferred measure of trustworthiness (a minimal computation is sketched after this entry).
arXiv Detail & Related papers (2024-01-04T04:58:02Z)
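Since the entry advocates AUC as the preferred trustworthiness measure, a minimal sketch of that computation follows; the data are hypothetical, and the direct reading of AUC-as-trustworthiness-score is our gloss on the summary above.

```python
# Minimal sketch: AUC as a trustworthiness proxy for a binary classifier,
# per the summary above (labels and scores below are hypothetical).
from sklearn.metrics import roc_auc_score

# Hypothetical ground-truth labels and model scores for six decisions.
y_true = [0, 0, 1, 1, 1, 0]
y_score = [0.10, 0.35, 0.80, 0.65, 0.90, 0.40]

# AUC is the probability that a randomly chosen positive case is ranked
# above a randomly chosen negative one; properly-ranked models score 1.0.
print(roc_auc_score(y_true, y_score))
```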
- Trust and Transparency in Recommender Systems [0.0]
We first survey different understandings and measurements of trust in the AI and RS communities, such as demonstrated and perceived trust.
We then review the relationships between trust and transparency, as well as mental models, and investigate different strategies for achieving transparency in RS.
arXiv Detail & Related papers (2023-04-17T09:09:48Z)
- Distrust in (X)AI -- Measurement Artifact or Distinct Construct? [0.0]
Trust is a key motivation in developing explainable artificial intelligence (XAI).
Distrust, by contrast, remains relatively understudied in XAI.
Psychometric evidence favors a distinction between trust and distrust.
arXiv Detail & Related papers (2023-03-29T07:14:54Z)
- The Many Facets of Trust in AI: Formalizing the Relation Between Trust and Fairness, Accountability, and Transparency [4.003809001962519]
Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI).
The lack of exposition on trust itself suggests that trust is commonly taken to be well understood, uncomplicated, or even uninteresting.
Our analysis of TAI publications reveals numerous orientations, which differ in terms of who is doing the trusting (agent), in what (object), on the basis of what (basis), in order to what (objective), and why (impact).
arXiv Detail & Related papers (2022-08-01T08:26:57Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- On the Relation of Trust and Explainability: Why to Engineer for Trustworthiness [0.0]
One of the primary motivators for explainability requirements is that explainability is expected to facilitate stakeholders' trust in a system.
Recent psychological studies indicate that explanations do not necessarily facilitate trust.
We argue that even though trustworthiness does not automatically lead to trust, there are several reasons to engineer primarily for trustworthiness.
arXiv Detail & Related papers (2021-08-11T18:02:08Z)
- Insights into Fairness through Trust: Multi-scale Trust Quantification for Financial Deep Learning [94.65749466106664]
A fundamental aspect of fairness that has not been explored in financial deep learning is the concept of trust.
We conduct multi-scale trust quantification on a deep neural network trained for credit card default prediction (a toy sketch of such multi-scale aggregation follows this entry).
arXiv Detail & Related papers (2020-11-03T19:05:07Z)
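One hedged reading of "multi-scale" trust quantification: per-sample trust scores are aggregated upward into per-class and overall scores. The function names, the mean-based aggregation, and the data below are our illustrative assumptions, not the paper's method.

```python
# Illustrative multi-scale aggregation (our assumption, not the paper's
# exact method): per-sample trust -> per-class trust -> overall trust.
from collections import defaultdict
from statistics import mean

def multi_scale_trust(samples):
    """samples: list of (true_class, trust_score) pairs."""
    by_class = defaultdict(list)
    for cls, score in samples:
        by_class[cls].append(score)
    class_trust = {cls: mean(scores) for cls, scores in by_class.items()}
    overall_trust = mean(class_trust.values())  # macro-average over classes
    return class_trust, overall_trust

# Hypothetical trust scores for a default-prediction model.
samples = [("default", 0.62), ("default", 0.55),
           ("no_default", 0.91), ("no_default", 0.88)]
print(multi_scale_trust(samples))
```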
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold (a toy encoding follows this entry).
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
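One way to make 'contractual trust' concrete in code. Everything below (the Contract type, the frequency-based trust estimate) is an illustrative assumption layered on the summary above, not the paper's formalism.

```python
# Illustrative encoding of contractual trust (our assumption, not the
# paper's formalism): a contract is a predicate over an interaction, and
# trust is estimated as the frequency with which the contract has held.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Contract:
    description: str
    holds: Callable[[dict], bool]  # did this interaction honor the contract?

def estimated_trust(contract: Contract, interactions: List[dict]) -> float:
    upheld = sum(contract.holds(i) for i in interactions)
    return upheld / len(interactions) if interactions else 0.0

# Hypothetical contract: the assistant abstains when it is not confident.
contract = Contract("abstain when confidence < 0.5",
                    lambda i: i["confidence"] >= 0.5 or i["abstained"])
log = [{"confidence": 0.9, "abstained": False},
       {"confidence": 0.3, "abstained": True},
       {"confidence": 0.4, "abstained": False}]
print(estimated_trust(contract, log))  # 2/3 of interactions upheld it
```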
- Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario (a toy construction follows this entry).
We further extend the concept of trust densities with the notion of conditional trust densities.
arXiv Detail & Related papers (2020-09-30T14:33:43Z)
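A sketch of the trust-matrix idea as we read it from the summary: each cell holds the mean question-answer trust observed when the actor (model) gave answer i and the oracle (ground truth) was j. Shapes, names, and data are illustrative assumptions, not the paper's construction.

```python
# Illustrative trust matrix (our reading of the summary): cell [i][j] is
# the mean question-answer trust over questions where the actor answered
# class i while the oracle answer was class j.
import numpy as np

def trust_matrix(records, n_classes):
    """records: list of (actor_answer, oracle_answer, trust_score)."""
    total = np.zeros((n_classes, n_classes))
    count = np.zeros((n_classes, n_classes))
    for actor, oracle, trust in records:
        total[actor, oracle] += trust
        count[actor, oracle] += 1
    # Mean trust per cell; NaN where a scenario was never observed.
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Hypothetical two-class example; off-diagonal cells expose where trust
# breaks down (confidently wrong behaviour).
records = [(0, 0, 0.9), (0, 0, 0.8), (0, 1, 0.2), (1, 1, 0.85)]
print(trust_matrix(records, n_classes=2))
```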
- How Much Can We Really Trust You? Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks [94.65749466106664]
We conduct a thought experiment and explore two key questions about trust in relation to confidence.
We introduce a suite of metrics for assessing the overall trustworthiness of deep neural networks based on their behaviour when answering a set of questions (one simple instantiation is sketched after this entry).
The proposed metrics are by no means perfect, but the hope is to push the conversation towards better metrics.
arXiv Detail & Related papers (2020-09-12T17:37:36Z)
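One simple instantiation consistent with this summary; the specific functional form below is our assumption rather than a verbatim transcription of the paper's metrics. It rewards confident correct answers, penalizes confident wrong ones, and averages into an overall score.

```python
# Illustrative question-answer trust (functional form is our assumption):
# high confidence is rewarded when the answer matches the oracle and
# penalized when it does not; the overall score is the mean over questions.
def question_answer_trust(confidence, correct, alpha=1.0, beta=1.0):
    return confidence ** alpha if correct else (1.0 - confidence) ** beta

def overall_trustworthiness(answers):
    """answers: list of (confidence, correct) pairs for a question set."""
    scores = [question_answer_trust(c, ok) for c, ok in answers]
    return sum(scores) / len(scores)

# Hypothetical behaviour on four questions: one confidently wrong answer
# (0.9, False) drags the overall score down sharply.
print(overall_trustworthiness([(0.9, True), (0.8, True),
                               (0.9, False), (0.6, True)]))
```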
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.