A Taxonomy of Decentralized Identifier Methods for Practitioners
- URL: http://arxiv.org/abs/2311.03367v1
- Date: Wed, 18 Oct 2023 13:01:40 GMT
- Title: A Taxonomy of Decentralized Identifier Methods for Practitioners
- Authors: Felix Hoops, Alexander Mühle, Florian Matthes, Christoph Meinel
- Abstract summary: A core part of the new identity management paradigm of Self-Sovereign Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard.
We propose a taxonomy of DID methods with the goal of empowering practitioners to make informed decisions when selecting DID methods.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A core part of the new identity management paradigm of Self-Sovereign
Identity (SSI) is the W3C Decentralized Identifiers (DIDs) standard. The
diversity of interoperable implementations encouraged by the paradigm is key
for a less centralized future, and it is made possible by the concept of DIDs.
However, this leads to a kind of dilemma of choices, where practitioners are
faced with the difficult decision of which methods to choose and support in
their applications. Due to the decentralized development of DID method
specifications and the overwhelming number of different choices, it is hard to
get an overview. In this paper, we propose a taxonomy of DID methods with the
goal of empowering practitioners to make informed decisions when selecting DID
methods. To that end, our taxonomy is designed to give an overview of the
current landscape while capturing adoption-relevant characteristics. For this
purpose, we rely on the Nickerson et al. methodology for taxonomy creation,
utilizing both conceptual-to-empirical and empirical-to-conceptual approaches.
During the iterative process, we collect and survey an extensive and
potentially exhaustive list of around 160 DID methods from various sources. The
taxonomy we arrive at uses a total of 7 dimensions and 22 characteristics to
span the contemporary design space of DID methods from the perspective of a
practitioner. In addition to elaborating on these characteristics, we also
discuss how a practitioner can use the taxonomy to select suitable DID methods
for a specific use case.
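As background for the abstract's subject matter: the W3C DID Core specification defines a DID as a URI of the form `did:<method-name>:<method-specific-id>`, where the method name (e.g. `web`, `key`) identifies which DID method governs resolution — exactly the choice the taxonomy is meant to support. A minimal sketch of parsing that syntax follows; it is illustrative only (`parse_did` and its regex are our own simplification of the full ABNF, which additionally allows paths, queries, and fragments), not code from the paper.

```python
import re

# Simplified pattern for the W3C DID Core syntax:
#   did:<method-name>:<method-specific-id>
# method-name is lowercase letters/digits; the method-specific-id may
# itself contain further colons (e.g. did:web:example.com:user:alice).
DID_PATTERN = re.compile(r"^did:([a-z0-9]+):([A-Za-z0-9._:%-]+)$")

def parse_did(did: str) -> tuple[str, str]:
    """Return (method_name, method_specific_id) or raise ValueError."""
    match = DID_PATTERN.match(did)
    if not match:
        raise ValueError(f"not a valid DID: {did!r}")
    return match.group(1), match.group(2)

# The method name determines how the DID resolves to a DID document.
print(parse_did("did:web:example.com"))
print(parse_did("did:key:z6MkhaXgBZDvotDkL"))
```

In practice, a practitioner consulting the taxonomy would pick the supported method names first; the identifier syntax itself is uniform across methods.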
Related papers
- Concept-based Explainable Artificial Intelligence: A Survey
Using raw features to provide explanations has been disputed in several works lately.
A unified categorization and precise field definition are still missing.
This paper fills the gap by offering a thorough review of C-XAI approaches.
arXiv Detail & Related papers (2023-12-20T11:27:21Z)
- Conceptual Engineering Using Large Language Models
We use data from the Wikidata knowledge graph to evaluate stipulative definitions related to two conceptual engineering projects.
Our results show that classification procedures built using our approach can exhibit good classification performance.
We consider objections to this work for three aspects of theory and practice of conceptual engineering.
arXiv Detail & Related papers (2023-12-01T01:58:16Z)
- Identifying Reasons for Bias: An Argumentation-Based Approach
We propose a novel model-agnostic argumentation-based method to determine why an individual is classified differently in comparison to similar individuals.
We evaluate our method on two datasets commonly used in the fairness literature and illustrate its effectiveness in the identification of bias.
arXiv Detail & Related papers (2023-10-25T09:47:15Z)
- Learning Transferable Conceptual Prototypes for Interpretable Unsupervised Domain Adaptation
In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-12T06:36:41Z) - SoK: Privacy-Preserving Data Synthesis [72.92263073534899]
This paper focuses on privacy-preserving data synthesis (PPDS) by providing a comprehensive overview, analysis, and discussion of the field.
We put forth a master recipe that unifies two prominent strands of research in PPDS: statistical methods and deep learning (DL)-based methods.
arXiv Detail & Related papers (2023-07-05T08:29:31Z) - Evaluation of Self-taught Learning-based Representations for Facial
Emotion Recognition [62.30451764345482]
This work describes different strategies to generate unsupervised representations obtained through the concept of self-taught learning for facial emotion recognition.
The idea is to create complementary representations promoting diversity by varying the autoencoders' initialization, architecture, and training data.
Experimental results on Jaffe and Cohn-Kanade datasets using a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2022-04-26T22:48:15Z) - Discovering Concepts in Learned Representations using Statistical
Inference and Interactive Visualization [0.76146285961466]
Concept discovery is important for bridging the gap between non-deep learning experts and model end-users.
Current approaches include hand-crafting concept datasets and then converting them to latent space directions.
In this study, we offer another two approaches to guide user discovery of meaningful concepts, one based on multiple hypothesis testing, and another on interactive visualization.
arXiv Detail & Related papers (2022-02-09T22:29:48Z) - How to choose an Explainability Method? Towards a Methodical
Implementation of XAI in Practice [3.974102831754831]
We argue there is a need for a methodology to bridge the gap between stakeholder needs and explanation methods.
We present our ongoing work on creating this methodology to help data scientists in the process of providing explainability to stakeholders.
arXiv Detail & Related papers (2021-07-09T13:22:58Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - Indecision Modeling [50.00689136829134]
It is important that AI systems act in ways which align with human values.
People are often indecisive, and especially so when their decision has moral implications.
arXiv Detail & Related papers (2020-12-15T18:32:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.