Trust Me If You Can: Trusted Transformation Between (JSON) Schemas to
Support Global Authentication of Education Credentials
- URL: http://arxiv.org/abs/2106.12793v1
- Date: Thu, 24 Jun 2021 07:03:23 GMT
- Title: Trust Me If You Can: Trusted Transformation Between (JSON) Schemas to
Support Global Authentication of Education Credentials
- Authors: Stefan More and Peter Grassberger and Felix Hörandner and Andreas
Abraham and Lukas Daniel Klausner
- Abstract summary: Recruiters and institutions around the world struggle with the verification of diplomas issued in a diverse and global education setting.
We introduce a decentralized and open system to automatically verify the legitimacy of issuers and interpret credentials in unknown schemas.
- Score: 0.27961972519572437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recruiters and institutions around the world struggle with the verification
of diplomas issued in a diverse and global education setting. Firstly, it is a
nontrivial problem to identify bogus institutions selling education
credentials. While institutions are often accredited by qualified authorities
on a regional level, there is no global authority fulfilling this task.
Secondly, many different data schemas are used to encode education credentials,
which represents a considerable challenge to automated processing.
Consequently, significant manual effort is required to verify credentials.
In this paper, we tackle these challenges by introducing a decentralized and
open system to automatically verify the legitimacy of issuers and interpret
credentials in unknown schemas. We do so by enabling participants to publish
transformation information, which enables verifiers to transform credentials
into their preferred schema. Due to the lack of a global root of trust, we
utilize a distributed ledger to build a decentralized web of trust, which
verifiers can query to gather information on the trustworthiness of issuing
institutions and to establish trust in transformation information. Going beyond
diploma fraud, our system can be generalized to other domains that lack a root
of trust and agreement on data schemas.
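The core idea above is that participants publish transformation information which verifiers apply to rewrite a credential from an unknown issuer schema into their own preferred schema. A minimal sketch of that step follows; the field names and the mapping format are illustrative assumptions, not the paper's actual encoding, and real transformation information would itself be trust-checked via the web of trust before use.

```python
# Hypothetical example credential as issued, in the issuer's own schema.
ISSUER_CREDENTIAL = {
    "holderName": "Jane Doe",
    "degreeTitle": "MSc Computer Science",
    "awardedOn": "2020-07-01",
}

# Published transformation information, modeled here as a simple
# mapping from the verifier's target field to the issuer's source field.
TRANSFORMATION = {
    "subject": "holderName",
    "qualification": "degreeTitle",
    "issuanceDate": "awardedOn",
}

def transform(credential: dict, mapping: dict) -> dict:
    """Rewrite a credential into the verifier's schema via a field mapping."""
    missing = [src for src in mapping.values() if src not in credential]
    if missing:
        raise KeyError(f"credential lacks mapped fields: {missing}")
    return {target: credential[src] for target, src in mapping.items()}

verifier_view = transform(ISSUER_CREDENTIAL, TRANSFORMATION)
```

In the full system, a verifier would first query the distributed ledger for the issuer's trustworthiness and for attestations on the transformation information itself, and only then apply a transformation like the one sketched here.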
Related papers
- Lifecycle Management of Resumés with Decentralized Identifiers and Verifiable Credentials [0.0]
This paper introduces a trust framework for managing digital résumé credentials.
We propose a framework for real-time issuance, storage and verification of Verifiable Credentials without intermediaries.
arXiv Detail & Related papers (2024-06-17T13:37:44Z)
- TELLER: A Trustworthy Framework for Explainable, Generalizable and Controllable Fake News Detection [37.394874500480206]
We propose a novel framework for trustworthy fake news detection that prioritizes explainability, generalizability and controllability of models.
This is achieved via a dual-system framework that integrates cognition and decision systems.
We present comprehensive evaluation results on four datasets, demonstrating the feasibility and trustworthiness of our proposed framework.
arXiv Detail & Related papers (2024-02-12T16:41:54Z)
- AI and Democracy's Digital Identity Crisis [0.0]
Privacy-preserving identity attestations can drastically reduce instances of impersonation and make disinformation easier to identify and potentially hinder its spread.
In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based.
We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors.
arXiv Detail & Related papers (2023-09-25T14:15:18Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Understanding metric-related pitfalls in image analysis validation [59.15220116166561]
This work provides the first comprehensive common point of access to information on pitfalls related to validation metrics in image analysis.
Focusing on biomedical image analysis but with the potential of transfer to other fields, the addressed pitfalls generalize across application domains and are categorized according to a newly created, domain-agnostic taxonomy.
arXiv Detail & Related papers (2023-02-03T14:57:40Z)
- VeriFi: Towards Verifiable Federated Unlearning [59.169431326438676]
Federated learning (FL) is a collaborative learning paradigm where participants jointly train a powerful model without sharing their private data.
A leaving participant has the right to request that its private data be deleted from the global model.
We propose VeriFi, a unified framework integrating federated unlearning and verification.
arXiv Detail & Related papers (2022-05-25T12:20:02Z)
- FacTeR-Check: Semi-automated fact-checking through Semantic Similarity and Natural Language Inference [61.068947982746224]
FacTeR-Check enables retrieving fact-checked information, verifying unchecked claims, and tracking dangerous information over social media.
The architecture is validated using a new dataset called NLI19-SP that is publicly released with COVID-19 related hoaxes and tweets from Spanish social media.
Our results show state-of-the-art performance on the individual benchmarks, as well as producing useful analysis of the evolution over time of 61 different hoaxes.
arXiv Detail & Related papers (2021-10-27T15:44:54Z)
- Decentralised Learning from Independent Multi-Domain Labels for Person Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data nor collecting any
arXiv Detail & Related papers (2020-06-07T13:32:33Z)
- A Distributed Trust Framework for Privacy-Preserving Machine Learning [4.282091426377838]
This paper outlines a distributed infrastructure which is used to facilitate peer-to-peer trust between distributed agents.
We detail a proof of concept using Hyperledger Aries, Decentralised Identifiers (DIDs), and Verifiable Credentials (VCs).
arXiv Detail & Related papers (2020-06-03T18:06:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.