A Trust-Centric Approach To Quantifying Maturity and Security in Internet Voting Protocols
- URL: http://arxiv.org/abs/2412.10611v1
- Date: Fri, 13 Dec 2024 23:33:38 GMT
- Title: A Trust-Centric Approach To Quantifying Maturity and Security in Internet Voting Protocols
- Authors: Stanisław Barański, Ben Biedermann, Joshua Ellul
- Abstract summary: This paper introduces a trust-centric maturity scoring framework to quantify the security and maturity of internet voting systems.
A comprehensive trust model analysis is conducted for selected internet voting protocols.
The framework is general enough to be applied to other systems in which decentralization, trust, and security are crucial.
- Score: 0.9831489366502298
- Abstract: Voting is a cornerstone of collective participatory decision-making in contexts ranging from political elections to decentralized autonomous organizations (DAOs). Despite the proliferation of internet voting protocols promising enhanced accessibility and efficiency, their evaluation and comparison are complicated by a lack of standardized criteria and unified definitions of security and maturity. Furthermore, the socio-technical requirements of decision makers are not systematically taken into consideration when internet voting systems are compared. This paper addresses this gap by introducing a trust-centric maturity scoring framework to quantify the security and maturity of sixteen internet voting systems. A comprehensive trust model analysis is conducted for the selected internet voting protocols, examining their security properties, trust assumptions, technical complexity, and practical usability. We propose the electronic voting maturity framework (EVMF), which supports a nuanced assessment that reflects real-world deployment concerns and aids decision makers in selecting appropriate systems tailored to their specific use-case requirements. The framework is general enough to be applied to other systems in which decentralization, trust, and security are crucial, such as digital identity, Ethereum layer-two scaling solutions, and federated data infrastructures. Its objective is to provide an extendable toolkit for policy makers and technology experts alike that normalizes technical and non-technical requirements on a univariate scale.
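The abstract does not reproduce the EVMF scoring formula, so the following Python sketch only illustrates the general idea of normalizing heterogeneous assessment criteria onto a single univariate scale. The criterion names, weights, and 0-10 rating scales are hypothetical assumptions, not values from the paper.

```python
# Minimal sketch (assumed structure, not the paper's formula): normalize
# heterogeneous criteria onto [0, 1] and combine them with decision-maker
# weights into a single univariate maturity score.

CRITERIA_WEIGHTS = {              # hypothetical weights a decision maker might set
    "security_properties": 0.35,
    "trust_assumptions": 0.30,
    "technical_complexity": 0.15,
    "practical_usability": 0.20,
}

def normalize(raw: float, worst: float, best: float) -> float:
    """Map a raw criterion score onto [0, 1], where 1 is most mature."""
    return max(0.0, min(1.0, (raw - worst) / (best - worst)))

def maturity_score(raw_scores: dict, scales: dict) -> float:
    """Weighted sum of normalized criteria; returns a value in [0, 1]."""
    return sum(
        CRITERIA_WEIGHTS[name] * normalize(raw_scores[name], *scales[name])
        for name in CRITERIA_WEIGHTS
    )

# Example: a hypothetical voting protocol rated on a 0-10 scale per criterion.
raw = {"security_properties": 8, "trust_assumptions": 5,
       "technical_complexity": 3, "practical_usability": 7}
scales = {name: (0, 10) for name in CRITERIA_WEIGHTS}
print(f"maturity score: {maturity_score(raw, scales):.2f}")
```

A weighted sum is the simplest aggregation consistent with a univariate output; the paper's actual methodology may combine criteria differently.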
Related papers
- Assessing the Trustworthiness of Electronic Identity Management Systems: Framework and Insights from Inception to Deployment [9.132025152225447]
This paper introduces an integrated Digital Identity Systems Trustworthiness Assessment Framework (DISTAF).
It is supported by over 65 mechanisms and over 400 metrics derived from international standards and technical guidelines.
We demonstrate the application of DISTAF through a real-world implementation using a Modular Open Source Identity Platform (MOSIP) instance.
arXiv Detail & Related papers (2025-02-15T11:26:30Z)
- Distributed Identity for Zero Trust and Segmented Access Control: A Novel Approach to Securing Network Infrastructure [4.169915659794567]
This study assesses the security improvements achieved when distributed identity is employed with Zero Trust Architecture (ZTA) principles.
The study suggests that adopting distributed identities can enhance the overall security posture by an order of magnitude.
The research recommends refining technical standards, expanding the use of distributed identity in practice, and exploring its applications across the contemporary digital security landscape.
arXiv Detail & Related papers (2025-01-14T00:02:02Z)
- Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
We propose the Authenticated Cyclic Redundancy Integrity Check (ACRIC).
ACRIC preserves backward compatibility without requiring additional hardware and is protocol agnostic.
We show that ACRIC offers robust security with minimal transmission overhead (1 ms).
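The summary does not describe ACRIC's internal construction. The sketch below shows, as an assumption, one generic way a keyed integrity tag can occupy the same 4-byte field as a legacy CRC-32 so that frame length and layout stay backward compatible; the truncated HMAC tag is illustrative only and is not claimed to be ACRIC's actual design (a real scheme would also need replay protection, which is omitted here).

```python
import binascii
import hashlib
import hmac
import struct

def plain_crc_frame(payload: bytes) -> bytes:
    """Legacy frame: payload followed by a 4-byte CRC-32 integrity field."""
    return payload + struct.pack(">I", binascii.crc32(payload))

def authenticated_frame(payload: bytes, key: bytes) -> bytes:
    """Same layout, but the 4-byte field is a truncated keyed MAC over the payload."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:4]
    return payload + tag

def verify_authenticated_frame(frame: bytes, key: bytes) -> bool:
    """Recompute the truncated MAC and compare in constant time."""
    payload, tag = frame[:-4], frame[-4:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:4]
    return hmac.compare_digest(tag, expected)
```

Because the tag occupies the same four bytes as the legacy CRC, message length and framing are unchanged, which is the kind of backward compatibility the summary highlights.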
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
- Meta-Sealing: A Revolutionizing Integrity Assurance Protocol for Transparent, Tamper-Proof, and Trustworthy AI System [0.0]
This research introduces Meta-Sealing, a cryptographic framework that fundamentally changes integrity verification in AI systems.
The framework combines advanced cryptography with distributed verification, delivering tamper-evident guarantees that achieve both mathematical rigor and computational efficiency.
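The abstract does not specify Meta-Sealing's construction, so the following is only a generic hash-chain "sealing" sketch, included to make the notion of tamper-evident integrity records concrete; the record format and chaining rule are assumptions, not the framework's actual mechanism.

```python
import hashlib
import json

def seal(records: list[dict]) -> list[str]:
    """Chain each record's digest to the previous seal; any later edit breaks the chain."""
    seals, prev = [], "0" * 64
    for record in records:
        payload = prev + json.dumps(record, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        seals.append(digest)
        prev = digest
    return seals

def verify(records: list[dict], seals: list[str]) -> bool:
    """Recompute the chain and check it matches the stored seals."""
    return seals == seal(records)
```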
arXiv Detail & Related papers (2024-10-31T15:31:22Z)
- Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the three core components of a GS AI system (a world model, a safety specification, and a verifier), describe the main technical challenges, and suggest a number of potential solutions to them.
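As a purely illustrative sketch (an assumed toy interface, not the paper's method), the code below makes the world model / safety specification / verifier decomposition concrete with a Monte Carlo check of a quantitative failure-probability bound; GS AI targets formal, high-assurance verification, which is strictly stronger than this sampling stand-in.

```python
def estimated_failure_probability(policy, world_model, n_rollouts=10_000, horizon=50):
    """Estimate P(reach an unsafe state within the horizon) under the world model."""
    failures = 0
    for _ in range(n_rollouts):
        state = world_model.reset()                       # assumed world-model interface
        for _ in range(horizon):
            state, unsafe = world_model.step(state, policy(state))
            if unsafe:
                failures += 1
                break
    return failures / n_rollouts

def verifier(policy, world_model, spec_bound=1e-3):
    """Safety specification: estimated failure probability must not exceed spec_bound."""
    return estimated_failure_probability(policy, world_model) <= spec_bound
```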
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus provides no built-in authentication, leaving in-vehicle communications inherently non-secure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Fairness Score and Process Standardization: Framework for Fairness Certification in Artificial Intelligence Systems [0.4297070083645048]
We propose a novel Fairness Score to measure the fairness of a data-driven AI system.
The work also provides a framework to operationalise the concept of fairness and facilitate the commercial deployment of such systems.
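The paper's Fairness Score definition is not given in the summary above; the sketch below shows a common group-fairness building block (a demographic-parity ratio) purely as an assumed illustration of how fairness can be quantified for a data-driven system.

```python
# Illustrative group-fairness metric (demographic-parity ratio); this is not
# the paper's Fairness Score, only a generic example of quantifying fairness.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (favourable) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def parity_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower to the higher selection rate; 1.0 means perfect parity."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return low / high if high else 1.0

# Example: favourable-decision indicators for two demographic groups.
print(parity_ratio([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 0]))  # 0.5
```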
arXiv Detail & Related papers (2022-01-10T15:45:12Z)
- Benchmarks for Deep Off-Policy Evaluation [152.28569758144022]
We present a collection of policies that can be used for benchmarking off-policy evaluation.
The goal of our benchmark is to provide a standardized measure of progress that is motivated from a set of principles.
We provide open-source access to our data and code to foster future research in this area.
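The benchmark itself consists of policies and logged data; a standard estimator that such benchmarks are used to compare is ordinary per-trajectory importance sampling, sketched below as an illustration (not necessarily one of the estimators evaluated in the paper). The policy-probability callables are assumed interfaces.

```python
# Ordinary (per-trajectory) importance sampling for off-policy evaluation.
# Each trajectory is a list of (state, action, reward) tuples collected by the
# behavior policy; the two probability functions must be nonzero on logged actions.

def importance_sampling_estimate(trajectories,
                                 target_policy_prob,
                                 behavior_policy_prob,
                                 gamma=0.99):
    values = []
    for traj in trajectories:
        weight, ret = 1.0, 0.0
        for t, (state, action, reward) in enumerate(traj):
            # Reweight by the likelihood ratio of taking this action.
            weight *= target_policy_prob(state, action) / behavior_policy_prob(state, action)
            ret += (gamma ** t) * reward
        values.append(weight * ret)
    return sum(values) / len(values)
```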
arXiv Detail & Related papers (2021-03-30T18:09:33Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)