Exploring the Risks and Challenges of National Electronic Identity (NeID) System
- URL: http://arxiv.org/abs/2310.15813v1
- Date: Tue, 24 Oct 2023 13:09:50 GMT
- Title: Exploring the Risks and Challenges of National Electronic Identity (NeID) System
- Authors: Jide Edu, Mark Hooper, Carsten Maple, Jon Crowcroft
- Abstract summary: We discuss the different categories of NeID risk and explore the successful deployment of these systems.
We highlight the best practices for mitigating risk, including implementing strong security measures, conducting regular risk assessments, and involving stakeholders in the design and implementation of the system.
- Score: 8.93312157123729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many countries have embraced national electronic identification (NeID) systems, recognising their potential to foster a fair, transparent, and well-governed society by ensuring the secure verification of citizens' identities. The inclusive nature of NeID empowers people to exercise their rights while holding them accountable for fulfilling their obligations. Nevertheless, the development and implementation of these complex identity-verification systems have raised concerns regarding security, privacy, and exclusion. In this study, we discuss the different categories of NeID risk and explore the successful deployment of these systems, while examining how the specific risks and other challenges posed by this technology are addressed. Based on the review of the different NeID systems and the efforts made to mitigate the unique risks and challenges presented within each deployment, we highlight the best practices for mitigating risk, including implementing strong security measures, conducting regular risk assessments, and involving stakeholders in the design and implementation of the system.
Related papers
- A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis [0.6199770411242359]
This paper presents a novel human-centered risk evaluation framework using conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on an attacker's motivation.
Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases.
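As a rough illustration of how a risk value might combine the False Acceptance Rate (FAR) with attack probability, consider the sketch below. The actual formula from the paper is not reproduced here; the multiplicative combination, the `impact` weight, and all parameter values are illustrative assumptions only.

```python
def risk_value(far: float, attack_probability: float, impact: float = 1.0) -> float:
    """Hypothetical risk score: FAR combined with attack probability and an
    impact weight. This multiplicative form is an illustrative assumption,
    not the formula from the cited paper."""
    if not (0.0 <= far <= 1.0 and 0.0 <= attack_probability <= 1.0):
        raise ValueError("FAR and attack probability must lie in [0, 1]")
    return far * attack_probability * impact

# Hypothetical comparison of two use cases, assuming surveillance cameras
# reduce an attacker's willingness to attempt an attack.
baseline = risk_value(far=0.001, attack_probability=0.5)
with_cameras = risk_value(far=0.001, attack_probability=0.2)
```

Under this kind of model, any deterrent that lowers the attack probability reduces the overall risk value even when the biometric system's FAR is unchanged, which is the sort of cross-use-case comparison the framework is described as enabling.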
arXiv Detail & Related papers (2024-09-17T14:18:21Z)
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Cross-Modality Safety Alignment [73.8765529028288]
We introduce a novel safety alignment challenge called Safe Inputs but Unsafe Output (SIUO) to evaluate cross-modality safety alignment.
To empirically investigate this problem, we developed the SIUO benchmark, which encompasses 9 critical safety domains, such as self-harm, illegal activities, and privacy violations.
Our findings reveal substantial safety vulnerabilities in both closed- and open-source LVLMs, underscoring the inadequacy of current models to reliably interpret and respond to complex, real-world scenarios.
arXiv Detail & Related papers (2024-06-21T16:14:15Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Affirmative safety: An approach to risk management for high-risk AI [6.133009503054252]
We argue that entities developing or deploying high-risk AI systems should be required to present evidence of affirmative safety.
We propose a risk management approach for advanced AI in which model developers must provide evidence that their activities keep certain risks below regulator-set thresholds.
arXiv Detail & Related papers (2024-04-14T20:48:55Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- An Impact and Risk Assessment Framework for National Electronic Identity (eID) Systems [8.93312157123729]
We propose a framework that considers a wide range of factors, including the social, economic, and political contexts.
This provides a holistic platform for a better assessment of risk to the eID system.
arXiv Detail & Related papers (2023-10-24T12:33:10Z)
- Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, with an absolute improvement of 5.9% in safety classification accuracy.
arXiv Detail & Related papers (2022-12-19T17:51:47Z)
- Quantitative AI Risk Assessments: Opportunities and Challenges [9.262092738841979]
AI-based systems are increasingly being leveraged to provide value to organizations, individuals, and society.
The risks these systems pose have led to proposed regulations, litigation, and general societal concerns.
This paper explores the concept of a quantitative AI Risk Assessment.
arXiv Detail & Related papers (2022-09-13T21:47:25Z)
- Biometrics: Trust, but Verify [49.9641823975828]
Biometric recognition has exploded into a plethora of different applications around the globe.
There are a number of outstanding problems and concerns pertaining to the various sub-modules of biometric recognition systems.
arXiv Detail & Related papers (2021-05-14T03:07:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.