Trust and distrust in electoral technologies: what can we learn from the failure of electronic voting in the Netherlands (2006/07)
- URL: http://arxiv.org/abs/2412.05052v1
- Date: Fri, 06 Dec 2024 14:07:59 GMT
- Title: Trust and distrust in electoral technologies: what can we learn from the failure of electronic voting in the Netherlands (2006/07)
- Authors: David Duenas-Cid
- Abstract summary: This paper focuses on the complex dynamics of trust and distrust in digital government technologies by approaching the cancellation of machine voting in the Netherlands (2006-07). It describes how a previously trusted system can collapse, how paradoxical the relationship between trust and distrust is, and how it interacts with adopting and managing electoral technologies. Overall, this paper contributes to understanding trust dynamics in digital government technologies, with implications for policymaking and technology adoption strategies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper examines the complex dynamics of trust and distrust in digital government technologies through the cancellation of machine voting in the Netherlands (2006-07). The case shows how a previously trusted system can collapse, how paradoxical the relationship between trust and distrust is, and how that relationship shapes the adoption and management of electoral technologies. The analysis stresses that, although a central component, a technology's trustworthiness interacts with the socio-technical context in which it is embedded, underscoring, for example, the relevance of public administration in securing technological environments. Beyond these insights, the research offers broader reflections on trust and distrust in data-driven technologies, advocating differentiated strategies for building trust versus managing distrust. Overall, this paper contributes to understanding trust dynamics in digital government technologies, with implications for policymaking and technology adoption strategies.
Related papers
- Is Trust Correlated With Explainability in AI? A Meta-Analysis [0.0]
We conduct a comprehensive examination of the existing literature to explore the relationship between AI explainability and trust.
Our analysis, incorporating data from 90 studies, reveals a statistically significant but moderate positive correlation between the explainability of AI systems and the trust they engender.
This research highlights the broader socio-technical implications of these findings, particularly in promoting accountability and fostering user trust in critical domains such as healthcare and justice.
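As a concrete illustration of the entry's method, the sketch below shows how a pooled correlation is commonly computed in a meta-analysis via the Fisher z-transform. The per-study correlations and sample sizes are invented for illustration, not the 90 studies the paper analyzes, and the paper's exact estimator may differ (e.g., random- rather than fixed-effect).

```python
import numpy as np

# Hypothetical per-study correlations and sample sizes (illustrative only).
r = np.array([0.25, 0.40, 0.18, 0.33, 0.29])
n = np.array([120, 85, 200, 60, 150])

# Fisher z-transform stabilizes the variance of correlations.
z = np.arctanh(r)        # z_i = atanh(r_i)
var = 1.0 / (n - 3)      # sampling variance of each z_i
w = 1.0 / var            # inverse-variance (fixed-effect) weights

# Pool on the z scale, then back-transform to the correlation scale.
z_pooled = np.sum(w * z) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
r_pooled = np.tanh(z_pooled)
ci = np.tanh([z_pooled - 1.96 * se, z_pooled + 1.96 * se])

print(f"pooled r = {r_pooled:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```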
arXiv Detail & Related papers (2025-04-16T23:30:55Z)
- The Jade Gateway to Trust: Exploring How Socio-Cultural Perspectives Shape Trust Within Chinese NFT Communities [53.778565588482294]
The emergence of non-fungible tokens (NFTs) has transformed how we handle digital assets and value.
Despite their initial popularity, NFTs face declining adoption influenced not only by cryptocurrency volatility but also by trust dynamics within communities.
Our research identifies three critical trust dimensions in China's NFT market: technological, institutional, and social.
arXiv Detail & Related papers (2025-04-16T10:03:30Z)
- "Even explanations will not help in trusting [this] fundamentally biased system": A Predictive Policing Case-Study [8.240854389254222]
The use of AI systems in high-risk domains has often led users either to under-trust them, potentially causing inadequate reliance, or to over-trust them, resulting in over-compliance.
Past research has indicated that explanations provided by AI systems can enhance user understanding of when to trust or not trust the system.
This study explores the impact of different explanation types and user expertise on establishing appropriate trust in AI-based predictive policing.
arXiv Detail & Related papers (2025-04-15T09:43:48Z)
- Media and responsible AI governance: a game-theoretic and LLM analysis [61.132523071109354]
This paper investigates the interplay between AI developers, regulators, users, and the media in fostering trustworthy AI systems.
Using evolutionary game theory and large language models (LLMs), we model the strategic interactions among these actors under different regulatory regimes.
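A minimal sketch of the kind of evolutionary dynamics such a model might use: two developer strategies ("comply" vs. "defect") evolving under the replicator equation. The payoff matrix below is illustrative only and is not taken from the paper.

```python
import numpy as np

# Toy two-strategy game for AI developers:
# index 0 = "comply with safety regulation", index 1 = "defect".
# Payoffs are hypothetical, chosen so an interior equilibrium exists.
payoff = np.array([[3.0, 1.0],
                   [4.0, 0.5]])

x = np.array([0.5, 0.5])   # initial population shares
dt = 0.01
for _ in range(10_000):
    fitness = payoff @ x              # expected payoff of each strategy
    avg = x @ fitness                 # population-average payoff
    x += dt * x * (fitness - avg)     # replicator equation: dx_i = x_i (f_i - f_bar)

print(f"long-run share of compliant developers: {x[0]:.3f}")
```

In a model like this, media coverage or regulator sanctions would enter by reshaping the payoffs, shifting the long-run share of compliant developers.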
arXiv Detail & Related papers (2025-03-12T21:39:38Z)
- Trust in AI: Progress, Challenges, and Future Directions [6.724854390957174]
The increasing use of artificial intelligence (AI) systems in our daily life explains the significance of trust/distrust in AI from a user perspective.
Trust/distrust in AI plays the role of a regulator and could significantly control the extent of AI's diffusion.
arXiv Detail & Related papers (2024-03-12T20:26:49Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Common (good) practices measuring trust in HRI [55.2480439325792]
Trust in robots is widely believed to be imperative for the adoption of robots into people's daily lives.
Researchers have been exploring how people trust robots in different ways.
Most roboticists agree that insufficient levels of trust lead to a risk of disengagement.
arXiv Detail & Related papers (2023-11-20T20:52:10Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- AI Assurance using Causal Inference: Application to Public Policy [0.0]
Most AI approaches can only be treated as "black boxes" and suffer from a lack of transparency.
It is crucial not only to develop effective and robust AI systems, but also to ensure that their internal processes are explainable and fair.
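The abstract gives no implementation detail, but the title's causal-inference angle can be illustrated generically: the sketch below estimates a policy treatment effect with back-door adjustment on simulated data. The variables and numbers are hypothetical and are not taken from the paper's application.

```python
import numpy as np

# Simulate a confounded policy setting: Z confounds treatment T and outcome Y.
rng = np.random.default_rng(1)
n = 100_000
Z = rng.binomial(1, 0.5 * np.ones(n))         # confounder (e.g., district wealth)
T = rng.binomial(1, 0.2 + 0.6 * Z)            # treatment assignment depends on Z
Y = rng.binomial(1, 0.1 + 0.2 * T + 0.3 * Z)  # outcome depends on T and Z

# Naive contrast is biased by the confounder.
naive = Y[T == 1].mean() - Y[T == 0].mean()

# Back-door adjustment: average Z-stratified contrasts, weighted by P(Z=z).
ate = sum(
    (Y[(T == 1) & (Z == z)].mean() - Y[(T == 0) & (Z == z)].mean()) * (Z == z).mean()
    for z in (0, 1)
)
print(f"naive: {naive:.3f}, adjusted ATE: {ate:.3f} (true effect: 0.2)")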
arXiv Detail & Related papers (2021-12-01T16:03:06Z)
- Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven, multi-faceted trust model that incorporates many distinct features for a comprehensive analysis.
We evaluate the proposed framework on a trust-aware item recommendation task using a large Yelp dataset.
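A minimal, hypothetical sketch of multi-faceted trust-link prediction: pairwise features feed a logistic classifier that outputs the probability of a trust link. The three features and the synthetic labels below are stand-ins; the paper's actual feature set and Yelp evaluation are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical pairwise features for a user pair (u, v): rating similarity,
# interaction count, profile overlap -- stand-ins for the paper's features.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
# Synthetic labels: a trust link is more likely when feature values are high.
y = (X @ np.array([2.0, 1.5, 1.0]) + rng.normal(0, 0.5, 500) > 2.2).astype(int)

clf = LogisticRegression().fit(X, y)
print("P(trust link) for a new pair:", clf.predict_proba([[0.9, 0.8, 0.7]])[0, 1])
```

A downstream recommender could then weight items endorsed by highly trusted peers more heavily, which is one way such a model supports misinformation management.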
arXiv Detail & Related papers (2021-11-11T19:40:51Z)
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI [55.4046755826066]
We discuss a model of trust inspired by, but not identical to, sociology's interpersonal trust (i.e., trust between people).
We incorporate a formalization of 'contractual trust', such that trust between a user and an AI is trust that some implicit or explicit contract will hold.
We discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted.
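Schematically, the 'contractual trust' described here might be written as follows; this is one plausible reading, not the authors' formalization:

$$\mathrm{Trust}_C(U, A) := B_U\,[\,A \models C\,],$$

where $U$ is the user, $A$ the AI system, $C$ an implicit or explicit contract, $B_U$ the user's belief operator, and $A \models C$ the event that $A$ upholds $C$. Trust is then warranted when the belief $B_U[A \models C]$ holds and $A \models C$ in fact obtains.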
arXiv Detail & Related papers (2020-10-15T03:07:23Z)
- Towards a Policy-as-a-Service Framework to Enable Compliant, Trustworthy AI and HRI Systems in the Wild [7.225523345649149]
Building trustworthy autonomous systems is challenging for many reasons beyond simply trying to engineer agents that 'always do the right thing'.
There is a broader context that is often not considered within AI and HRI: that the problem of trustworthiness is inherently socio-technical.
This paper emphasizes the "fuzzy" socio-technical aspects of trustworthiness and the need for their careful consideration during both design and deployment.
arXiv Detail & Related papers (2020-10-06T18:32:31Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)