Busting the Paper Ballot: Voting Meets Adversarial Machine Learning
- URL: http://arxiv.org/abs/2506.14582v1
- Date: Tue, 17 Jun 2025 14:38:08 GMT
- Title: Busting the Paper Ballot: Voting Meets Adversarial Machine Learning
- Authors: Kaleel Mahmood, Caleb Manicke, Ethan Rathbun, Aayushi Verma, Sohaib Ahmad, Nicholas Stamatakis, Laurent Michel, Benjamin Fuller
- Abstract summary: We show the security risk associated with using machine learning in U.S. election tabulators. We train and test a variety of different models on four new ballot datasets. We conduct attacks with the adversarial examples generated using our new methods.
- Score: 2.31948340923087
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We show the security risk associated with using machine learning classifiers in United States election tabulators. The central classification task in election tabulation is deciding whether a mark does or does not appear on a bubble associated to an alternative in a contest on the ballot. Barretto et al. (E-Vote-ID 2021) reported that convolutional neural networks are a viable option in this field, as they outperform simple feature-based classifiers. Our contributions to election security can be divided into four parts. To demonstrate and analyze the hypothetical vulnerability of machine learning models on election tabulators, we first introduce four new ballot datasets. Second, we train and test a variety of different models on our new datasets. These models include support vector machines, convolutional neural networks (a basic CNN, VGG and ResNet), and vision transformers (Twins and CaiT). Third, using our new datasets and trained models, we demonstrate that traditional white box attacks are ineffective in the voting domain due to gradient masking. Our analyses further reveal that gradient masking is a product of numerical instability. We use a modified difference of logits ratio loss to overcome this issue (Croce and Hein, ICML 2020). Fourth, in the physical world, we conduct attacks with the adversarial examples generated using our new methods. In traditional adversarial machine learning, a high (50% or greater) attack success rate is ideal. However, for certain elections, even a 5% attack success rate can flip the outcome of a race. We show such an impact is possible in the physical domain. We thoroughly discuss attack realism, and the challenges and practicality associated with printing and scanning ballot adversarial examples.
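The abstract attributes the failure of standard white-box attacks in this domain to gradient masking caused by numerical instability, which the authors overcome with a modified difference-of-logits-ratio (DLR) loss (Croce and Hein, ICML 2020). For orientation, the sketch below shows the standard, unmodified DLR loss in PyTorch; the function name and the epsilon guard are illustrative assumptions, the paper's ballot-specific modification is not reproduced here, and the standard form requires at least three output classes, so a binary mark/no-mark classifier would need an adaptation.

```python
import torch

def dlr_loss(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Untargeted difference-of-logits-ratio (DLR) loss of Croce and Hein
    (ICML 2020), sketched here for reference. `logits` has shape
    (batch, num_classes) with num_classes >= 3; `y` holds the true class
    indices. Maximizing this loss over the input pushes the true-class
    logit below the best competing logit, while the denominator makes the
    loss invariant to rescaling of the logits."""
    z_y = logits.gather(1, y.unsqueeze(1)).squeeze(1)       # true-class logit
    masked = logits.clone()
    masked.scatter_(1, y.unsqueeze(1), float("-inf"))       # hide the true class
    z_other = masked.max(dim=1).values                      # best wrong-class logit
    z_sorted, _ = logits.sort(dim=1, descending=True)
    # The small epsilon is an illustrative guard, not part of the published loss.
    return -(z_y - z_other) / (z_sorted[:, 0] - z_sorted[:, 2] + 1e-12)
```

In an attack such as PGD or APGD, one would ascend the gradient of this loss with respect to the ballot image under a perturbation budget; the scale-invariant ratio is what keeps the gradient informative when extreme logits make cross-entropy saturate, which is the kind of numerical issue the abstract points to.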
Related papers
- Scaling Crowdsourced Election Monitoring: Construction and Evaluation of Classification Models for Multilingual and Cross-Domain Classification Settings [0.0]
We propose a two-step classification approach of first identifying informative reports and then categorising them into distinct information types. We conduct classification experiments using multilingual transformer models such as XLM-RoBERTa and multilingual embeddings such as SBERT. Our results show promising potential for model transfer across electoral domains, with F1-Scores of 59% in zero-shot and 63% in few-shot settings.
arXiv Detail & Related papers (2025-03-05T15:17:18Z)
- Cross-Model Transferability of Adversarial Patches in Real-time Segmentation for Autonomous Driving [0.2120527246868857]
Adversarial attacks pose a significant threat to deep learning models, particularly in safety-critical applications like healthcare and autonomous driving. Recently, patch-based attacks have demonstrated effectiveness in real-time inference scenarios owing to their 'drag and drop' nature. Here we propose a novel Expectation Over Transformation (EOT) based adversarial patch attack that is more realistic for autonomous vehicles.
arXiv Detail & Related papers (2025-02-22T00:03:53Z)
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier either by concealing their writing style or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- Data as voters: instance selection using approval-based multi-winner voting [1.597617022056624]
We present a novel approach to the instance selection problem in machine learning (or data mining).
In our model, instances play a double role as voters and candidates.
For SVMs, we have obtained slight increases in the average accuracy by using several voting rules that satisfy EJR or PJR.
arXiv Detail & Related papers (2023-04-19T22:00:23Z)
- Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
arXiv Detail & Related papers (2023-03-14T16:11:47Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Privacy Attacks Against Biometric Models with Fewer Samples: Incorporating the Output of Multiple Models [2.1874132949602654]
Authentication systems are vulnerable to model inversion attacks where an adversary approximates the inverse of a target machine learning model.
This is because inverting a biometric model allows the attacker to produce a realistic biometric input to spoof biometric authentication systems.
We propose a new technique that drastically reduces the amount of training data necessary for model inversion attacks.
arXiv Detail & Related papers (2022-09-22T14:00:43Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Predicting Propensity to Vote with Machine Learning [0.0]
We demonstrate that machine learning enables the capability to infer an individual's propensity to vote from their past actions and attributes.
This is useful for microtargeting voter outreach, voter education and get-out-the-vote (GOTV) campaigns.
arXiv Detail & Related papers (2021-02-02T15:04:39Z)
- Local Black-box Adversarial Attacks: A Query Efficient Approach [64.98246858117476]
Adversarial attacks have threatened the application of deep neural networks in security-sensitive scenarios.
We propose a novel framework to perturb the discriminative areas of clean examples only within limited queries in black-box attacks.
We conduct extensive experiments to show that our framework can significantly improve query efficiency in black-box attacks while maintaining a high attack success rate.
arXiv Detail & Related papers (2021-01-04T15:32:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.