Collective Obfuscation and Crowdsourcing
- URL: http://arxiv.org/abs/2208.06405v1
- Date: Fri, 12 Aug 2022 17:57:33 GMT
- Title: Collective Obfuscation and Crowdsourcing
- Authors: Benjamin Laufer, Niko A. Grupen
- Abstract summary: We show that widespread use of reporting platforms comes with unique security and privacy implications.
We identify coordinated obfuscation strategies that are intended to hinder the platform's legitimacy.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Crowdsourcing technologies rely on groups of people to input information that
may be critical for decision-making. This work examines obfuscation in the
context of reporting technologies. We show that widespread use of reporting
platforms comes with unique security and privacy implications, and introduce a
threat model and corresponding taxonomy to outline some of the many attack
vectors in this space. We then perform an empirical analysis of a dataset of
call logs from a controversial, real-world reporting hotline and identify
coordinated obfuscation strategies that are intended to hinder the platform's
legitimacy. We propose a variety of statistical measures to quantify the
strength of this obfuscation strategy with respect to the structural and
semantic characteristics of the reporting attacks in our dataset.
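As a minimal illustration of the kind of statistical measure alluded to above (the paper's exact metrics are not given here), the sketch below scores a batch of hotline reports for coordination by combining temporal burstiness with pairwise text similarity. The feature choices, weights, and example thresholds are assumptions for illustration only.

```python
# Hypothetical sketch: scoring a set of hotline reports for signs of
# coordinated obfuscation. The burstiness/similarity features and the
# equal weighting are illustrative assumptions, not the paper's measures.
from difflib import SequenceMatcher
from statistics import mean

def pairwise_similarity(texts):
    """Mean pairwise string similarity; near-duplicate scripts score high."""
    pairs = [(a, b) for i, a in enumerate(texts) for b in texts[i + 1:]]
    if not pairs:
        return 0.0
    return mean(SequenceMatcher(None, a, b).ratio() for a, b in pairs)

def burstiness(timestamps, window_seconds=3600):
    """Fraction of reports arriving within `window_seconds` of the previous one."""
    ts = sorted(timestamps)
    close = sum(1 for i in range(1, len(ts)) if ts[i] - ts[i - 1] <= window_seconds)
    return close / max(len(ts) - 1, 1)

def coordination_score(reports):
    """reports: list of (timestamp_in_seconds, text). Returns a 0-1 heuristic score."""
    times = [t for t, _ in reports]
    texts = [x for _, x in reports]
    return 0.5 * burstiness(times) + 0.5 * pairwise_similarity(texts)

# Example: three near-identical reports submitted minutes apart score highly.
demo = [(0, "saw nothing, just testing"), (120, "saw nothing just testing"),
        (300, "saw nothing, testing")]
print(coordination_score(demo))
```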
Related papers
- Evaluating Selective Encryption Against Gradient Inversion Attacks [15.000605214632243]
Gradient inversion attacks pose significant privacy threats to distributed training frameworks such as federated learning.
This paper systematically evaluates selective encryption methods with different significance metrics against state-of-the-art attacks.
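For context on what selective encryption under a significance metric can look like, the hedged sketch below marks only the top-k gradient entries by magnitude for encryption and leaves the rest in the clear. Magnitude as the significance metric and the 10% budget are assumptions, not configurations evaluated in the paper.

```python
# Illustrative sketch (assumed details): pick the most "significant" gradient
# entries by magnitude and mark only those for encryption before sharing.
import numpy as np

def select_for_encryption(gradient: np.ndarray, budget: float = 0.10) -> np.ndarray:
    """Return a boolean mask over gradient entries to encrypt.

    Significance here is absolute magnitude; `budget` is the fraction of
    entries protected. Both choices are placeholders for illustration.
    """
    flat = np.abs(gradient).ravel()
    k = max(1, int(budget * flat.size))
    threshold = np.partition(flat, -k)[-k]
    return np.abs(gradient) >= threshold

grad = np.random.randn(4, 8)
mask = select_for_encryption(grad, budget=0.10)
print(f"encrypting {mask.sum()} of {mask.size} entries")
```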
arXiv Detail & Related papers (2025-08-06T07:31:43Z) - DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [59.66984417026933]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF) (inherent to the data) versus external features (EF) (artificially introduced for auditing).
We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intending to falsely implicate an unused dataset.
Building on the understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery.
Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9
arXiv Detail & Related papers (2025-07-08T03:07:15Z) - A Survey on Model Extraction Attacks and Defenses for Large Language Models [55.60375624503877]
Model extraction attacks pose significant security threats to deployed language models.
This survey provides a comprehensive taxonomy of extraction attacks and defenses, categorizing attacks into functionality extraction, training data extraction, and prompt-targeted attacks.
We examine defense mechanisms organized into model protection, data privacy protection, and prompt-targeted strategies, evaluating their effectiveness across different deployment scenarios.
arXiv Detail & Related papers (2025-06-26T22:02:01Z) - Beyond Dataset Creation: Critical View of Annotation Variation and Bias Probing of a Dataset for Online Radical Content Detection [5.509174347606238]
We introduce a publicly available multilingual dataset annotated with radicalization levels, calls for action, and named entities in English, French, and Arabic.
This dataset is pseudonymized to protect individual privacy while preserving contextual information.
We analyze the annotation process, highlighting biases and disagreements among annotators and their implications for model performance.
arXiv Detail & Related papers (2024-12-16T13:03:43Z) - Addressing Key Challenges of Adversarial Attacks and Defenses in the Tabular Domain: A Methodological Framework for Coherence and Consistency [26.645723217188323]
In this paper, we propose new evaluation criteria tailored for adversarial attacks in the tabular domain.
We also introduce a novel technique for perturbing dependent features while maintaining coherence and feature consistency within the sample.
The findings provide valuable insights on the strengths, limitations, and trade-offs of various adversarial attacks in the tabular domain.
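To make "perturbing dependent features while maintaining coherence" concrete, here is a small hedged sketch: when one feature is perturbed, a feature tied to it by a known deterministic relation is recomputed so the sample stays internally consistent. The feature names and the dependency are invented for illustration and are not the paper's technique.

```python
# Hypothetical example of coherence-preserving perturbation in tabular data.
# The features and the dependency (total = quantity * unit_price) are
# illustrative assumptions, not taken from the paper.
def perturb_with_coherence(sample: dict, delta: float) -> dict:
    adv = dict(sample)
    adv["quantity"] = max(0.0, adv["quantity"] + delta)   # perturbed feature
    adv["total"] = adv["quantity"] * adv["unit_price"]    # dependent feature recomputed
    return adv

clean = {"quantity": 3.0, "unit_price": 9.99, "total": 29.97}
print(perturb_with_coherence(clean, delta=1.0))
```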
arXiv Detail & Related papers (2024-12-10T09:17:09Z) - MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
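The described pipeline, generating an image from the target VLM's caption and comparing it with the input, can be sketched as below. The `caption`, `generate_image`, and `embed` callables are stand-ins for a target VLM, a T2I model, and an image encoder, and the similarity threshold is an assumption rather than the paper's calibrated value.

```python
# Hedged sketch of a caption-regenerate-compare detector. All three model
# callables are hypothetical stand-ins; plug in a real VLM, T2I model, and
# image encoder to experiment.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def is_adversarial(image, caption, generate_image, embed, threshold=0.7) -> bool:
    """Flag `image` if it disagrees with an image regenerated from its own caption."""
    text = caption(image)                 # target VLM produces a caption
    regenerated = generate_image(text)    # T2I model renders the caption
    similarity = cosine(embed(image), embed(regenerated))
    return similarity < threshold         # low similarity -> likely adversarial
```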
arXiv Detail & Related papers (2024-06-13T15:55:04Z) - AnnoCTR: A Dataset for Detecting and Linking Entities, Tactics, and Techniques in Cyber Threat Reports [3.6785107661544805]
We present AnnoCTR, a new CC-BY-SA-licensed dataset of cyber threat reports.
The reports have been annotated by a domain expert with named entities, temporal expressions, and cybersecurity-specific concepts.
In our few-shot scenario, we find that for identifying the MITRE ATT&CK concepts that are mentioned explicitly or implicitly in a text, concept descriptions from MITRE ATT&CK are an effective source for training data augmentation.
arXiv Detail & Related papers (2024-04-11T14:04:36Z) - Unveiling Safety Vulnerabilities of Large Language Models [4.562678399685183]
This paper introduces a unique dataset containing adversarial examples in the form of questions, which we call AttaQ.
We assess the efficacy of our dataset by analyzing the vulnerabilities of various models when subjected to it.
We introduce a novel automatic approach for identifying and naming vulnerable semantic regions.
arXiv Detail & Related papers (2023-11-07T16:50:33Z) - User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework of user-centered security in Natural Language Processing (NLP).
It focuses on two security domains within NLP that are of great public interest.
arXiv Detail & Related papers (2023-01-10T22:34:19Z) - Multi-features based Semantic Augmentation Networks for Named Entity Recognition in Threat Intelligence [7.321994923276344]
We propose a semantic augmentation method which incorporates different linguistic features to enrich the representation of input tokens.
In particular, we encode and aggregate the constituent feature, morphological feature and part of speech feature for each input token to improve the robustness of the method.
We have conducted experiments on the cybersecurity datasets DNRTI and MalwareTextDB, and the results demonstrate the effectiveness of the proposed method.
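The token representation described above, a word embedding aggregated with morphological and part-of-speech features, can be illustrated with a simple concatenation; the dimensions and the use of concatenation rather than a learned aggregator are assumptions for the sketch.

```python
# Illustrative token-feature aggregation (assumed dimensions and simple
# concatenation; the paper's encoder and aggregator may differ).
import torch
import torch.nn as nn

class AugmentedTokenEmbedding(nn.Module):
    def __init__(self, vocab=10_000, pos_tags=50, char_vocab=100,
                 word_dim=128, pos_dim=16, char_dim=32):
        super().__init__()
        self.word = nn.Embedding(vocab, word_dim)
        self.pos = nn.Embedding(pos_tags, pos_dim)
        self.char = nn.EmbeddingBag(char_vocab, char_dim)  # crude morphological feature

    def forward(self, word_ids, pos_ids, char_ids, char_offsets):
        feats = [self.word(word_ids), self.pos(pos_ids),
                 self.char(char_ids, char_offsets)]
        return torch.cat(feats, dim=-1)   # one enriched vector per token
```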
arXiv Detail & Related papers (2022-07-01T06:55:12Z) - A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks [72.7373468905418]
We develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning.
We also propose CUBE, a simple yet strong clustering-based defense baseline.
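As a rough illustration of a clustering-based defense of this kind (not necessarily CUBE's exact procedure, which the snippet above does not detail), one can cluster training-sample representations and drop the minority cluster on the assumption that poisoned samples form a compact minority; KMeans with two clusters is an illustrative choice.

```python
# Hedged sketch of a clustering-based backdoor filter: embed training samples,
# cluster them, and discard the minority cluster if it is small enough.
import numpy as np
from sklearn.cluster import KMeans

def filter_suspicious(embeddings: np.ndarray, keep_threshold: float = 0.5):
    """Return indices of samples kept after dropping the minority cluster."""
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
    counts = np.bincount(labels)
    majority = int(np.argmax(counts))
    # Only drop the other cluster if it is genuinely a small minority.
    if counts.min() < keep_threshold * counts.sum():
        return np.where(labels == majority)[0]
    return np.arange(len(embeddings))
```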
arXiv Detail & Related papers (2022-06-17T02:29:23Z) - Learning-based Hybrid Local Search for the Hard-label Textual Attack [53.92227690452377]
We consider a rarely investigated but more rigorous setting, namely hard-label attack, in which the attacker could only access the prediction label.
Based on this observation, we propose a novel hard-label attack, called Learning-based Hybrid Local Search (LHLS) algorithm.
Our LHLS significantly outperforms existing hard-label attacks regarding the attack performance as well as adversary quality.
arXiv Detail & Related papers (2022-01-20T14:16:07Z) - Detecting adversaries in Crowdsourcing [71.20185379303479]
This work investigates the effects of adversaries on crowdsourced classification, under the popular Dawid and Skene model.
The adversaries are allowed to deviate arbitrarily from the considered crowdsourcing model, and may potentially cooperate.
We develop an approach that leverages the structure of second-order moments of annotator responses, to identify large numbers of adversaries, and mitigate their impact on the crowdsourcing task.
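To show what "second-order moments of annotator responses" refers to, the sketch below computes a pairwise annotator agreement matrix, whose low-rank structure honest annotators tend to follow and arbitrary adversaries tend to break; the rank-1 residual score used here is an illustrative heuristic, not the paper's estimator.

```python
# Illustrative second-order statistics over annotator responses. `responses` is
# an (n_annotators, n_items) float array of binary labels in {0, 1}; NaN marks
# missing labels. The rank-1 residual heuristic is an assumption.
import numpy as np

def agreement_matrix(responses: np.ndarray) -> np.ndarray:
    """M[i, j] = fraction of commonly labeled items where annotators i and j agree."""
    n = responses.shape[0]
    M = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            both = ~np.isnan(responses[i]) & ~np.isnan(responses[j])
            if both.any():
                M[i, j] = M[j, i] = np.mean(responses[i, both] == responses[j, both])
    return M

def deviation_scores(responses: np.ndarray) -> np.ndarray:
    """Score each annotator by how badly they fit the leading eigen-structure."""
    M = agreement_matrix(responses)
    vals, vecs = np.linalg.eigh(M)
    rank1 = vals[-1] * np.outer(vecs[:, -1], vecs[:, -1])  # best rank-1 fit
    return np.abs(M - rank1).mean(axis=1)                  # larger = more suspicious
```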
arXiv Detail & Related papers (2021-10-07T15:07:07Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
arXiv Detail & Related papers (2020-09-16T14:13:15Z) - Automated Retrieval of ATT&CK Tactics and Techniques for Cyber Threat Reports [5.789368942487406]
We evaluate several classification approaches to automatically retrieve Tactics, Techniques and Procedures from unstructured text.
We present rcATT, a tool built on top of our findings and freely distributed to the security community to support cyber threat report automated analysis.
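A minimal baseline for this kind of retrieval, multi-label text classification over threat-report text, might look like the following; the TF-IDF plus linear-SVM pipeline and the toy labels are illustrative assumptions, not rcATT's actual configuration.

```python
# Hedged baseline sketch: multi-label classification of ATT&CK tactics from
# report text. Pipeline choices and the tiny toy data are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

reports = [
    "attacker used spearphishing attachment to gain initial access",
    "credentials were dumped from lsass memory",
]
tactics = [["initial-access"], ["credential-access"]]

mlb = MultiLabelBinarizer()
y = mlb.fit_transform(tactics)
clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LinearSVC()))
clf.fit(reports, y)
pred = clf.predict(["phishing email delivered a malicious attachment"])
print(mlb.inverse_transform(pred))
```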
arXiv Detail & Related papers (2020-04-29T16:45:14Z) - Adversarial Augmentation Policy Search for Domain and Cross-Lingual Generalization in Reading Comprehension [96.62963688510035]
Reading comprehension models often overfit to nuances of training datasets and fail at adversarial evaluation.
We present several effective adversaries and automated data augmentation policy search methods with the goal of making reading comprehension models more robust to adversarial evaluation.
arXiv Detail & Related papers (2020-04-13T17:20:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.