The Invisible Game on the Internet: A Case Study of Decoding Deceptive Patterns
- URL: http://arxiv.org/abs/2402.03569v1
- Date: Mon, 5 Feb 2024 22:42:59 GMT
- Title: The Invisible Game on the Internet: A Case Study of Decoding Deceptive Patterns
- Authors: Zewei Shi, Ruoxi Sun, Jieshan Chen, Jiamou Sun, Minhui Xue
- Abstract summary: Deceptive patterns are design practices embedded in digital platforms to manipulate users.
Despite advancements in detection tools, a significant gap exists in assessing deceptive pattern risks.
- Score: 19.55209153462331
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deceptive patterns are design practices embedded in digital platforms to manipulate users, representing a widespread and long-standing issue in the web and mobile software development industry. Legislative actions highlight the urgency of globally regulating deceptive patterns. However, despite advancements in detection tools, a significant gap exists in assessing deceptive pattern risks. In this study, we introduce a comprehensive approach involving the interactions between the Adversary, Watchdog (e.g., detection tools), and Challengers (e.g., users) to formalize and decode deceptive pattern threats. Based on this, we propose a quantitative risk assessment system. Representative cases are analyzed to showcase the practicability of the proposed risk scoring system, emphasizing the importance of involving human factors in deceptive pattern risk assessment.
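As a loose illustration of how a quantitative risk score might combine the three roles, consider the sketch below. The factor names, the [0, 1] normalization, and the multiplicative form are assumptions made for illustration only, not the paper's actual scoring system.

```python
# Hypothetical sketch of a deceptive-pattern risk score in the spirit of the
# Adversary / Watchdog / Challenger framing; the factor names and the
# multiplicative form are illustrative assumptions, not the paper's formula.

def deceptive_pattern_risk(adversary_capability: float,
                           watchdog_detection_rate: float,
                           challenger_susceptibility: float) -> float:
    """Each input is a normalized score in [0, 1].

    Risk grows with the adversary's capability and the users'
    (challengers') susceptibility, and shrinks as the watchdog
    (detection tooling) catches more instances.
    """
    evasion = 1.0 - watchdog_detection_rate  # fraction slipping past detection
    return adversary_capability * evasion * challenger_susceptibility

# Example: capable adversary, mediocre detection, highly susceptible users.
print(deceptive_pattern_risk(0.8, 0.5, 0.9))  # 0.36
```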
Related papers
- Visually Analyze SHAP Plots to Diagnose Misclassifications in ML-based Intrusion Detection [0.3199881502576702]
An intrusion detection system (IDS) can mitigate threats by providing alerts.
To detect these threats, various machine learning (ML) and deep learning (DL) models have been proposed.
In this paper, we propose an explainable artificial intelligence (XAI) based visual analysis approach using overlapping SHAP plots.
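A minimal sketch of the diagnosis idea follows; it is not the paper's code. The dataset, model, and background-sample sizes are placeholders, and the two summary plots are drawn on the same axes to approximate the "overlapping" comparison of misclassified versus correctly classified samples.

```python
# Illustrative sketch: fit a stand-in IDS classifier, compute SHAP values,
# and plot summaries for misclassified vs. correctly classified samples.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

f = lambda X: model.predict_proba(X)[:, 1]  # probability of the attack class
explainer = shap.KernelExplainer(f, shap.sample(X_tr, 50))
sv = explainer.shap_values(X_te[:100], nsamples=100)  # (100, n_features)

miss = model.predict(X_te[:100]) != y_te[:100]
shap.summary_plot(sv[miss], X_te[:100][miss], show=False)  # misclassified
shap.summary_plot(sv[~miss], X_te[:100][~miss])            # correct
```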
arXiv Detail & Related papers (2024-11-04T23:08:34Z) - A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis [0.6199770411242359]
This paper presents a novel human-centered risk evaluation framework using conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on an attacker's motivation.
Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases.
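The core combination of FAR and attack probability can be pictured with the sketch below. The part-worth utility values and the logistic link are illustrative assumptions; only the general "risk = FAR combined with attack probability" shape comes from the summary above.

```python
# Hedged sketch: a risk value combining the biometric False Acceptance Rate
# (FAR) with an attack probability that conjoint-derived part-worth
# utilities scale up or down. All concrete numbers are made up.
import math

def attack_probability(base_motivation: float, partworths: dict[str, float],
                       present_factors: list[str]) -> float:
    """Map motivation plus conjoint part-worth utilities to a probability."""
    u = base_motivation + sum(partworths[f] for f in present_factors)
    return 1.0 / (1.0 + math.exp(-u))  # logistic link (assumption)

def risk_value(far: float, p_attack: float) -> float:
    return far * p_attack  # expected rate of successful spoofing attempts

partworths = {"surveillance_camera": -1.2, "unattended_kiosk": +0.8}
p = attack_probability(0.5, partworths, ["surveillance_camera"])
print(risk_value(far=0.001, p_attack=p))
```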
arXiv Detail & Related papers (2024-09-17T14:18:21Z) - A Survey and Evaluation of Adversarial Attacks for Object Detection [11.48212060875543]
Deep learning models excel in various computer vision tasks but are susceptible to adversarial examples: subtle perturbations in input data that lead to incorrect predictions.
This vulnerability poses significant risks in safety-critical applications such as autonomous vehicles, security surveillance, and aircraft health monitoring.
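For concreteness, the classic fast gradient sign method (FGSM) illustrates such a perturbation; this is the standard textbook technique, not code from the surveyed papers.

```python
# Minimal FGSM sketch: a small perturbation crafted to flip a prediction.
import torch
import torch.nn.functional as F

def fgsm(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
         eps: float = 0.03) -> torch.Tensor:
    """Return x perturbed one epsilon-step in the direction that
    maximizes the classification loss (Goodfellow et al., 2015)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```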
arXiv Detail & Related papers (2024-08-04T05:22:08Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how such adversarial inputs can affect the safety of a given DRL system.
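The paper defines the Adversarial Rate via formal verification; as a loose empirical proxy only (an assumption, not their definition), one could count how often a bounded perturbation changes the policy's chosen action:

```python
# Rough empirical stand-in for an "adversarial rate": the fraction of states
# where random bounded noise flips the policy's action. Illustrative only.
import torch

def empirical_adversarial_rate(policy, states: torch.Tensor,
                               eps: float = 0.05, trials: int = 20) -> float:
    flipped = 0
    for s in states:
        base = policy(s.unsqueeze(0)).argmax(dim=1)
        for _ in range(trials):
            noisy = s + eps * torch.empty_like(s).uniform_(-1, 1)
            if policy(noisy.unsqueeze(0)).argmax(dim=1) != base:
                flipped += 1
                break
    return flipped / len(states)
```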
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
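The filtering countermeasure can be pictured with a minimal sketch; the Triple structure, the confidence field, and the threshold are illustrative assumptions, not details from the paper.

```python
# Illustrative-only sketch of "filter potentially poisoning knowledge":
# drop low-confidence triples before they enter the knowledge graph.
from typing import NamedTuple

class Triple(NamedTuple):
    head: str
    relation: str
    tail: str
    confidence: float  # e.g., extractor or provenance score in [0, 1]

def filter_triples(triples: list[Triple], min_conf: float = 0.8) -> list[Triple]:
    return [t for t in triples if t.confidence >= min_conf]
```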
arXiv Detail & Related papers (2023-05-03T18:47:42Z) - Against Algorithmic Exploitation of Human Vulnerabilities [2.6918074738262194]
We are concerned with the problem of machine learning models inadvertently modelling human vulnerabilities.
We describe common vulnerabilities, and illustrate cases where they are likely to play a role in algorithmic decision-making.
We propose a set of requirements for methods to detect the potential for vulnerability modelling.
arXiv Detail & Related papers (2023-01-12T13:15:24Z) - Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review [0.0]
Deep learning models have been found vulnerable to crafted data instances that mislead them into making incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z) - Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision signals (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision needed to learn discriminative representations for segmentation tasks.
We propose adversarial self-supervision UDA (ASSUDA), which maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
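The agreement term can be sketched as below. A full InfoNCE-style contrastive loss also needs negative pairs; this simplified consistency term is an assumption for illustration, not ASSUDA's exact loss.

```python
# Hedged sketch of "maximize agreement between clean images and their
# adversarial examples in the output space" for a segmentation model.
import torch
import torch.nn.functional as F

def agreement_loss(seg_model: torch.nn.Module, x_clean: torch.Tensor,
                   x_adv: torch.Tensor) -> torch.Tensor:
    p_clean = F.softmax(seg_model(x_clean), dim=1)      # (N, C, H, W)
    log_p_adv = F.log_softmax(seg_model(x_adv), dim=1)
    # KL divergence between the two predictions: small when they agree.
    return F.kl_div(log_p_adv, p_clean, reduction="batchmean")
```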
arXiv Detail & Related papers (2021-05-23T01:50:44Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing activation profiles can quickly pinpoint the areas of a model that an attack exploits.
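One way to picture an activation profile is the hypothetical helper below, which records a mean activation per ReLU layer via forward hooks; the hook placement and the summary statistic are assumptions, not the paper's method.

```python
# Illustrative sketch: record per-layer mean activations, then compare the
# profiles of a clean input and its adversarial counterpart.
import torch

def activation_profile(model: torch.nn.Module,
                       x: torch.Tensor) -> dict[str, float]:
    profile, hooks = {}, []
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.ReLU):
            hooks.append(module.register_forward_hook(
                lambda m, i, o, name=name:
                    profile.__setitem__(name, o.mean().item())))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()
    return profile

# Layers where |profile_clean[k] - profile_adv[k]| is largest are candidate
# "exploited areas" to inspect visually.
```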
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different types of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
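A common coverage-style monitor, shown below as an illustration rather than the paper's exact architecture, flags inputs whose features fall outside the ranges observed on training data:

```python
# Sketch of a range-based coverage monitor: inputs that activate features
# outside training-time ranges are flagged as potentially unsafe.
import torch

class RangeMonitor:
    def __init__(self):
        self.lo = self.hi = None

    def fit(self, feats: torch.Tensor) -> None:
        # feats: (N, D) penultimate-layer features from the training set.
        self.lo = feats.min(dim=0).values
        self.hi = feats.max(dim=0).values

    def is_unsafe(self, feat: torch.Tensor, tol: float = 0.0) -> bool:
        out_of_range = (feat < self.lo - tol) | (feat > self.hi + tol)
        return bool(out_of_range.any())
```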
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)