Evaluation Methodologies in Software Protection Research
- URL: http://arxiv.org/abs/2307.07300v2
- Date: Tue, 30 Apr 2024 19:11:26 GMT
- Title: Evaluation Methodologies in Software Protection Research
- Authors: Bjorn De Sutter, Sebastian Schrittwieser, Bart Coppens, Patrick Kochberger
- Abstract summary: Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs.
Both companies and malware authors want to prevent such attacks.
It remains difficult to measure the strength of protections because MATE attackers can reach their goals in many different ways.
- Score: 3.0448872422956437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections because MATE attackers can reach their goals in many different ways and a universally accepted evaluation methodology does not exist. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 571 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, over sample treatment, to performed measurements. We provide detailed insights into how the academic state of the art evaluates both the protections and analyses thereon. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks and formulate a number of concrete recommendations for improving the evaluations reported in future research papers.
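To make the surveyed protection class concrete, here is a minimal sketch of one classic obfuscation, an opaque predicate: a condition that is always true at runtime but hard for a static analyser to resolve. The function names and the decoy branch are illustrative assumptions, not examples from the paper.

```python
# Minimal sketch of an opaque predicate, a classic obfuscation against
# MATE analysis: the guard always evaluates to True, so behaviour is
# preserved, while the decoy branch is dead code meant to mislead
# static analysis.

def pay_out(amount: int) -> int:
    """Original, unprotected logic (hypothetical example)."""
    return amount * 2

def pay_out_obfuscated(amount: int) -> int:
    """Same logic, guarded by an opaque predicate.

    n * (n + 1) is always even, so the condition is always True;
    the else branch is never executed.
    """
    n = amount + 7                   # arbitrary derived value
    if (n * (n + 1)) % 2 == 0:       # opaquely true for all integers
        return amount * 2            # real computation
    return amount - 99               # decoy, unreachable
```

Because the predicate holds for every integer input, the obfuscated variant is observationally identical to the original, which is exactly the property an evaluation methodology must preserve while measuring analysis resistance.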
Related papers
- Evaluating Copyright Takedown Methods for Language Models [100.38129820325497]
Language models (LMs) derive their capabilities from extensive training on diverse data, including potentially copyrighted material.
This paper introduces the first evaluation of the feasibility and side effects of copyright takedowns for LMs.
We examine several strategies, including adding system prompts, decoding-time filtering interventions, and unlearning approaches.
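The decoding-time filtering strategy mentioned above can be sketched as a blocklist check applied while choosing each output token. This is a hypothetical toy illustration of the general idea (greedy decoding over pre-ranked candidates, bigram blocklist), not the paper's implementation.

```python
# Hypothetical sketch of decoding-time filtering: before emitting a
# candidate token, reject it if it would complete a blocked n-gram
# (e.g. one drawn from copyrighted text). The candidate lists stand in
# for a language model's per-step ranked predictions.

from typing import List, Set, Tuple

def filtered_decode(candidates: List[List[str]],
                    blocked_bigrams: Set[Tuple[str, str]]) -> List[str]:
    """Greedy decode: at each step, emit the highest-ranked candidate
    that does not form a blocked bigram with the previous output token;
    fall back to a placeholder if every candidate is blocked."""
    output: List[str] = []
    for step_candidates in candidates:          # best-first per step
        prev = output[-1] if output else None
        for tok in step_candidates:
            if prev is None or (prev, tok) not in blocked_bigrams:
                output.append(tok)
                break
        else:
            output.append("<filtered>")         # all candidates blocked
    return output
```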
arXiv Detail & Related papers (2024-06-26T18:09:46Z)
- A Malware Classification Survey on Adversarial Attacks and Defences [0.0]
Deep learning models are effective at detecting malware, but are vulnerable to adversarial attacks.
Such attacks can create malicious files that resist detection, posing a significant cybersecurity risk.
Recent research has seen the development of several adversarial attack and response approaches.
arXiv Detail & Related papers (2023-12-15T09:25:48Z)
- A Tale of Unrealized Hope: Hardware Performance Counter Against Cache Attacks [0.76146285961466]
This paper investigates an emerging cache side channel attack defense approach involving the use of hardware performance counters (HPCs).
With numerous proposals and promising reported results, we seek to investigate whether published HPC-based detection methods are evaluated in a proper setting.
arXiv Detail & Related papers (2023-11-17T14:08:47Z)
- Defense of Adversarial Ranking Attack in Text Retrieval: Benchmark and Baseline via Detection [12.244543468021938]
This paper introduces two types of detection tasks for adversarial documents.
A benchmark dataset is established to facilitate the investigation of adversarial ranking defense.
A comprehensive investigation of the performance of several detection baselines is conducted.
arXiv Detail & Related papers (2023-07-31T16:31:24Z)
- Detecting Misuse of Security APIs: A Systematic Review [5.329280109719902]
Security Application Programming Interfaces (APIs) are crucial for ensuring software security.
Their misuse introduces vulnerabilities, potentially leading to severe data breaches and substantial financial loss.
This study rigorously reviews the literature on detecting misuse of security APIs to gain a comprehensive understanding of this critical domain.
arXiv Detail & Related papers (2023-06-15T05:53:23Z)
- From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework [91.94389491920309]
Textual adversarial attacks can discover models' weaknesses by adding semantic-preserved but misleading perturbations to the inputs.
The existing practice of robustness evaluation may exhibit issues of incomprehensive evaluation, impractical evaluation protocol, and invalid adversarial samples.
We set up a unified automatic robustness evaluation framework, shifting towards model-centric evaluation to exploit the advantages of adversarial attacks.
arXiv Detail & Related papers (2023-05-29T14:55:20Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many methods that have been proposed can cause direct harm, like false rejection and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses, randomized smoothing and neural rejection, and find randomized smoothing more equitable for minority groups because of its sampling mechanism.
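The randomized smoothing defense compared in that entry can be sketched in a few lines: the smoothed classifier returns the majority vote of a base classifier over noise-perturbed copies of the input. The toy threshold "classifier" and the parameter values below are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the prediction side of randomized smoothing: vote
# over many Gaussian-perturbed copies of the input. Inputs far from the
# decision boundary keep their label; noise-sensitive inputs reveal the
# base classifier's fragility.

import random
from collections import Counter

def base_classify(x: float) -> int:
    """Toy base classifier: class 1 iff x > 0."""
    return 1 if x > 0 else 0

def smoothed_classify(x: float, sigma: float = 0.5,
                      n_samples: int = 201, seed: int = 0) -> int:
    """Majority vote of the base classifier over n_samples noisy copies."""
    rng = random.Random(seed)
    votes = Counter(base_classify(x + rng.gauss(0.0, sigma))
                    for _ in range(n_samples))
    return votes.most_common(1)[0][0]
```

The equity concern raised in the entry arises exactly here: the sampling step changes error rates differently for different sub-populations of inputs.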
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- A Unified Evaluation of Textual Backdoor Learning: Frameworks and Benchmarks [72.7373468905418]
We develop an open-source toolkit OpenBackdoor to foster the implementations and evaluations of textual backdoor learning.
We also propose CUBE, a simple yet strong clustering-based defense baseline.
arXiv Detail & Related papers (2022-06-17T02:29:23Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
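For context, the hand-crafted update rule that learned attack optimizers aim to replace is a sign-gradient step. Below is a toy sketch of that baseline on a linear classifier, where the gradient of the margin is simply the weight vector; the model and parameters are assumptions for illustration, not the paper's method.

```python
# Background sketch (not the MAMA method itself): the classic
# sign-gradient attack step that learned optimizers generalize. For a
# linear classifier, the gradient of the margin w.r.t. the input is
# the weight vector, so we step against sign(w) to flip the decision.

from typing import List

def linear_margin(w: List[float], x: List[float], b: float) -> float:
    """Decision margin of a linear classifier: > 0 means class +1."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign_gradient_attack(w: List[float], x: List[float], b: float,
                         eps_step: float = 0.1, steps: int = 20) -> List[float]:
    """Iteratively push x across the decision boundary, stopping as
    soon as the margin is no longer positive."""
    adv = list(x)
    for _ in range(steps):
        if linear_margin(w, adv, b) <= 0:    # already misclassified
            break
        adv = [xi - eps_step * (1 if wi > 0 else -1 if wi < 0 else 0)
               for xi, wi in zip(adv, w)]
    return adv
```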
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Catch Me if I Can: Detecting Strategic Behaviour in Peer Assessment [61.24399136715106]
We consider the issue of strategic behaviour in various peer-assessment tasks, including peer grading of exams or homeworks and peer review in hiring or promotions.
Our focus is on designing methods for detection of such manipulations.
Specifically, we consider a setting in which agents evaluate a subset of their peers and output rankings that are later aggregated to form a final ordering.
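The aggregation step described in that setting can be illustrated with Borda-style scoring: each partial ranking awards points by position, and the final ordering sorts items by total points. This is a generic sketch of rank aggregation, not the detection method from the paper.

```python
# Minimal sketch of aggregating partial peer rankings into one final
# ordering via Borda-style scoring: an item in position i of a k-item
# ranking earns k - i points, and the final order sorts by total score.

from collections import defaultdict
from typing import Dict, List

def aggregate_rankings(rankings: List[List[str]]) -> List[str]:
    """Each inner list ranks a subset of items, best first."""
    scores: Dict[str, float] = defaultdict(float)
    for ranking in rankings:
        k = len(ranking)
        for i, item in enumerate(ranking):
            scores[item] += k - i
    # Descending total score; alphabetical tie-break for determinism.
    return sorted(scores, key=lambda item: (-scores[item], item))
```

Strategic behaviour of the kind the paper detects would show up here as agents submitting rankings crafted to distort these totals rather than reflect true quality.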
arXiv Detail & Related papers (2020-10-08T15:08:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.