ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via
Synthesis Tuning
- URL: http://arxiv.org/abs/2303.03372v1
- Date: Mon, 6 Mar 2023 18:55:58 GMT
- Title: ALMOST: Adversarial Learning to Mitigate Oracle-less ML Attacks via
Synthesis Tuning
- Authors: Animesh Basak Chowdhury, Lilas Alrahis, Luca Collini, Johann Knechtel,
Ramesh Karri, Siddharth Garg, Ozgur Sinanoglu, Benjamin Tan
- Abstract summary: Oracle-less machine learning (ML) attacks have broken various logic locking schemes.
We propose ALMOST, a framework for adversarial learning to mitigate oracle-less ML attacks via synthesis tuning.
- Score: 18.758747687330384
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Oracle-less machine learning (ML) attacks have broken various logic locking
schemes. Regular synthesis, which is tailored for area-power-delay
optimization, yields netlists where key-gate localities are vulnerable to
learning. Thus, we call for security-aware logic synthesis. We propose ALMOST,
a framework for adversarial learning to mitigate oracle-less ML attacks via
synthesis tuning. ALMOST uses a simulated-annealing-based synthesis recipe
generator, employing adversarially trained models that can predict
state-of-the-art attacks' accuracies over wide ranges of recipes and key-gate
localities. Experiments on ISCAS benchmarks confirm that the attacks' accuracies
drop to around 50% for ALMOST-synthesized circuits, all while not undermining
design optimization.
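To make the recipe-search loop concrete, below is a minimal sketch of simulated annealing over synthesis recipes scored by a learned attack-accuracy predictor. The transform pool and predict_attack_accuracy are hypothetical stand-ins for real synthesis passes and ALMOST's adversarially trained models, not the authors' implementation; the cost drives the predicted attack accuracy toward the 50% coin-flip ideal.

```python
import math
import random

# Hypothetical pool of synthesis transforms (stand-ins for real passes).
TRANSFORMS = ["rewrite", "refactor", "resub", "balance", "rewrite -z"]

def predict_attack_accuracy(recipe):
    """Stand-in for ALMOST's adversarially trained predictor: maps a
    synthesis recipe to an expected oracle-less attack accuracy.
    Here it returns a deterministic pseudo-score for illustration."""
    rng = random.Random(hash(tuple(recipe)) & 0xFFFFFFFF)
    return 0.5 + 0.4 * rng.random()  # 0.5 is ideal (random guessing)

def anneal_recipe(length=8, steps=500, t0=1.0, cooling=0.99):
    """Simulated annealing: seek a recipe whose predicted attack
    accuracy is as close to 50% (coin-flip) as possible."""
    recipe = [random.choice(TRANSFORMS) for _ in range(length)]
    cost = abs(predict_attack_accuracy(recipe) - 0.5)
    temp = t0
    for _ in range(steps):
        neighbor = recipe[:]
        neighbor[random.randrange(length)] = random.choice(TRANSFORMS)
        new_cost = abs(predict_attack_accuracy(neighbor) - 0.5)
        # Always accept improvements; accept regressions with Boltzmann probability.
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / temp):
            recipe, cost = neighbor, new_cost
        temp *= cooling
    return recipe, cost

if __name__ == "__main__":
    best, gap = anneal_recipe()
    print("recipe:", "; ".join(best), "| |acc-0.5| =", round(gap, 3))
```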
Related papers
- DECOR: Enhancing Logic Locking Against Machine Learning-Based Attacks [0.6131022957085439]
Logic locking (LL) has gained attention as a promising intellectual property protection measure for integrated circuits.
Recent attacks, facilitated by machine learning (ML), have shown the potential to predict the correct key in multiple LL schemes.
This paper presents a generic LL enhancement method based on a randomized algorithm that can significantly decrease the correlation between locked circuit netlist and correct key values in an LL scheme.
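The randomized algorithm itself is in the paper; as a loose toy illustration of decorrelation in XOR/XNOR locking (not DECOR's actual construction), the sketch below picks the visible key-gate type at random and compensates with an inverter, so the gate type alone no longer predicts the key bit.

```python
import random

def insert_key_gate(correct_bit):
    """Toy decorrelated key-gate insertion (illustration only). An XOR
    key gate followed by an inverter computes the same function as an
    XNOR key gate, so the visible gate type can be chosen at random and
    compensated with an inverter that synthesis later absorbs into
    downstream logic. The visible type then carries no information
    about the correct key bit."""
    visible = random.choice(["XOR", "XNOR"])
    # XOR needs correct bit 0, XNOR needs correct bit 1; if the visible
    # type disagrees with the correct bit, compensate with an inverter.
    needs_inverter = (visible == "XNOR") != bool(correct_bit)
    return visible, needs_inverter

if __name__ == "__main__":
    random.seed(0)
    key = [random.randint(0, 1) for _ in range(10000)]
    gates = [insert_key_gate(b)[0] for b in key]
    # Correlation check: P(XNOR | key=1) should be ~0.5 after decorrelation.
    ones = [g for g, b in zip(gates, key) if b == 1]
    print("P(XNOR | key=1) =", round(ones.count("XNOR") / len(ones), 3))
```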
arXiv Detail & Related papers (2024-03-04T07:31:23Z)
- Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal [49.24054920683246]
Large language models (LLMs) suffer from catastrophic forgetting during continual learning.
We propose a framework called Self-Synthesized Rehearsal (SSR) that uses the LLM to generate synthetic instances for rehearsal.
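A rough sketch of the rehearsal loop, with llm_generate as a hypothetical stand-in for the LLM's own generation step:

```python
import random

def llm_generate(prompt):
    """Hypothetical stand-in: in SSR the LLM itself synthesizes
    rehearsal instances resembling earlier tasks."""
    return f"synthetic example for: {prompt}"

def build_training_batch(new_task_data, old_task_prompts, rehearsal_ratio=0.2):
    """Mix current-task data with self-synthesized rehearsal instances
    so earlier capabilities are refreshed during continual learning."""
    n_rehearsal = int(len(new_task_data) * rehearsal_ratio)
    rehearsal = [llm_generate(random.choice(old_task_prompts))
                 for _ in range(n_rehearsal)]
    batch = new_task_data + rehearsal
    random.shuffle(batch)
    return batch

batch = build_training_batch(
    new_task_data=[f"new-task sample {i}" for i in range(10)],
    old_task_prompts=["summarize", "translate", "answer questions"],
)
print(len(batch), "examples, incl. rehearsal")
```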
arXiv Detail & Related papers (2024-03-02T16:11:23Z)
- Retrieval-Guided Reinforcement Learning for Boolean Circuit Minimization [23.075466444266528]
This study conducts a thorough examination of learning and search techniques for logic synthesis.
We present ABC-RL, which uses a meticulously tuned $\alpha$ parameter to adeptly adjust recommendations from pre-trained agents during the search process.
Our findings showcase substantial enhancements in the Quality-of-result (QoR) of synthesized circuits, boasting improvements of up to 24.8% compared to state-of-the-art techniques.
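One way to read the $\alpha$ mechanism (a hypothetical toy, not the paper's exact scoring rule): interpolate between a raw search estimate and the pre-trained agent's recommendation when ranking candidate synthesis moves.

```python
def blended_score(search_value, agent_prior, alpha):
    """Toy blend of a search estimate with a pre-trained agent's
    recommendation. alpha in [0, 1]: 0 ignores the agent entirely
    (pure search), 1 trusts it fully. ABC-RL tunes alpha based on how
    similar the new design is to the training distribution."""
    return (1.0 - alpha) * search_value + alpha * agent_prior

# Hypothetical candidate synthesis moves: (search value, agent prior).
candidates = {"rewrite": (0.42, 0.90), "resub": (0.55, 0.30), "balance": (0.50, 0.45)}

for alpha in (0.0, 0.5, 1.0):
    best = max(candidates, key=lambda m: blended_score(*candidates[m], alpha))
    print(f"alpha={alpha}: pick {best}")
```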
arXiv Detail & Related papers (2024-01-22T18:46:30Z)
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
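A toy sketch of the architectural idea, assuming a standard PyTorch transformer encoder (not the paper's exact model): the network reads the (x, y) observation history plus a candidate point and emits an acquisition score directly, replacing closed-form acquisitions like expected improvement.

```python
import torch
import torch.nn as nn

class TransformerAcquisition(nn.Module):
    """Toy transformer mapping an observation history (x, y) and a
    candidate point x* to an acquisition score."""
    def __init__(self, x_dim=1, d_model=32):
        super().__init__()
        self.embed_obs = nn.Linear(x_dim + 1, d_model)   # (x, y) pairs
        self.embed_query = nn.Linear(x_dim, d_model)     # candidate x*
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)

    def forward(self, xs, ys, x_query):
        obs = self.embed_obs(torch.cat([xs, ys], dim=-1))  # (B, N, d)
        q = self.embed_query(x_query).unsqueeze(1)         # (B, 1, d)
        h = self.encoder(torch.cat([obs, q], dim=1))       # (B, N+1, d)
        return self.head(h[:, -1]).squeeze(-1)             # one score per batch item

model = TransformerAcquisition()
xs, ys = torch.rand(2, 5, 1), torch.rand(2, 5, 1)
print(model(xs, ys, torch.rand(2, 1)).shape)  # torch.Size([2])
```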
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- INVICTUS: Optimizing Boolean Logic Circuit Synthesis via Synergistic Learning and Search [18.558280701880136]
State-of-the-art logic synthesis algorithms rely on a large number of logic minimization heuristics.
INVICTUS generates a sequence of logic minimizations based on a training dataset of previously seen designs.
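As a loose toy of the learning-plus-search synergy (the policy stand-in and pass names below are hypothetical, not INVICTUS's actual model):

```python
import random

PASSES = ["rewrite", "refactor", "resub", "balance"]

def learned_policy(history):
    """Hypothetical stand-in for a model trained on recipes from
    previously seen designs: scores the next minimization pass."""
    rng = random.Random(hash(tuple(history)) & 0xFFFFFFFF)
    return {p: rng.random() for p in PASSES}

def generate_recipe(length=6, explore=0.25):
    """Toy synergy of learning and search: usually follow the learned
    policy, occasionally explore a random pass (illustration only)."""
    recipe = []
    for _ in range(length):
        if random.random() < explore:
            recipe.append(random.choice(PASSES))        # search / explore
        else:
            scores = learned_policy(recipe)
            recipe.append(max(scores, key=scores.get))  # exploit policy
    return recipe

print("; ".join(generate_recipe()))
```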
arXiv Detail & Related papers (2023-05-22T15:50:42Z)
- Reconstruction-based LSTM-Autoencoder for Anomaly-based DDoS Attack Detection over Multivariate Time-Series Data [6.642599588462097]
A Distributed Denial-of-service (DDoS) attack is a malicious attempt to disrupt the regular traffic of a targeted server, service, or network by sending a flood of traffic to overwhelm the target or its surrounding infrastructure.
Traditional statistical and shallow machine learning techniques can detect superficial anomalies based on shallow data and feature selection; however, these approaches cannot detect unseen DDoS attacks.
We propose a reconstruction-based anomaly detection model named LSTM-Autoencoder (LSTM-AE) which combines two deep learning-based models for detecting DDoS attack anomalies.
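A minimal sketch of the reconstruction-based idea, assuming a standard PyTorch LSTM autoencoder rather than the paper's exact architecture: train on benign traffic windows only, then flag windows whose reconstruction error exceeds a threshold calibrated on benign data.

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Minimal LSTM autoencoder: trained to reconstruct benign traffic
    windows; windows with high reconstruction error are anomalous."""
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                      # x: (batch, time, features)
        _, (h, _) = self.encoder(x)            # h: (1, batch, hidden)
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)  # repeat latent per step
        dec, _ = self.decoder(z)
        return self.out(dec)

model = LSTMAutoencoder()
window = torch.rand(4, 20, 8)                  # 4 windows, 20 steps, 8 features
error = ((model(window) - window) ** 2).mean(dim=(1, 2))
flagged = error > 0.5                          # hypothetical threshold from benign data
print(error, flagged)
```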
arXiv Detail & Related papers (2023-04-21T03:56:03Z)
- Versatile Weight Attack via Flipping Limited Bits [68.45224286690932]
We study a novel attack paradigm, which modifies model parameters in the deployment stage.
Considering the effectiveness and stealthiness goals, we provide a general formulation to perform the bit-flip based weight attack.
We present two cases of the general formulation with different malicious purposes, i.e., the single sample attack (SSA) and the triggered samples attack (TSA).
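To see why single bit flips are so potent, here is a generic toy (not the paper's formulation, which selects bits via optimization; attacks in this family also typically target quantized weights in memory) flipping individual bits of a float32 weight encoding:

```python
import struct

def flip_bit(value, bit):
    """Flip one bit in the IEEE-754 float32 encoding of a weight, the
    kind of primitive a bit-flip weight attack leverages (toy only)."""
    (as_int,) = struct.unpack("<I", struct.pack("<f", value))
    (flipped,) = struct.unpack("<f", struct.pack("<I", as_int ^ (1 << bit)))
    return flipped

w = 0.5
for bit in (0, 23, 30, 31):   # mantissa LSB, exponent LSB, exponent MSB, sign
    print(f"bit {bit:2d}: {w} -> {flip_bit(w, bit)}")
```

Flipping a mantissa bit barely perturbs the weight, while flipping a high exponent bit can blow it up by dozens of orders of magnitude, which is why a handful of well-chosen flips suffices.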
arXiv Detail & Related papers (2022-07-25T03:24:58Z)
- Fast Distributionally Robust Learning with Variance Reduced Min-Max Optimization [85.84019017587477]
Distributionally robust supervised learning is emerging as a key paradigm for building reliable machine learning systems for real-world applications.
Existing algorithms for solving Wasserstein DRSL involve solving complex subproblems or fail to make use of gradients.
We revisit Wasserstein DRSL through the lens of min-max optimization and derive scalable and efficiently implementable extra-gradient algorithms.
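The extra-gradient step in isolation, as a generic textbook sketch rather than the paper's variance-reduced algorithm: evaluate gradients at a look-ahead point, which stabilizes min-max problems where plain descent-ascent spirals outward.

```python
# Extragradient on the bilinear saddle problem min_x max_y x*y.
# Plain gradient descent-ascent diverges here; the extrapolation
# ("look-ahead") step is what stabilizes min-max optimization.
x, y, eta = 1.0, 1.0, 0.5
for t in range(200):
    # Look-ahead half-step at the current point.
    x_half = x - eta * y          # grad_x (x*y) = y
    y_half = y + eta * x          # grad_y (x*y) = x
    # Real update uses gradients evaluated at the look-ahead point.
    x, y = x - eta * y_half, y + eta * x_half
print(round(x, 6), round(y, 6))   # both approach the saddle at (0, 0)
```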
arXiv Detail & Related papers (2021-04-27T16:56:09Z)
- Composite Adversarial Attacks [57.293211764569996]
An adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
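A brute-force caricature of the search problem (hypothetical attack names and evaluator; CAA searches the space of attack algorithms and their hyperparameters far more efficiently):

```python
import itertools
import random

# Hypothetical attack pool; in CAA the search space consists of real
# attack algorithms and their hyperparameters.
ATTACKS = ["fgsm", "pgd", "spatial", "color_jitter"]

def attack_success_rate(composition):
    """Stand-in evaluator: returns a pseudo success rate for an ordered
    composition of attacks against some fixed defense."""
    rng = random.Random(hash(composition) & 0xFFFFFFFF)
    return rng.random()

best = max(
    (c for n in (1, 2) for c in itertools.permutations(ATTACKS, n)),
    key=attack_success_rate,
)
print("best composition:", " -> ".join(best),
      f"(rate {attack_success_rate(best):.2f})")
```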
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
- Challenging the Security of Logic Locking Schemes in the Era of Deep Learning: A Neuroevolutionary Approach [0.2982610402087727]
Deep learning is being introduced in the domain of logic locking.
We present SnapShot: a novel attack on logic locking that is the first of its kind to utilize artificial neural networks.
We show that SnapShot achieves an average key prediction accuracy of 82.60% for the selected attack scenario.
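A toy stand-in for the prediction task (the paper evolves the network architecture via neuroevolution; here a fixed MLP on synthetic "locality" features illustrates why key-gate localities can leak key bits):

```python
import torch
import torch.nn as nn

# Toy stand-in: predict a key bit from a feature vector extracted
# around a key gate. Fixed MLP for illustration only.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic "locality" features with a weak correlation to the key bit.
X = torch.randn(512, 16)
y = (X[:, 0] + 0.5 * torch.randn(512) > 0).float().unsqueeze(1)

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

acc = ((model(X) > 0).float() == y).float().mean()
print(f"key prediction accuracy: {acc:.2%}")  # well above the 50% coin-flip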
arXiv Detail & Related papers (2020-11-20T13:03:19Z)
- Revisiting LSTM Networks for Semi-Supervised Text Classification via Mixed Objective Function [106.69643619725652]
We develop a training strategy that allows even a simple BiLSTM model, when trained with cross-entropy loss, to achieve competitive results.
We report state-of-the-art results for text classification task on several benchmark datasets.
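A minimal sketch of the baseline in question, assuming standard PyTorch modules (illustrative hyperparameters, not the paper's setup):

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal BiLSTM text classifier trained with cross-entropy, the
    simple baseline the paper shows can be competitive."""
    def __init__(self, vocab=10000, emb=128, hidden=128, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, tokens):                 # tokens: (batch, seq)
        _, (h, _) = self.lstm(self.embed(tokens))
        # Concatenate the final forward and backward hidden states.
        return self.fc(torch.cat([h[-2], h[-1]], dim=-1))

model = BiLSTMClassifier()
tokens = torch.randint(0, 10000, (4, 32))
loss = nn.CrossEntropyLoss()(model(tokens), torch.tensor([0, 1, 0, 1]))
print(loss.item())
```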
arXiv Detail & Related papers (2020-09-08T21:55:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.