Patch Synthesis for Property Repair of Deep Neural Networks
- URL: http://arxiv.org/abs/2404.01642v2
- Date: Sat, 01 Feb 2025 02:49:12 GMT
- Title: Patch Synthesis for Property Repair of Deep Neural Networks
- Authors: Zhiming Chi, Jianan Ma, Pengfei Yang, Cheng-Chao Huang, Renjue Li, Xiaowei Huang, Lijun Zhang
- Abstract summary: We introduce PatchPro, a novel patch-based approach for property-level repair of deep neural networks (DNNs).
PatchPro provides specialized repairs for all samples within the robustness neighborhood while maintaining the network's original performance.
Our method incorporates formal verification and a mechanism for allocating patch modules, enabling it to defend against adversarial attacks.
- Score: 15.580097790702508
- Abstract: Deep neural networks (DNNs) are prone to various dependability issues, such as adversarial attacks, which hinder their adoption in safety-critical domains. Recently, NN repair techniques have been proposed to address these issues while preserving original performance by locating and modifying guilty neurons and their parameters. However, existing repair approaches are often limited to specific datasets and do not provide theoretical guarantees for the effectiveness of the repairs. To address these limitations, we introduce PatchPro, a novel patch-based approach for property-level repair of DNNs, focusing on local robustness. The key idea behind PatchPro is to construct patch modules that, when integrated with the original network, provide specialized repairs for all samples within the robustness neighborhood while maintaining the network's original performance. Our method incorporates formal verification and a heuristic mechanism for allocating patch modules, enabling it to defend against adversarial attacks and generalize to other inputs. PatchPro demonstrates superior efficiency, scalability, and repair success rates compared to existing DNN repair methods, achieving provable property-level repair in 100% of cases across multiple high-dimensional datasets.
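The patch-module idea in the abstract can be pictured with a short sketch. The following is only a minimal illustration under stated assumptions, not PatchPro's actual implementation: it assumes l-infinity robustness neighborhoods, uses a hypothetical PatchedNet wrapper, and omits the formal verification and heuristic allocation steps entirely.

```python
import torch
import torch.nn as nn

class PatchedNet(nn.Module):
    """Hypothetical wrapper illustrating patch-based repair: each patch
    module is tied to one l-infinity robustness neighborhood and only
    corrects outputs for inputs inside it, so behavior elsewhere (and
    hence the network's original performance) is untouched."""

    def __init__(self, base: nn.Module, in_dim: int, out_dim: int):
        super().__init__()
        self.base = base
        self.in_dim, self.out_dim = in_dim, out_dim
        self.centers = []                # neighborhood centers
        self.radii = []                  # matching l-infinity radii
        self.patches = nn.ModuleList()   # one small module per neighborhood

    def allocate_patch(self, center: torch.Tensor, radius: float) -> None:
        """Attach a fresh patch for a neighborhood found to violate robustness."""
        self.centers.append(center.detach())
        self.radii.append(radius)
        self.patches.append(nn.Sequential(
            nn.Linear(self.in_dim, 16), nn.ReLU(), nn.Linear(16, self.out_dim)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)
        for c, r, patch in zip(self.centers, self.radii, self.patches):
            inside = (x - c).abs().amax(dim=1) <= r  # batch mask for B_inf(c, r)
            y = y + inside.to(y.dtype).unsqueeze(1) * patch(x)
        return y
```

Training would then fit only the patch parameters on the violated neighborhood (base network frozen), and a verifier would certify the repaired property; both steps are beyond this sketch.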
Related papers
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when they are applied in real-world settings.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
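For readers unfamiliar with bound propagation, the core operation such verification tools scale up fits in a few lines. Below is a generic interval bound propagation sketch for a toy fully connected ReLU network; it illustrates the standard technique only, not this paper's tool or its parallelization.

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b. Standard interval
    arithmetic: positive weights map lower bounds to lower bounds,
    negative weights swap them."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-layer network: bound outputs for all x with ||x - x0||_inf <= eps.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)
x0, eps = rng.standard_normal(4), 0.05

lo, hi = x0 - eps, x0 + eps
lo, hi = ibp_relu(*ibp_linear(lo, hi, W1, b1))
lo, hi = ibp_linear(lo, hi, W2, b2)
print("output bounds:", lo, hi)  # sound (possibly loose) reachable box
```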
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate the set of all the regions of the property input domain which are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
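The tolerance-limit idea can be illustrated with a generic sampling sketch. This is a simplification built on a one-sided Wilks bound; epsilon-ProVe's actual construction and guarantees differ.

```python
import math
import numpy as np

def wilks_sample_size(coverage: float, confidence: float) -> int:
    """Smallest N with 1 - coverage**N >= confidence: with probability at
    least `confidence`, the maximum of N i.i.d. samples exceeds the
    `coverage`-quantile of the true output distribution (one-sided
    nonparametric tolerance limit)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 1))

def net(x: np.ndarray) -> np.ndarray:
    # Stand-in for the verified network: any deterministic map to a scalar.
    return np.tanh(x @ W).ravel()

# Property input domain: the box [-1, 1]^3. Sampling it and taking the
# empirical output range gives a statistically grounded UNDER-estimate of
# the reachable output set, in the spirit of the approach described above.
n = wilks_sample_size(coverage=0.99, confidence=0.999)
xs = rng.uniform(-1.0, 1.0, size=(n, 3))
ys = net(xs)
print(f"N={n}, inner estimate of reachable outputs: [{ys.min():.4f}, {ys.max():.4f}]")
```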
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- A Robust Optimisation Perspective on Counterexample-Guided Repair of Neural Networks [2.82532357999662]
We show that counterexample-guided repair can be viewed as a robust optimisation algorithm.
We prove termination for more restrained machine learning models and disprove termination in a general setting.
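The robust-optimisation view corresponds to alternating an inner counterexample search with an outer repair step. Here is a minimal sketch under illustrative assumptions (a PGD-style search and a scalar-output property "f(x) <= threshold on an l-infinity ball"); the paper reasons about this loop's termination, not any particular implementation.

```python
import torch

def find_counterexample(model, x0, eps, threshold, steps=20):
    """Inner maximisation: search B_inf(x0, eps) for an input whose output
    exceeds `threshold`, i.e. a counterexample to the property."""
    delta = torch.zeros_like(x0, requires_grad=True)
    for _ in range(steps):
        grad, = torch.autograd.grad(model(x0 + delta).sum(), delta)
        delta = (delta + 0.25 * eps * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    x = (x0 + delta).detach()
    return x if model(x).item() > threshold else None

def cegis_repair(model, x0, eps, threshold, rounds=10):
    """Outer minimisation: fit the accumulated counterexamples, then search
    again -- counterexample-guided repair read as robust optimisation."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    cexs = []
    for _ in range(rounds):
        cex = find_counterexample(model, x0, eps, threshold)
        if cex is None:
            return True   # no counterexample found (empirically repaired)
        cexs.append(cex)
        for _ in range(50):  # push all known violations below the threshold
            opt.zero_grad()
            loss = torch.relu(model(torch.stack(cexs)) - threshold).sum()
            loss.backward()
            opt.step()
    return False  # the alternation did not terminate within the budget

# Example model: torch.nn.Sequential(Linear(4, 8), ReLU(), Linear(8, 1))
```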
arXiv Detail & Related papers (2023-01-26T19:00:02Z)
- Automated Repair of Neural Networks [0.26651200086513094]
We introduce a framework for repairing unsafe NNs w.r.t. a safety specification.
Our method is able to search for a new, safe NN representation by modifying only a few of its weight values.
We perform extensive experiments which demonstrate the capability of our proposed framework to yield safe NNs w.r.t. the safety specification.
arXiv Detail & Related papers (2022-07-17T12:42:24Z)
- Defensive Patches for Robust Recognition in the Physical World [111.46724655123813]
Data-end defense improves robustness by operating on input data instead of modifying the model.
Previous data-end defenses generalize poorly against diverse noise and transfer weakly across multiple models.
We propose a defensive patch generation framework that addresses these problems by helping models better exploit the input features they rely on.
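A data-end defense in this spirit can be sketched as optimizing a single input-space patch against several frozen models at once. The objective, patch placement, and training details below are illustrative assumptions, not the paper's framework.

```python
import torch

def train_defensive_patch(models, images, labels, patch_hw=8, iters=200):
    """Optimise a patch that, pasted onto a fixed corner of every image,
    raises true-class confidence for all given models. Training against
    several models targets the transferability problem mentioned above."""
    for m in models:
        m.requires_grad_(False)  # only the patch is trained
    patch = torch.zeros(1, images.shape[1], patch_hw, patch_hw, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.05)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(iters):
        x = images.clone()
        x[:, :, :patch_hw, :patch_hw] = patch.clamp(0.0, 1.0)  # paste the patch
        loss = sum(loss_fn(m(x), labels) for m in models)      # joint objective
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0.0, 1.0)
```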
arXiv Detail & Related papers (2022-04-13T07:34:51Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
Our DDF method outperforms state-of-the-art methods on four benchmark UDA object detection tasks, demonstrating its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural Networks [13.661704974188872]
We propose a novel repairing direction for deep neural networks (DNNs) at the block level.
We propose adversarial-aware spectrum analysis for vulnerable block localization.
We also propose the architecture-oriented search-based repairing that relaxes the targeted block to a continuous repairing search space.
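The localization step can be pictured with a simple stand-in: score each block by how much its activations shift under adversarial inputs relative to benign ones. This is only a schematic reading of "adversarial-aware spectrum analysis", with a hypothetical block_suspiciousness helper, not the paper's actual analysis.

```python
import torch

@torch.no_grad()
def block_suspiciousness(blocks, benign, adversarial):
    """Rank the blocks of a network (given as an ordered list of modules
    whose composition is the full model) by the relative activation shift
    each introduces between benign and adversarial inputs. The
    highest-scoring block becomes the candidate for block-level repair."""
    scores = []
    xb, xa = benign, adversarial
    for block in blocks:
        yb, ya = block(xb), block(xa)
        scores.append(((ya - yb).norm() / (yb.norm() + 1e-8)).item())
        xb, xa = yb, ya
    return scores
```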
arXiv Detail & Related papers (2021-11-26T06:35:15Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
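One generic shape such a defensive mechanism can take is server-side filtering of suspect updates before aggregation; the scoring rule below is an illustrative assumption, not the mechanism the paper designs.

```python
import numpy as np

def defended_aggregate(updates, trust_scores, keep_frac=0.8):
    """Average only the most trustworthy client updates. `updates` are
    flattened model deltas (one array per client); `trust_scores` might
    come from, e.g., validation on held-out server data."""
    order = np.argsort(trust_scores)[::-1]              # most trusted first
    kept = order[: max(1, int(len(updates) * keep_frac))]
    return np.mean([updates[i] for i in kept], axis=0)  # aggregated update
```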
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Provable Repair of Deep Neural Networks [8.55884254206878]
Deep Neural Networks (DNNs) have grown in popularity over the past decade and are now being used in safety-critical domains such as aircraft collision avoidance.
This paper tackles the problem of correcting a DNN once unsafe behavior is found.
We introduce the provable repair problem, which is the problem of repairing a network N to construct a new network N' that satisfies a given specification.
arXiv Detail & Related papers (2021-04-09T15:03:53Z)
- NNrepair: Constraint-based Repair of Neural Network Classifiers [10.129874872336762]
NNrepair is a constraint-based technique for repairing neural network classifiers.
NNrepair first uses fault localization to find potentially faulty network parameters.
It then performs repair using constraint solving to apply small modifications to the parameters to remedy the defects.
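The constraint-solving step can be made concrete at toy scale. The sketch below repairs a single localized output neuron with the z3 solver, asking for the smallest (L1) parameter change that fixes a failing input while preserving a passing one; NNrepair itself operates on whole classifiers with far richer constraints.

```python
from z3 import If, Optimize, Real, RealVal, Sum, sat

# Toy localized fault: one linear neuron y = w.x + b, with the property
# "y > 0" failing on fix_x and holding on keep_x.
w, b = [1.0, -2.0], 0.5
fix_x = [1.0, 1.0]    # y = -0.5: violates the property
keep_x = [2.0, 0.0]   # y = 2.5: already fine, must be preserved

d = [Real(f"d{i}") for i in range(len(w))]  # per-weight repair deltas

def out(x):
    """Neuron output after applying the candidate deltas."""
    return Sum([(RealVal(w[i]) + d[i]) * x[i] for i in range(len(w))]) + b

opt = Optimize()
for x in (fix_x, keep_x):
    opt.add(out(x) >= 0.01)   # small margin avoids strict inequalities
opt.minimize(Sum([If(di >= 0, di, -di) for di in d]))  # minimal L1 change

if opt.check() == sat:
    model = opt.model()
    print("weight deltas:", [model.eval(di) for di in d])
```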
arXiv Detail & Related papers (2021-03-23T13:44:01Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
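A label-free attack of this kind can be sketched as maximizing feature-space distortion relative to the clean input. This shows only the self-supervised signal; the training mechanism built on top of it is omitted, and the names here are illustrative.

```python
import torch

def self_supervised_attack(feature_extractor, x, eps=8 / 255, steps=10):
    """Craft perturbations without labels: maximise the feature-space
    distance between perturbed and clean inputs inside B_inf(x, eps).
    The resulting examples can then drive adversarial training."""
    with torch.no_grad():
        clean_feat = feature_extractor(x)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        dist = (feature_extractor(x + delta) - clean_feat).pow(2).sum()
        grad, = torch.autograd.grad(dist, delta)
        delta = (delta + eps / 4 * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()
```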
arXiv Detail & Related papers (2020-06-08T20:42:39Z)