DeepTPI: Test Point Insertion with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2206.06975v1
- Date: Tue, 7 Jun 2022 14:13:42 GMT
- Title: DeepTPI: Test Point Insertion with Deep Reinforcement Learning
- Authors: Zhengyuan Shi, Min Li, Sadaf Khan, Liuzheng Wang, Naixing Wang, Yu
Huang, Qiang Xu
- Abstract summary: Test point insertion (TPI) is a widely used technique for testability enhancement.
We propose a novel TPI approach based on deep reinforcement learning (DRL), named DeepTPI.
We show that DeepTPI significantly improves test coverage compared to the commercial DFT tool.
- Score: 6.357061090668433
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Test point insertion (TPI) is a widely used technique for testability
enhancement, especially for logic built-in self-test (LBIST) due to its
relatively low fault coverage. In this paper, we propose a novel TPI approach
based on deep reinforcement learning (DRL), named DeepTPI. Unlike previous
learning-based solutions that formulate the TPI task as a supervised-learning
problem, we train a novel DRL agent, instantiated as the combination of a graph
neural network (GNN) and a Deep Q-Learning network (DQN), to maximize the test
coverage improvement. Specifically, we model circuits as directed graphs and
design a graph-based value network to estimate the action values for inserting
different test points. The policy of the DRL agent is defined as selecting the
action with the maximum value. Moreover, we apply the general node embeddings
from a pre-trained model to enhance node features, and propose a dedicated
testability-aware attention mechanism for the value network. Experimental
results on circuits with various scales show that DeepTPI significantly
improves test coverage compared to the commercial DFT tool. The code of this
work is available at https://github.com/cure-lab/DeepTPI.
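As the abstract describes, the DRL agent's policy is simply to select the insertion action with the maximum estimated value. A minimal sketch of that greedy step, using a hypothetical stand-in scorer in place of the paper's graph-based GNN/DQN value network (node embeddings and actions below are illustrative assumptions, not the paper's implementation):

```python
# Greedy action selection over (node, test-point-type) candidates.
# The value network here is a toy stand-in; DeepTPI's actual scorer is a
# GNN-based value network run over the circuit graph.

def toy_value_network(node_embedding, action):
    # Hypothetical scorer: combines the node embedding with the action type.
    return sum(node_embedding) + (0.5 if action == "control" else 0.25)

def greedy_tpi_step(node_embeddings, actions=("control", "observe")):
    """Return the (node, action, value) triple with the maximum estimated value."""
    best = None
    for node, emb in node_embeddings.items():
        for act in actions:
            q = toy_value_network(emb, act)
            if best is None or q > best[2]:
                best = (node, act, q)
    return best

# Toy circuit with three gates and 2-dimensional node embeddings.
embs = {"g1": [0.1, 0.2], "g2": [0.4, 0.1], "g3": [0.0, 0.3]}
node, action, q = greedy_tpi_step(embs)
```

The point of the sketch is only the policy structure: score every candidate test-point insertion, then act greedily on the maximum value.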
Related papers
- BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping [64.8477128397529]
We propose a test-time adaptation framework that bridges training-required and training-free approaches.
We maintain a light-weight key-value memory for feature retrieval from instance-agnostic historical samples and instance-aware boosting samples.
We theoretically justify the rationality behind our method and empirically verify its effectiveness on both the out-of-distribution and the cross-domain datasets.
arXiv Detail & Related papers (2024-10-20T15:58:43Z)
- IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency [20.61046457594186]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
This paper proposes a simple yet effective input-level backdoor detection (dubbed IBD-PSC) to filter out malicious testing images.
arXiv Detail & Related papers (2024-05-16T03:19:52Z)
- Test-Time Training on Graphs with Large Language Models (LLMs) [68.375487369596]
Test-Time Training (TTT) has been proposed as a promising approach to train Graph Neural Networks (GNNs).
Inspired by the great annotation ability of Large Language Models (LLMs) on Text-Attributed Graphs (TAGs), we propose to enhance the test-time training on graphs with LLMs as annotators.
A two-stage training strategy is designed to tailor the test-time model with the limited and noisy labels.
arXiv Detail & Related papers (2024-04-21T08:20:02Z)
- Investigation and rectification of NIDS datasets and standardized feature set derivation for network attack detection with graph neural networks [0.0]
Graph Neural Networks (GNNs) provide an opportunity to analyze network topology along with flow features.
In this paper we inspect different versions of ToN-IoT dataset and point out inconsistencies in some versions.
We propose a new standardized and compact set of flow features which are derived solely from NetFlowv5-compatible data.
arXiv Detail & Related papers (2022-12-26T07:42:25Z)
- Simple Techniques Work Surprisingly Well for Neural Network Test Prioritization and Active Learning (Replicability Study) [4.987581730476023]
Test Input Prioritizers (TIP) for Deep Neural Networks (DNN) are an important technique to handle the typically very large test datasets efficiently.
Feng et al. propose DeepGini, a very fast and simple TIP, and show that it outperforms more elaborate techniques such as neuron and surprise coverage.
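DeepGini's simplicity is easy to see in code: it scores each test input by the Gini impurity of the DNN's softmax output, 1 - Σ p_i², and prioritizes the least confident inputs first. A minimal sketch in plain Python (the softmax outputs are assumed given; this is an illustration of the scoring rule, not Feng et al.'s implementation):

```python
# DeepGini score: Gini impurity of a softmax distribution.
# Near-uniform outputs (uncertain predictions) score highest and are
# prioritized for testing.

def deepgini_score(probs):
    return 1.0 - sum(p * p for p in probs)

def prioritize(test_softmax_outputs):
    """Return test indices sorted from most to least uncertain."""
    scores = [deepgini_score(p) for p in test_softmax_outputs]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

outputs = [
    [0.98, 0.01, 0.01],  # confident prediction -> low score
    [0.34, 0.33, 0.33],  # near-uniform -> high score
    [0.70, 0.20, 0.10],  # in between
]
order = prioritize(outputs)  # most uncertain input first
```

Because the score needs only the model's output probabilities, it is far cheaper than coverage-based prioritizers that instrument internal neurons.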
arXiv Detail & Related papers (2022-05-02T05:47:34Z)
- End-to-End Learning of Deep Kernel Acquisition Functions for Bayesian Optimization [39.56814839510978]
We propose a meta-learning method for Bayesian optimization with neural network-based kernels.
Our model is trained by a reinforcement learning framework from multiple tasks.
In experiments using three text document datasets, we demonstrate that the proposed method achieves better BO performance than the existing methods.
arXiv Detail & Related papers (2021-11-01T00:42:31Z)
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
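The bandit formulation above can be sketched with a simple (context-free) epsilon-greedy learner standing in for the paper's reinforcement-learning policy network; the models, rewards, and reward values below are all hypothetical:

```python
import random

# Each "arm" is a detection model of increasing complexity at a higher
# HEC layer. The learner picks a model per round and observes a reward
# that trades off detection accuracy against the cost of a heavier model.
# This epsilon-greedy learner is a simplification: it ignores context,
# unlike the contextual-bandit policy in the paper.

class EpsilonGreedySelector:
    def __init__(self, n_models, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_models
        self.values = [0.0] * n_models  # running mean reward per model

    def select(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(0)
sel = EpsilonGreedySelector(n_models=3)
for _ in range(200):
    arm = sel.select()
    # hypothetical noisy rewards: model 1 has the best accuracy/cost trade-off
    reward = [0.4, 0.8, 0.6][arm] + random.uniform(-0.05, 0.05)
    sel.update(arm, reward)
```

After enough rounds the learner concentrates its selections on the model whose running mean reward is highest, which is the essence of the adaptive selection scheme.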
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Edge-Detect: Edge-centric Network Intrusion Detection using Deep Neural Network [0.0]
Edge nodes are crucial for detection against multitudes of cyber attacks on Internet-of-Things endpoints.
We develop a novel light, fast and accurate 'Edge-Detect' model, which detects Denial of Service attack on edge nodes using DLM techniques.
arXiv Detail & Related papers (2021-02-03T04:24:34Z)
- Learning Reasoning Strategies in End-to-End Differentiable Proving [50.9791149533921]
Conditional Theorem Provers learn optimal rule selection strategy via gradient-based optimisation.
We show that Conditional Theorem Provers are scalable and yield state-of-the-art results on the CLUTRR dataset.
arXiv Detail & Related papers (2020-07-13T16:22:14Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective for the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.