Is Neuron Coverage Needed to Make Person Detection More Robust?
- URL: http://arxiv.org/abs/2204.10027v1
- Date: Thu, 21 Apr 2022 11:23:33 GMT
- Title: Is Neuron Coverage Needed to Make Person Detection More Robust?
- Authors: Svetlana Pavlitskaya, Şiyar Yıkmış and J. Marius Zöllner
- Abstract summary: In this work, we apply coverage-guided testing (CGT) to the task of person detection in crowded scenes.
The proposed pipeline uses YOLOv3 for person detection and includes finding bugs via sampling and mutation.
We have found no evidence that the investigated coverage metrics can be advantageously used to improve robustness.
- Score: 3.395452700023097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing use of deep neural networks (DNNs) in safety- and
security-critical areas like autonomous driving raises the need for their
systematic testing. Coverage-guided testing (CGT) is an approach that applies
mutation or fuzzing according to a predefined coverage metric to find inputs
that cause misbehavior. With the introduction of a neuron coverage metric, CGT
has also recently been applied to DNNs. In this work, we apply CGT to the task
of person detection in crowded scenes. The proposed pipeline uses YOLOv3 for
person detection and includes finding DNN bugs via sampling and mutation, and
subsequent DNN retraining on the updated training set. For a mutated image to
count as a bug, we require it to cause a significant performance drop compared
to the clean input. In accordance with CGT, we also consider an additional
requirement of increased coverage in the bug definition. In order to explore several types
of robustness, our approach includes natural image transformations,
corruptions, and adversarial examples generated with the Daedalus attack. The
proposed framework has uncovered several thousand cases of incorrect DNN
behavior. The relative change in mAP of the retrained models averaged between
26.21% and 64.24%, depending on the robustness type.
However, we have found no evidence that the investigated coverage metrics can
be advantageously used to improve robustness.
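As a rough illustration of the pipeline described in the abstract, the sketch below shows a minimal, hypothetical version of the bug-finding loop. The helper names (run_detector, mutate_image) and both thresholds are assumptions made for this sketch, not the authors' actual code; run_detector stands in for a YOLOv3 wrapper that returns a per-image AP score together with the network's neuron activations.

```python
AP_DROP_THRESHOLD = 0.10        # assumed: relative AP drop that counts as "significant"
ACTIVATION_THRESHOLD = 0.25     # assumed: a neuron counts as covered above this value


def neuron_coverage(activations, threshold=ACTIVATION_THRESHOLD):
    """Basic neuron coverage: fraction of neurons firing above a fixed threshold."""
    fired = sum(1 for a in activations if a > threshold)
    return fired / max(len(activations), 1)


def find_bugs(images, run_detector, mutate_image, require_coverage_gain=True):
    """Collect mutated images that cause a significant AP drop and, in the
    CGT variant, also increase neuron coverage relative to the clean input."""
    bugs = []
    for img in images:
        clean_ap, clean_acts = run_detector(img)      # per-image AP + activations
        mutant = mutate_image(img)                     # natural transform, corruption, or attack
        mutant_ap, mutant_acts = run_detector(mutant)

        ap_drop = (clean_ap - mutant_ap) / max(clean_ap, 1e-6)
        coverage_up = neuron_coverage(mutant_acts) > neuron_coverage(clean_acts)

        if ap_drop > AP_DROP_THRESHOLD and (coverage_up or not require_coverage_gain):
            bugs.append(mutant)
    return bugs
```

Bugs found this way would be added to the training set before retraining the detector; the paper's negative result is that enabling the extra coverage requirement (require_coverage_gain=True in the sketch) brought no measurable robustness benefit over mutation-based search alone.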
Related papers
- Exact Certification of (Graph) Neural Networks Against Label Poisoning [50.87615167799367]
We introduce an exact certification method for label flipping in Graph Neural Networks (GNNs).
We apply our method to certify a broad range of GNN architectures in node classification tasks.
Our work presents the first exact certificate against a poisoning attack ever derived for neural networks.
arXiv Detail & Related papers (2024-11-30T17:05:12Z)
- Augmented Neural Fine-Tuning for Efficient Backdoor Purification [16.74156528484354]
Recent studies have revealed the vulnerability of deep neural networks (DNNs) to various backdoor attacks.
We propose Neural mask Fine-Tuning (NFT) with the aim of optimally re-organizing the neuron activities.
NFT relaxes the trigger synthesis process and eliminates the requirement of the adversarial search module.
arXiv Detail & Related papers (2024-07-14T02:36:54Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- OccRob: Efficient SMT-Based Occlusion Robustness Verification of Deep Neural Networks [7.797299214812479]
Occlusion is a prevalent and easily realizable semantic perturbation to deep neural networks (DNNs).
It can fool a DNN into misclassifying an input image by occluding some segments, possibly resulting in severe errors.
Most existing robustness verification approaches for DNNs are focused on non-semantic perturbations.
arXiv Detail & Related papers (2023-01-27T18:54:00Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Fighting COVID-19 in the Dark: Methodology for Improved Inference Using Homomorphically Encrypted DNN [3.1959970303072396]
Homomorphic encryption (HE) has been used as a method to enable analytics while addressing privacy concerns.
There are several challenges related to the use of HE, including size limitations and the lack of support for some operation types.
We propose a structured methodology to replace ReLU with a quadratic activation (a toy illustration of this substitution appears after this list).
arXiv Detail & Related papers (2021-11-05T10:04:15Z)
- Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that incorporates some bias in order to reduce variance and avoid unstable, possibly unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
- Accelerating Robustness Verification of Deep Neural Networks Guided by Target Labels [8.9960048245668]
Deep Neural Networks (DNNs) have become key components of many safety-critical applications such as autonomous driving and medical diagnosis.
DNNs suffer from poor robustness because of their susceptibility to adversarial examples, where small perturbations to an input result in misprediction.
We propose a novel approach that can accelerate the robustness verification techniques by guiding the verification with target labels.
arXiv Detail & Related papers (2020-07-16T00:51:52Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups (a simplified sketch of the gradient-norm idea appears after this list).
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
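For the homomorphically encrypted DNN entry above, here is a toy illustration of the ReLU-to-quadratic substitution it mentions, so that a small network uses only additions and multiplications; the Square module and the example layer sizes are illustrative choices for this sketch, not taken from that paper.

```python
import torch.nn as nn


class Square(nn.Module):
    """Quadratic activation x -> x*x, expressible with HE multiplications."""
    def forward(self, x):
        return x * x


def make_he_friendly(model: nn.Sequential) -> nn.Sequential:
    """Build a new Sequential that reuses the original layers but swaps every
    ReLU for the quadratic activation (toy version of the substitution idea)."""
    return nn.Sequential(*[Square() if isinstance(m, nn.ReLU) else m for m in model])


# Example: a tiny classifier before and after the swap.
plain = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
he_friendly = make_he_friendly(plain)
```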
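For the GraN entry above, a simplified take on gradient-norm scoring: the published detector feeds layer-wise gradient norms to a small classifier, whereas this sketch only thresholds the total parameter-gradient norm, and the threshold value is an arbitrary placeholder.

```python
import torch
import torch.nn.functional as F


def gradient_norm_score(model: torch.nn.Module, x: torch.Tensor) -> float:
    """Norm of the loss gradient w.r.t. the parameters, using the model's own
    prediction as the label, so no ground truth is needed at test time."""
    model.zero_grad()
    logits = model(x)
    pred = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pred)
    loss.backward()
    squared = sum((p.grad.detach() ** 2).sum()
                  for p in model.parameters() if p.grad is not None)
    return torch.sqrt(squared).item()


def looks_suspicious(model, x, threshold=5.0):
    # Larger gradient norms tend to accompany misclassified or adversarial
    # inputs; the threshold here is a placeholder, not a published value.
    return gradient_norm_score(model, x) > threshold
```

In practice the threshold (or the small classifier over layer-wise norms used in the paper) would be calibrated on held-out clean and perturbed examples.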