DeepROCK: Error-controlled interaction detection in deep neural networks
- URL: http://arxiv.org/abs/2309.15319v1
- Date: Tue, 26 Sep 2023 23:58:19 GMT
- Title: DeepROCK: Error-controlled interaction detection in deep neural networks
- Authors: Winston Chen, William Stafford Noble, Yang Young Lu
- Abstract summary: The complexity of deep neural networks (DNNs) makes them powerful but also makes them challenging to interpret.
Existing methods attempt to reason about the internal mechanism of DNNs by identifying feature interactions that influence prediction outcomes.
We introduce a method, called DeepROCK, to address this limitation by using knockoffs, which are dummy variables that are designed to mimic the dependence structure of a given set of features.
- Score: 5.095097384893415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The complexity of deep neural networks (DNNs) makes them powerful but also
makes them challenging to interpret, hindering their applicability in
error-intolerant domains. Existing methods attempt to reason about the internal
mechanism of DNNs by identifying feature interactions that influence prediction
outcomes. However, such methods typically lack a systematic strategy to
prioritize interactions while controlling confidence levels, making them
difficult to apply in practice for scientific discovery and hypothesis
validation. In this paper, we introduce a method, called DeepROCK, to address
this limitation by using knockoffs, which are dummy variables that are designed
to mimic the dependence structure of a given set of features while being
conditionally independent of the response. Together with a novel DNN
architecture involving a pairwise-coupling layer, DeepROCK jointly controls the
false discovery rate (FDR) and maximizes statistical power. In addition, we
identify a challenge in correctly controlling FDR using off-the-shelf feature
interaction importance measures. DeepROCK overcomes this challenge via a
calibration procedure, applied to existing interaction importance measures,
that keeps the FDR controlled at a target level. Finally, we validate the
effectiveness of DeepROCK through extensive experiments on simulated and real
datasets.
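To make the architectural idea concrete, below is a minimal PyTorch sketch of a pairwise-coupling layer. It follows the spirit of the abstract's description (each feature competes head-to-head with its knockoff copy through its own weight), not the paper's exact parameterization; the class name, initialization scale, and the importance statistic in the closing comment are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PairwiseCouplingLayer(nn.Module):
    """Couples each original feature with its knockoff copy.

    Each feature x_j and its knockoff x~_j are connected to a single
    unit through their own scalar weights z_j and z~_j, so the two
    copies compete head-to-head; downstream layers then see one value
    per feature. A sketch of the idea, not the paper's exact layer.
    """
    def __init__(self, p):
        super().__init__()
        self.z = nn.Parameter(torch.randn(p) * 0.1)        # weights for originals
        self.z_tilde = nn.Parameter(torch.randn(p) * 0.1)  # weights for knockoffs

    def forward(self, x, x_knockoff):
        # Element-wise competition between each feature and its knockoff.
        return self.z * x + self.z_tilde * x_knockoff

# An importance contrast can then be read off the coupled weights,
# e.g. W_j = |z_j| - |z~_j|, and fed to the knockoff filter sketched next.
```

The intuition behind the coupling is that a feature can only earn a large positive statistic by beating its own knockoff, which is what makes the downstream FDR calculation meaningful.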
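The selection step itself rests on the standard knockoff filter of Barber and Candès, which underlies the abstract's FDR guarantee. Below is a minimal NumPy sketch under that assumption; in DeepROCK the statistics would come from its calibrated interaction importance measures, whereas the toy scores here are made up for illustration.

```python
import numpy as np

def knockoff_threshold(W, q=0.1, offset=1):
    """Knockoff(+) threshold: the smallest t such that the estimated
    FDP, (offset + #{j : W_j <= -t}) / max(1, #{j : W_j >= t}), is <= q.

    W : array of knockoff statistics, one per feature (or interaction),
        e.g. W_j = importance(X_j) - importance(X~_j), where X~_j is
        the knockoff copy of X_j. A large positive W_j is evidence
        that X_j (or the interaction) is genuinely important.
    """
    ts = np.sort(np.abs(W[W != 0]))
    for t in ts:
        fdp_hat = (offset + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold achieves the target FDR

def select_interactions(importance_orig, importance_knockoff, q=0.1):
    """Select indices whose statistic clears the knockoff(+) threshold."""
    W = np.asarray(importance_orig) - np.asarray(importance_knockoff)
    t = knockoff_threshold(W, q=q)
    return np.where(W >= t)[0]

# Toy usage: 20 truly important interactions with inflated scores, 180 nulls.
rng = np.random.default_rng(0)
imp_true = np.concatenate([rng.normal(3.0, 1.0, 20), rng.normal(0.0, 1.0, 180)])
imp_ko = rng.normal(0.0, 1.0, 200)  # knockoffs should look null everywhere
print(select_interactions(np.abs(imp_true), np.abs(imp_ko), q=0.2))
```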
Related papers
- Causal GNNs: A GNN-Driven Instrumental Variable Approach for Causal Inference in Networks (arXiv, 2024-09-13)
  CgNN is a novel approach to mitigate hidden confounder bias and improve causal effect estimation.
  Results demonstrate that CgNN offers a robust GNN-driven IV framework for causal inference in complex network data.
- Targeted Cause Discovery with Data-Driven Learning (arXiv, 2024-08-29)
  We propose a novel machine learning approach for inferring the causal variables of a target variable from observations.
  We employ a neural network trained to identify causality through supervised learning on simulated data.
  Empirical results demonstrate the effectiveness of the method in identifying causal relationships within large-scale gene regulatory networks.
- DeepFDR: A Deep Learning-based False Discovery Rate Control Method for Neuroimaging Data (arXiv, 2023-10-20)
  Voxel-based multiple testing is widely used in neuroimaging data analysis.
  Traditional FDR control methods, such as the Benjamini-Hochberg procedure (see the sketch after this list), ignore the spatial dependence among voxel-based tests.
  DeepFDR uses unsupervised deep learning-based image segmentation to address the voxel-based multiple testing problem.
- Adversarial Training Using Feedback Loops (arXiv, 2023-08-23)
  Deep neural networks (DNNs) are highly susceptible to adversarial attacks due to limited generalizability.
  This paper proposes a new robustification approach based on control theory: an adversarial training method built on a feedback control architecture, called Feedback Looped Adversarial Training (FLAT).
- Neurosymbolic hybrid approach to driver collision warning (arXiv, 2022-03-28)
  There are two main algorithmic approaches to autonomous driving systems.
  Deep learning alone achieves state-of-the-art results in many areas, but deep models can be very difficult to debug when they fail.
- Neural Architecture Dilation for Adversarial Robustness (arXiv, 2021-08-16)
  A shortcoming of convolutional neural networks (CNNs) is their vulnerability to adversarial attacks.
  This paper aims to improve the adversarial robustness of backbone CNNs that already have satisfactory accuracy.
  With minimal computational overhead, the dilated architecture is designed to preserve the standard performance of the backbone CNN.
- Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids (arXiv, 2021-02-17)
  False data injection attacks (FDIAs) pose a significant security threat to power system state estimation.
  Recent studies have proposed machine learning (ML) techniques, particularly deep neural networks (DNNs), for FDIA detection.
- And/or trade-off in artificial neurons: impact on adversarial robustness (arXiv, 2021-02-15)
  The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
  We define AND-like neurons and propose measures to increase their proportion in the network.
  Experimental results on the MNIST dataset suggest that the approach holds promise as a direction for further exploration.
- A Compact Deep Learning Model for Face Spoofing Detection (arXiv, 2021-01-12)
  Presentation attack detection (PAD) has received significant attention from the research community.
  We address the problem by fusing both wide and deep features in a unified neural architecture.
  Evaluation is performed on spoofing datasets such as ROSE-Youtu, SiW, and NUAA Imposter.
- Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness (arXiv, 2020-04-30)
  We use mode connectivity to study the adversarial robustness of deep neural networks.
  Our experiments cover various types of adversarial attacks applied to different network architectures and datasets.
  Our results suggest that mode connectivity offers a holistic and practical tool for evaluating and improving adversarial robustness.
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network (arXiv, 2020-03-04)
  We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
  We scale training with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
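As a point of reference for the DeepFDR entry, the "traditional" FDR control it contrasts with is typically the Benjamini-Hochberg step-up procedure, sketched below. This is the textbook method operating on per-test p-values, not DeepFDR's approach; the function name and toy data are illustrative choices. Its validity assumes independence (or positive dependence) across tests, exactly the assumption that spatially correlated voxel-wise tests strain.

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Sort the p-values, find the largest k with p_(k) <= k * q / m,
    and reject all hypotheses with the k smallest p-values.
    """
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = q * np.arange(1, m + 1) / m
    below = pvals[order] <= thresholds
    if not below.any():
        return np.array([], dtype=int)  # no discoveries
    k = np.max(np.where(below)[0])      # largest index passing the step-up test
    return order[: k + 1]               # indices of rejected hypotheses

# Toy usage: reject at FDR level 0.1.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, q=0.1))  # indices of rejected tests
```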
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.