Information Removal at the bottleneck in Deep Neural Networks
- URL: http://arxiv.org/abs/2210.00891v1
- Date: Fri, 30 Sep 2022 14:20:21 GMT
- Title: Information Removal at the bottleneck in Deep Neural Networks
- Authors: Enzo Tartaglione
- Abstract summary: We propose IRENE, a method to achieve information removal at the bottleneck of deep neural networks.
Experiments on a synthetic dataset and on CelebA validate the effectiveness of the proposed approach.
- Score: 3.1473798197405944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models are nowadays broadly deployed to solve an incredibly
large variety of tasks. Commonly, leveraging the availability of "big data",
deep neural networks are trained as black boxes, minimizing an objective
function at the output. This, however, does not allow control over the
propagation of specific features through the model, such as gender or race,
when solving an uncorrelated task. This raises issues both in the privacy
domain (considering the propagation of unwanted information) and in that of bias
(considering that these features may be used to solve the given task).
In this work we propose IRENE, a method to achieve information removal at the
bottleneck of deep neural networks, which explicitly minimizes the estimated
mutual information between the features to be kept "private" and the target.
Experiments on a synthetic dataset and on CelebA validate the effectiveness of
the proposed approach, and open the road towards the development of approaches
guaranteeing information removal in deep neural networks.
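The quantity the abstract describes minimizing, the mutual information between a bottleneck feature and a private attribute, can be illustrated with a toy estimator. Note that IRENE itself relies on a learned (neural) estimate of mutual information; the histogram-based estimator below, and all variable names in it, are a simplified sketch of the penalty term being minimized, not the paper's implementation.

```python
import numpy as np

def discrete_mutual_info(z, s, bins=8):
    """Histogram-based estimate of I(Z; S) in nats, for a scalar
    bottleneck feature z and a discrete private attribute s."""
    # Discretize the continuous feature into equal-width bins.
    edges = np.linspace(z.min(), z.max(), bins - 1)
    z_bin = np.digitize(z, edges)  # indices in 0..bins-1
    joint = np.zeros((bins, int(s.max()) + 1))
    for zb, sv in zip(z_bin, s.astype(int)):
        joint[zb, sv] += 1
    joint /= joint.sum()
    pz = joint.sum(axis=1, keepdims=True)  # marginal of Z
    ps = joint.sum(axis=0, keepdims=True)  # marginal of S
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pz @ ps)[nz])).sum())

# A feature that leaks the private attribute has high MI;
# an independent feature has MI near zero.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=5000)              # private binary attribute
z_leaky = s + 0.1 * rng.standard_normal(5000)  # leaks the attribute
z_clean = rng.standard_normal(5000)            # independent of it

mi_leaky = discrete_mutual_info(z_leaky, s)
mi_clean = discrete_mutual_info(z_clean, s)
```

In a training loop, such an estimate would enter the objective as a weighted penalty, e.g. `loss = task_loss + lam * mi_estimate`, driving the bottleneck to discard the private attribute while still solving the task.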
Related papers
- Feature Selection for Network Intrusion Detection [3.7414804164475983]
We present a novel information-theoretic method that facilitates the exclusion of non-informative features when detecting network intrusions.
The proposed method is based on function approximation using a neural network, which enables a version of our approach that incorporates a recurrent layer.
arXiv Detail & Related papers (2024-11-18T14:25:55Z) - Perturbation on Feature Coalition: Towards Interpretable Deep Neural Networks [0.1398098625978622]
The "black box" nature of deep neural networks (DNNs) compromises their transparency and reliability.
We introduce a perturbation-based interpretation guided by feature coalitions, which leverages deep information of the network to extract correlated features.
arXiv Detail & Related papers (2024-08-23T22:44:21Z) - Zonotope Domains for Lagrangian Neural Network Verification [102.13346781220383]
We decompose the problem of verifying a deep neural network into the verification of many 2-layer neural networks.
Our technique yields bounds that improve upon both linear programming and Lagrangian-based verification techniques.
arXiv Detail & Related papers (2022-10-14T19:31:39Z) - Explainable Deep Belief Network based Auto encoder using novel Extended
Garson Algorithm [6.228766191647919]
We develop an algorithm to explain the Deep Belief Network based Auto-encoder (DBNA).
It is used to determine the contribution of each input feature in the DBN.
Important features identified by this method are compared against those obtained by the Wald chi-square (chi2) test.
arXiv Detail & Related papers (2022-07-18T10:44:02Z) - Efficacy of Bayesian Neural Networks in Active Learning [11.609770399591516]
We show that Bayesian neural networks are more efficient than ensemble based techniques in capturing uncertainty.
Our findings also reveal some key drawbacks of the ensemble techniques, which were recently shown to be more effective than Monte Carlo dropout.
arXiv Detail & Related papers (2021-04-02T06:02:11Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
arXiv Detail & Related papers (2020-03-31T16:47:20Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - Forgetting Outside the Box: Scrubbing Deep Networks of Information
Accessible from Input-Output Observations [143.3053365553897]
We describe a procedure for removing dependency on a cohort of training data from a trained deep network.
We introduce a new bound on how much information can be extracted per query about the forgotten cohort.
We exploit the connections between the activation and weight dynamics of a DNN inspired by Neural Tangent Kernels to compute the information in the activations.
arXiv Detail & Related papers (2020-03-05T23:17:35Z) - Hold me tight! Influence of discriminative features on deep network
boundaries [63.627760598441796]
We propose a new perspective that relates dataset features to the distance of samples to the decision boundary.
This enables us to carefully tweak the position of the training samples and measure the induced changes on the boundaries of CNNs trained on large-scale vision datasets.
arXiv Detail & Related papers (2020-02-15T09:29:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.