Poster: Link between Bias, Node Sensitivity and Long-Tail Distribution in trained DNNs
- URL: http://arxiv.org/abs/2303.16589v2
- Date: Mon, 3 Apr 2023 10:42:20 GMT
- Title: Poster: Link between Bias, Node Sensitivity and Long-Tail Distribution in trained DNNs
- Authors: Mahum Naseer and Muhammad Shafique
- Abstract summary: Training datasets with long-tail distribution pose a challenge for deep neural networks (DNNs).
This work identifies the node bias that leads to a varying sensitivity of the nodes for different output classes.
We support our reasoning using an empirical case study of the networks trained on a real-world dataset.
- Score: 12.404169549562523
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Owing to their remarkable learning (and relearning) capabilities, deep neural
networks (DNNs) find use in numerous real-world applications. However, the
learning of these data-driven machine learning models is generally only as good
as the data available to them for training. Hence, training datasets with
long-tail distribution pose a challenge for DNNs, since the DNNs trained on
them may provide a varying degree of classification performance across
different output classes. While the overall bias of such networks is already
highlighted in existing works, this work identifies the node bias that leads to
a varying sensitivity of the nodes for different output classes. To the best of
our knowledge, this is the first work highlighting this unique challenge in
DNNs, discussing its probable causes, and providing open challenges for this
new research direction. We support our reasoning using an empirical case study
of the networks trained on a real-world dataset.
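As a purely illustrative sketch (not the paper's code), the following probes per-class node sensitivity in a toy classifier; the model, the synthetic long-tail data, and the use of mean absolute activation as the sensitivity measure are all assumptions made for the example.

```python
# Hypothetical sketch: probe how strongly each hidden node of a classifier
# responds to inputs from different output classes of a long-tail dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained DNN: a small MLP with one hidden layer.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 5))
model.eval()

# Synthetic long-tail data: class 0 is frequent, class 4 is rare.
counts = [500, 200, 80, 30, 10]
xs = torch.cat([torch.randn(n, 20) + c for c, n in enumerate(counts)])
ys = torch.cat([torch.full((n,), c) for c, n in enumerate(counts)])

# Capture hidden-layer activations with a forward hook.
activations = {}
model[1].register_forward_hook(lambda m, i, o: activations.update(hidden=o.detach()))

with torch.no_grad():
    model(xs)

# Per-class node sensitivity: mean absolute activation of each hidden node over
# the inputs of one class; nodes whose profile varies strongly across classes
# are "biased" in the sense discussed in the abstract.
for c, n in enumerate(counts):
    per_node = activations["hidden"][ys == c].abs().mean(dim=0)
    print(f"class {c} ({n} samples): mean node sensitivity = {per_node.mean().item():.3f}")
```

Mean absolute activation is only one possible proxy here; gradients of node outputs or formal node-robustness measures would be equally reasonable choices for such a probe.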
Related papers
- IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks [68.6374698896505]
Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of applications.
Privacy leakage may happen when the trained GNNs are deployed and exposed to potential attackers.
We propose a principled framework named IDEA to achieve flexible and certified unlearning for GNNs.
arXiv Detail & Related papers (2024-07-28T04:59:59Z)
- Making a Spiking Net Work: Robust brain-like unsupervised machine learning [0.0]
Spiking Neural Networks (SNNs) are an alternative to Artificial Neural Networks (ANNs).
SNNs struggle with dynamical stability and cannot match the accuracy of ANNs.
We show how an SNN can overcome many of the shortcomings that have been identified in the literature.
arXiv Detail & Related papers (2022-08-02T02:10:00Z)
- Generalizing Neural Networks by Reflecting Deviating Data in Production [15.498447555957773]
We present a runtime approach that mitigates DNN mis-predictions caused by unexpected runtime inputs to the DNN.
We use a distribution analyzer based on the distance metric learned by a Siamese network to identify "unseen" semantically-preserving inputs.
Our approach transforms those unexpected inputs into inputs from the training set that are identified as having similar semantics.
arXiv Detail & Related papers (2021-10-06T13:05:45Z)
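A minimal, hypothetical sketch of the distance-based idea behind the entry above; the encoder `embed`, the threshold, and the toy data are assumptions standing in for one branch of a trained Siamese network, not the authors' implementation.

```python
# Hypothetical sketch: flag runtime inputs whose embedding is far from every
# training embedding, and map flagged inputs to their nearest training example.
import torch
import torch.nn as nn

torch.manual_seed(0)

embed = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 4))  # toy encoder
embed.eval()

train_x = torch.randn(100, 20)  # stand-in for the training set
with torch.no_grad():
    train_emb = embed(train_x)

def reflect(x, threshold=2.0):
    """Return (is_unseen, nearest training input) for a single runtime input."""
    with torch.no_grad():
        dists = torch.cdist(embed(x.unsqueeze(0)), train_emb).squeeze(0)
    nearest = dists.argmin()
    return bool(dists[nearest] > threshold), train_x[nearest]

is_unseen, substitute = reflect(torch.randn(20) * 3)
print(is_unseen, substitute.shape)
```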
- EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks [29.974829042502375]
We develop a framework named EDITS to mitigate the bias in attributed networks.
EDITS works in a model-agnostic manner, which means that it is independent of the specific GNNs applied for downstream tasks.
arXiv Detail & Related papers (2021-08-11T14:07:01Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- A Practical Tutorial on Graph Neural Networks [49.919443059032226]
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI).
This tutorial exposes the power and novelty of GNNs to AI practitioners.
arXiv Detail & Related papers (2020-10-11T12:36:17Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Neural Additive Models: Interpretable Machine Learning with Neural Nets [77.66871378302774]
Deep neural networks (DNNs) are powerful black-box predictors that have achieved impressive performance on a wide variety of tasks.
We propose Neural Additive Models (NAMs) which combine some of the expressivity of DNNs with the inherent intelligibility of generalized additive models.
NAMs learn a linear combination of neural networks that each attend to a single input feature.
arXiv Detail & Related papers (2020-04-29T01:28:32Z)
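The additive structure described in the entry above can be sketched in a few lines; this is a rough illustration under assumed layer sizes and choices, not the authors' released NAM implementation.

```python
# Hypothetical sketch of a Neural Additive Model: one small sub-network per
# input feature, with the prediction formed as the sum of their outputs.
import torch
import torch.nn as nn

class TinyNAM(nn.Module):
    def __init__(self, num_features: int, hidden: int = 8):
        super().__init__()
        self.feature_nets = nn.ModuleList(
            nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
            for _ in range(num_features)
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        # x: (batch, num_features); each column goes through its own sub-network,
        # so each feature's contribution to the output is directly inspectable.
        parts = [net(x[:, i:i + 1]) for i, net in enumerate(self.feature_nets)]
        return torch.cat(parts, dim=1).sum(dim=1, keepdim=True) + self.bias

model = TinyNAM(num_features=4)
print(model(torch.randn(3, 4)).shape)  # torch.Size([3, 1])
```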
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.