Bias Loss for Mobile Neural Networks
- URL: http://arxiv.org/abs/2107.11170v2
- Date: Mon, 26 Jul 2021 14:41:21 GMT
- Title: Bias Loss for Mobile Neural Networks
- Authors: Lusine Abrahamyan, Valentin Ziatchin, Yiming Chen and Nikos
Deligiannis
- Abstract summary: Compact convolutional neural networks (CNNs) have witnessed exceptional improvements in performance in recent years.
The diverse and even abundant features captured by the layers are an important characteristic of these successful CNNs.
In compact CNNs, due to the limited number of parameters, abundant features are unlikely to be obtained, and feature diversity becomes an essential characteristic.
- Score: 14.08237958134373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compact convolutional neural networks (CNNs) have witnessed exceptional
improvements in performance in recent years. However, they still fail to
provide the same predictive power as CNNs with a large number of parameters.
The diverse and even abundant features captured by the layers are an important
characteristic of these successful CNNs. However, differences in this
characteristic between large CNNs and their compact counterparts have rarely
been investigated. In compact CNNs, due to the limited number of parameters,
abundant features are unlikely to be obtained, and feature diversity becomes an
essential characteristic. Diverse features present in the activation maps
derived from a data point during model inference may indicate the presence of a
set of unique descriptors necessary to distinguish between objects of different
classes. In contrast, data points with low feature diversity may not provide a
sufficient number of unique descriptors to make a valid prediction; we refer to
them as random predictions. Random predictions can negatively impact the
optimization process and harm the final performance. This paper proposes
addressing the problem raised by random predictions by reshaping the standard
cross-entropy to make it biased toward data points with a limited number of
unique descriptive features. Our novel Bias Loss focuses the training on a set
of valuable data points and prevents the vast number of samples with poor
learning features from misleading the optimization process. Furthermore, to
show the importance of diversity, we present a family of SkipNet models whose
architectures are designed to boost the number of unique descriptors in the last
layers. Our SkipNet-M achieves 1% higher classification accuracy than
MobileNetV3 Large.
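The abstract does not spell out the weighting function, but the mechanism admits a compact sketch: score each sample's feature diversity from the variance of its activation maps, then scale the per-sample cross-entropy by a monotone function of that score so that diverse samples dominate the gradient. The min-max normalization, the exponential weight, and the hyperparameter beta below are illustrative assumptions rather than the authors' exact formulation.

```python
# Hedged sketch of a diversity-weighted ("bias") cross-entropy.
# The weight w = exp(beta * z) and the diversity score are assumptions;
# see the paper for the exact Bias Loss definition.
import torch
import torch.nn.functional as F

def feature_diversity(feats: torch.Tensor) -> torch.Tensor:
    """Per-sample diversity score z in [0, 1] for an (N, C, H, W) activation map.

    Uses the variance of each sample's activations, min-max normalized
    over the batch, as a proxy for the number of unique descriptors.
    """
    v = feats.flatten(1).var(dim=1)                    # (N,)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)  # (N,) in [0, 1]

def bias_loss(logits, targets, feats, beta: float = 0.3):
    z = feature_diversity(feats).detach()                    # no grad through the score
    w = torch.exp(beta * z)                                  # emphasize diverse samples
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-sample CE, (N,)
    return (w * ce).mean()
```

In training, `feats` would typically be taken from one of the network's last layers, where the abstract argues unique descriptors matter most.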
Related papers
- Scale-Invariant Object Detection by Adaptive Convolution with Unified Global-Local Context [3.061662434597098]
We propose an object detection model using a Switchable (adaptive) Atrous Convolutional Network (SAC-Net) based on the EfficientDet model.
The proposed SAC-Net encapsulates the benefits of both low-level and high-level features to achieve improved performance on multi-scale object detection tasks.
Our experiments on benchmark datasets demonstrate that the proposed SAC-Net outperforms the state-of-the-art models by a significant margin in terms of accuracy.
arXiv Detail & Related papers (2024-09-17T10:08:37Z)
- Debiasing Multimodal Models via Causal Information Minimization [65.23982806840182]
We study bias arising from confounders in a causal graph for multimodal data.
Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data.
We treat these features as confounder representations and apply methods motivated by causal theory to remove bias from models.
arXiv Detail & Related papers (2023-11-28T16:46:14Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
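A minimal sketch of this entry's idea, assuming the correctness target for each detection is its IoU with the matched ground-truth box; the paper's actual train-time loss may be formulated differently.

```python
# Illustrative auxiliary calibration penalty (not the paper's exact loss):
# push each box's class confidence toward a correctness proxy such as the
# IoU between the predicted box and its matched ground truth.
import torch

def calibration_aux_loss(confidences: torch.Tensor, ious: torch.Tensor) -> torch.Tensor:
    """confidences, ious: (N,) tensors in [0, 1] for N matched detections."""
    return ((confidences - ious.detach()) ** 2).mean()

# Hypothetical usage: total = detection_loss + lambda_cal * calibration_aux_loss(conf, iou)
```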
- Interpreting Bias in the Neural Networks: A Peek Into Representational Similarity [0.0]
We investigate the performance and internal representational structure of convolution-based neural networks trained on biased data.
We specifically study similarities in representations, using Centered Kernel Alignment (CKA) for different objective functions.
We note that without progressive representational similarities among the layers of a neural network, the performance is less likely to be robust.
arXiv Detail & Related papers (2022-11-14T22:17:14Z)
- Exact Feature Collisions in Neural Networks [2.1069219768528473]
Recent research suggests that deep neural networks can be extremely insensitive to changes of large magnitude.
In such cases, the features of two data points are said to approximately collide, leading to largely similar predictions.
We propose the Null-space search, a numerical approach that does not rely on heuristics, to create data points with colliding features for any input.
arXiv Detail & Related papers (2022-05-31T13:01:00Z)
- Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
arXiv Detail & Related papers (2022-02-08T16:09:12Z)
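A sketch in the spirit of this entry: treat per-channel feature statistics as random variables whose spread is estimated across the batch, then re-normalize features with resampled statistics during training. Where the module is inserted and how often it fires are assumptions, not necessarily the authors' design.

```python
# Hedged sketch: synthesize feature statistics by perturbing per-channel
# mean/std with noise scaled by their batch-level uncertainty.
import torch

def perturb_feature_stats(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """x: (N, C, H, W) feature map; returns features with resampled statistics."""
    mu = x.mean(dim=(2, 3), keepdim=True)                  # per-sample mean, (N, C, 1, 1)
    sig = (x.var(dim=(2, 3), keepdim=True) + eps).sqrt()   # per-sample std,  (N, C, 1, 1)
    # Uncertainty of the statistics themselves, estimated across the batch.
    std_of_mu = mu.var(dim=0, keepdim=True).sqrt()         # (1, C, 1, 1)
    std_of_sig = sig.var(dim=0, keepdim=True).sqrt()       # (1, C, 1, 1)
    # Draw perturbed statistics and re-stylize the features with them.
    new_mu = mu + torch.randn_like(mu) * std_of_mu
    new_sig = sig + torch.randn_like(sig) * std_of_sig
    return new_sig * (x - mu) / sig + new_mu
```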
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)
- PushNet: Efficient and Adaptive Neural Message Passing [1.9121961872220468]
Message passing neural networks have recently evolved into a state-of-the-art approach to representation learning on graphs.
Existing methods perform synchronous message passing along all edges in multiple subsequent rounds.
We consider a novel asynchronous message passing approach where information is pushed only along the most relevant edges until convergence.
arXiv Detail & Related papers (2020-03-04T18:15:30Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to the industry because of their extremely cheap computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.