Deep Model Compression Also Helps Models Capture Ambiguity
- URL: http://arxiv.org/abs/2306.07061v1
- Date: Mon, 12 Jun 2023 12:24:47 GMT
- Title: Deep Model Compression Also Helps Models Capture Ambiguity
- Authors: Hancheol Park, Jong C. Park
- Abstract summary: Natural language understanding (NLU) tasks face a non-trivial amount of ambiguous samples.
NLU models should account for such ambiguity, but they approximate the human opinion distributions quite poorly.
We propose a novel method based on deep model compression and show how such relationships can be accounted for.
- Score: 0.34265828682659694
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural language understanding (NLU) tasks face a non-trivial amount of
ambiguous samples where veracity of their labels is debatable among annotators.
NLU models should thus account for such ambiguity, but they approximate the
human opinion distributions quite poorly and tend to produce over-confident
predictions. To address this problem, we must consider how to exactly capture
the degree of relationship between each sample and its candidate classes. In
this work, we propose a novel method with deep model compression and show how
such relationships can be accounted for. We see that more reasonably represented
relationships can be discovered in the lower layers and that validation
accuracies are converging at these layers, which naturally leads to layer
pruning. We also see that distilling the relationship knowledge from a lower
layer helps models produce better distributions. Experimental results
demonstrate that our method makes substantial improvement on quantifying
ambiguity without gold distribution labels. As positive side-effects, our
method is found to reduce the model size significantly and improve latency,
both attractive aspects of NLU products.
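The abstract gives no equations, but the core idea (treating a lower layer's class distribution as "relationship knowledge" and distilling it into the pruned model) can be sketched with a temperature-scaled softmax and a KL-divergence loss. This is a minimal illustration, not the authors' exact formulation; the function names and the choice of KL(teacher || student) are assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: turns a list of class logits into
    probabilities; a higher temperature yields a softer distribution."""
    z = [x / temperature for x in logits]
    m = max(z)                                  # subtract max for stability
    exps = [math.exp(x - m) for x in z]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_kl(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student), where the 'teacher' distribution comes from
    a lower layer's logits and the 'student' is the pruned model's output.
    Minimizing this pushes the student toward the softer, more ambiguity-aware
    distribution found at the lower layer."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))
```

For an ambiguous sample, the lower-layer teacher distribution is flatter than a one-hot label, so the distilled student learns to spread probability mass across the debatable classes rather than producing an over-confident prediction.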
Related papers
- How Alignment Shrinks the Generative Horizon [20.243063721305116]
Branching Factor (BF) is a token-invariant measure of the effective number of plausible next steps during generation.
Alignment tuning substantially sharpens the model's output distribution from the outset.
Building on this insight, we find this stability has surprising implications for complex reasoning.
arXiv Detail & Related papers (2025-06-22T02:00:37Z)
- Leveraging Text-to-Image Generation for Handling Spurious Correlation [24.940576844328408]
Deep neural networks trained with Empirical Risk Minimization (ERM) perform well when both training and test data come from the same domain.
ERM models may rely on spurious correlations that often exist between labels and irrelevant features of images, making predictions unreliable when those features do not exist.
We propose a technique to generate training samples with text-to-image (T2I) diffusion models for addressing the spurious correlation problem.
arXiv Detail & Related papers (2025-03-21T15:28:22Z) - Identifiable Latent Neural Causal Models [82.14087963690561]
Causal representation learning seeks to uncover latent, high-level causal representations from low-level observed data.
We determine the types of distribution shifts that do contribute to the identifiability of causal representations.
We translate our findings into a practical algorithm, allowing for the acquisition of reliable latent causal representations.
arXiv Detail & Related papers (2024-03-23T04:13:55Z) - Can we Agree? On the Rash\=omon Effect and the Reliability of Post-Hoc
Explainable AI [0.0]
The Rashōmon effect poses challenges for deriving reliable knowledge from machine learning models.
This study examined the influence of sample size on explanations from models in a Rashōmon set using SHAP.
arXiv Detail & Related papers (2023-08-14T16:32:24Z) - Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures [93.17009514112702]
Pruning, setting a significant subset of the parameters of a neural network to zero, is one of the most popular methods of model compression.
Despite existing evidence for this phenomenon, the relationship between neural network pruning and induced bias is not well-understood.
arXiv Detail & Related papers (2023-04-25T07:42:06Z) - ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model-class namely "Denoising Diffusion Probabilistic Models" or DDPMs for chirographic data.
Our model named "ChiroDiff", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rate.
arXiv Detail & Related papers (2023-04-07T15:17:48Z) - Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious
Correlations from a Feature Perspective [47.10907370311025]
Natural language understanding (NLU) models tend to rely on spurious correlations (i.e., dataset bias) to achieve high performance on in-distribution datasets but poor performance on out-of-distribution ones.
Most existing debiasing methods identify and down-weight samples with biased features.
Down-weighting these samples, however, prevents the model from learning from their non-biased parts.
We propose to eliminate spurious correlations in a fine-grained manner from a feature space perspective.
arXiv Detail & Related papers (2022-02-16T13:23:14Z) - Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated
Label Mixing [104.630875328668]
The Mixup scheme suggests mixing a pair of samples to create an augmented training sample.
We present a novel, yet simple Mixup-variant that captures the best of both worlds.
arXiv Detail & Related papers (2021-12-16T11:27:48Z)
- Exploiting Transductive Property of Graph Convolutional Neural Networks with Less Labeling Effort [0.0]
The GCN model has made significant experimental contributions by applying convolution filters to graph data.
Due to its transductive property, all of the data samples, which are only partially labeled, are given as input to the model.
arXiv Detail & Related papers (2021-05-01T05:33:31Z)
- Contextual Dropout: An Efficient Sample-Dependent Dropout Module [60.63525456640462]
Dropout has been demonstrated as a simple and effective module to regularize the training process of deep neural networks.
We propose contextual dropout with an efficient structural design as a simple and scalable sample-dependent dropout module.
Our experimental results show that the proposed method outperforms baseline methods in terms of both accuracy and quality of uncertainty estimation.
arXiv Detail & Related papers (2021-03-06T19:30:32Z)
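The contextual dropout abstract describes a sample-dependent dropout module: instead of a single fixed dropout rate, the keep-probability is computed from the input itself. A toy per-feature sketch follows; the actual paper parameterizes this with a small learned network trained variationally, so the scalar `w`/`b` parameterization and the function name here are illustrative assumptions only.

```python
import math
import random

def contextual_dropout(features, w, b, seed=None):
    """Sample-dependent (inverted) dropout: each feature's keep-probability
    is sigmoid(w * x + b), i.e. it depends on the feature's own value rather
    than on a single global dropout rate."""
    rng = random.Random(seed)
    out = []
    for x in features:
        keep_p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # input-dependent keep prob
        if rng.random() < keep_p:
            out.append(x / keep_p)  # inverted-dropout rescaling keeps E[out] = x
        else:
            out.append(0.0)
    return out
```

With `w = 0` this degenerates to ordinary dropout with a fixed rate, which makes the sample-dependent generalization easy to see: the module learns where in the input space regularization noise should be injected.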
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.