Uncertainty-guided Boundary Learning for Imbalanced Social Event
Detection
- URL: http://arxiv.org/abs/2310.19247v1
- Date: Mon, 30 Oct 2023 03:32:04 GMT
- Title: Uncertainty-guided Boundary Learning for Imbalanced Social Event
Detection
- Authors: Jiaqian Ren and Hao Peng and Lei Jiang and Zhiwei Liu and Jia Wu and
Zhengtao Yu and Philip S. Yu
- Abstract summary: We propose a novel uncertainty-guided class imbalance learning framework for imbalanced social event detection tasks.
Our model significantly improves social event representation and classification tasks in almost all classes, especially those uncertain ones.
- Score: 64.4350027428928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-world social events typically exhibit a severe class-imbalance
distribution, which poses a serious generalization challenge to trained
detection models. Most studies address this problem from the frequency
perspective, emphasizing representation or classifier learning for tail
classes. In our observation, however, the calibrated uncertainty estimated
from well-trained evidential deep learning networks reflects model performance
better than class rarity does. To this end, we propose a novel
uncertainty-guided class imbalance learning framework - UCL$_{SED}$, and its
variant - UCL-EC$_{SED}$, for imbalanced social event detection tasks. We aim
to improve the overall model performance by enhancing model generalization to
those uncertain classes. Since performance degradation usually comes from
misclassifying samples into confusable neighboring classes, we focus on
boundary learning in the latent space and on classifier learning with
high-quality uncertainty estimation. First, we design a novel
uncertainty-guided contrastive learning loss, namely UCL and its variant
UCL-EC, to shape a distinguishable representation distribution for imbalanced
data. During training, they force all classes, especially uncertain ones, to
adaptively adjust clear, separable boundaries in the feature space. Second, to
obtain more
robust and accurate class uncertainty, we combine the results of multi-view
evidential classifiers via the Dempster-Shafer theory under the supervision of
an additional calibration method. We conduct experiments on three severely
imbalanced social event datasets including Events2012_100, Events2018_100,
and CrisisLexT_7. Our model significantly improves social event representation
and classification tasks in almost all classes, especially those uncertain
ones.
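As a rough illustration of the paper's second component (fusing multi-view evidential classifiers via the Dempster-Shafer theory), the sketch below shows the standard evidential-learning recipe: per-view Dirichlet evidence is turned into class-belief masses plus an uncertainty mass, and two views are fused with Dempster's rule of combination. This is a minimal sketch under common subjective-logic conventions, not the authors' implementation; all names are hypothetical and the additional calibration supervision is omitted.

```python
import torch

def evidence_to_opinion(evidence):
    """Map non-negative class evidence e (shape [K]) to a subjective-logic
    opinion: Dirichlet strength S = sum(e + 1), beliefs b_k = e_k / S and
    uncertainty mass u = K / S, so that b.sum() + u == 1."""
    K = evidence.shape[0]
    S = (evidence + 1.0).sum()
    return evidence / S, K / S

def dempster_combine(b1, u1, b2, u2):
    """Fuse two opinions with Dempster's rule of combination: belief mass
    on conflicting class pairs is discarded and the remainder renormalised,
    so views that agree on a class shrink the fused uncertainty."""
    conflict = torch.outer(b1, b2).sum() - (b1 * b2).sum()
    norm = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / norm
    u = (u1 * u2) / norm
    return b, u

# Two hypothetical views of the same sample, both favouring class 0:
b, u = dempster_combine(*evidence_to_opinion(torch.tensor([9.0, 1.0, 0.5])),
                        *evidence_to_opinion(torch.tensor([6.0, 2.0, 0.5])))
print(b, u)  # fused beliefs; u falls below either single-view uncertainty
```

The abstract does not spell out the form of the UCL loss itself, so the next block is only a generic stand-in: a supervised contrastive loss whose per-anchor weight grows with the estimated uncertainty of the anchor's class, pushing uncertain classes toward tighter, better-separated clusters. The weighting scheme and all names are assumptions.

```python
import torch

def uncertainty_weighted_supcon(z, labels, class_u, tau=0.1):
    """Supervised contrastive loss with per-anchor weight 1 + u_c, where
    u_c in [0, 1] is the estimated uncertainty of the anchor's class c.
    z: L2-normalised embeddings [N, D]; labels: [N] (long); class_u: [C]."""
    sim = z @ z.T / tau
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()     # stability
    eye = torch.eye(len(z), dtype=torch.bool)
    pos = ((labels[:, None] == labels[None, :]) & ~eye).float()  # positives
    denom = sim.exp().masked_fill(eye, 0.0).sum(1, keepdim=True)
    log_prob = sim - denom.log()
    loss_i = -(log_prob * pos).sum(1) / pos.sum(1).clamp_min(1.0)
    w = 1.0 + class_u[labels]   # uncertain classes weigh more
    return (w * loss_i).sum() / w.sum()
```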
Related papers
- Covariance-corrected Whitening Alleviates Network Degeneration on Imbalanced Classification [6.197116272789107]
Class imbalance is a critical issue in image classification that significantly affects the performance of deep recognition models.
We propose a novel framework called Whitening-Net to mitigate the degenerate solutions.
In scenarios with extreme class imbalance, the batch covariance statistic exhibits significant fluctuations, impeding the convergence of the whitening operation.
arXiv Detail & Related papers (2024-08-30T10:49:33Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting each training sample's label smoothing value according to its estimated uncertainty (a sketch follows this entry).
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
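As a hedged sketch of the UAL recipe just described (adaptive, per-sample label smoothing), the block below scales each sample's smoothing value with its estimated uncertainty; the linear schedule, `max_eps`, and all names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def uncertainty_smoothed_ce(logits, targets, uncertainty, max_eps=0.2):
    """Cross-entropy with adaptive label smoothing: sample i trains
    against targets softened by epsilon_i = max_eps * u_i, where
    u_i in [0, 1] is its estimated uncertainty."""
    num_classes = logits.size(-1)
    eps = max_eps * uncertainty.clamp(0.0, 1.0)                  # [N]
    one_hot = F.one_hot(targets, num_classes).float()
    soft = (1.0 - eps[:, None]) * one_hot + eps[:, None] / num_classes
    return -(soft * F.log_softmax(logits, dim=-1)).sum(-1).mean()

# e.g. loss = uncertainty_smoothed_ce(model(x), y, estimate_uncertainty(x))
```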
- Gradient Reweighting: Towards Imbalanced Class-Incremental Learning [8.438092346233054]
Class-Incremental Learning (CIL) trains a model to continually recognize new classes from non-stationary data.
A major challenge of CIL arises when applying to real-world data characterized by non-uniform distribution.
We show that this dual imbalance issue causes skewed gradient updates with biased weights in FC layers, thus inducing over/under-fitting and catastrophic forgetting in CIL.
arXiv Detail & Related papers (2024-02-28T18:08:03Z)
- Class Uncertainty: A Measure to Mitigate Class Imbalance [0.0]
We show that considering solely the cardinality of classes does not cover all issues causing class imbalance.
We propose "Class Uncertainty" as the average predictive uncertainty of the training examples.
We also curate SVCI-20 as a novel dataset in which the classes have equal number of training examples but they differ in terms of their hardness.
arXiv Detail & Related papers (2023-11-23T16:36:03Z)
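A minimal sketch of the "Class Uncertainty" measure above, assuming softmax entropy as the per-example uncertainty estimator (the paper may use a different one); all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def class_uncertainty(logits, labels, num_classes):
    """Per-class hardness score: the mean predictive entropy over that
    class's training examples. Unlike raw class counts, this also flags
    frequent classes that the model nevertheless finds hard."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1)    # [N]
    scores = torch.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            scores[c] = entropy[mask].mean()
    return scores  # higher = the model is less certain about this class
```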
- Targeted Supervised Contrastive Learning for Long-Tailed Recognition [50.24044608432207]
Real-world data often exhibits long tail distributions with heavy class imbalance.
We show that while supervised contrastive learning can help improve performance, past baselines suffer from poor uniformity brought in by imbalanced data distribution.
We propose targeted supervised contrastive learning (TSC), which improves the uniformity of the feature distribution on the hypersphere.
arXiv Detail & Related papers (2021-11-27T22:40:10Z)
- Analyzing Overfitting under Class Imbalance in Neural Networks for Image Segmentation [19.259574003403998]
In image segmentation, neural networks may overfit to foreground samples from small structures.
In this study, we provide new insights on the problem of overfitting under class imbalance by inspecting the network behavior.
arXiv Detail & Related papers (2021-02-20T14:57:58Z)
- Entropy-Based Uncertainty Calibration for Generalized Zero-Shot Learning [49.04790688256481]
The goal of generalized zero-shot learning (GZSL) is to recognise both seen and unseen classes.
Most GZSL methods typically learn to synthesise visual representations from semantic information on the unseen classes.
We propose a novel framework that leverages dual variational autoencoders with a triplet loss to learn discriminative latent features.
arXiv Detail & Related papers (2021-01-09T05:21:27Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strength of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to state-of-the-art and an extended ensemble establishes a new state-of-the-art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)