Early Classifying Multimodal Sequences
- URL: http://arxiv.org/abs/2305.01151v1
- Date: Tue, 2 May 2023 01:57:34 GMT
- Title: Early Classifying Multimodal Sequences
- Authors: Alexander Cao, Jean Utke and Diego Klabjan
- Abstract summary: Trading wait time for decision certainty leads to early classification problems.
We show our new method yields experimental AUC advantages of up to 8.7%.
- Score: 86.80932013694684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Often pieces of information are received sequentially over time. When has
one collected enough such pieces to classify? Trading wait time for decision
certainty leads to early classification problems that have recently gained
attention as a means of adapting classification to more dynamic environments.
However, so far results have been limited to unimodal sequences. In this pilot
study, we expand into early classifying multimodal sequences by combining
existing methods. We show our new method yields experimental AUC advantages of
up to 8.7%.
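The abstract's core trade-off, waiting for more sequence elements versus committing to a decision, can be illustrated with a minimal confidence-threshold stopping rule. This is a generic sketch of the early-classification idea, not the paper's actual policy; the function name and threshold value are hypothetical.

```python
import numpy as np

def early_classify(step_probs, threshold=0.9):
    """Stop at the first timestep whose maximum class probability
    exceeds `threshold`; otherwise fall back to the final timestep.

    step_probs: (T, C) array of per-timestep class probabilities.
    Returns (predicted_class, stopping_timestep).
    """
    for t, probs in enumerate(step_probs):
        if probs.max() >= threshold:
            return int(probs.argmax()), t
    return int(step_probs[-1].argmax()), len(step_probs) - 1

# Certainty typically grows as more pieces of the sequence arrive:
# here the rule waits two steps, then commits at t=2.
probs = np.array([[0.5, 0.5],
                  [0.7, 0.3],
                  [0.95, 0.05]])
label, stop_t = early_classify(probs, threshold=0.9)
```

Raising the threshold trades longer wait times for higher decision certainty, which is exactly the dial that early-classification methods learn to set adaptively.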
Related papers
- HIERVAR: A Hierarchical Feature Selection Method for Time Series Analysis [22.285570102169356]
Time series classification stands as a pivotal and intricate challenge across various domains.
We propose a novel hierarchical feature selection method aided by ANOVA variance analysis.
Our method substantially reduces features by over 94% while preserving accuracy.
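The ANOVA-based selection step described above can be sketched as ranking features by their one-way ANOVA F-statistic across class groups and keeping a small fraction. This is a minimal illustration of ANOVA feature ranking, not HIERVAR's full hierarchical method; the function names and `keep_ratio` value are hypothetical.

```python
import numpy as np

def anova_f(X, y):
    """Per-feature one-way ANOVA F-statistic across class groups."""
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        g = X[y == c]
        ss_between += len(g) * (g.mean(axis=0) - grand) ** 2
        ss_within += ((g - g.mean(axis=0)) ** 2).sum(axis=0)
    df_b, df_w = len(classes) - 1, len(X) - len(classes)
    return (ss_between / df_b) / (ss_within / df_w)

def anova_select(X, y, keep_ratio=0.06):
    """Keep only the top `keep_ratio` fraction of features by F-statistic."""
    f = anova_f(X, y)
    n_keep = max(1, int(keep_ratio * X.shape[1]))
    return np.argsort(f)[::-1][:n_keep]

# Feature 0 is made strongly class-dependent; it should rank first.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = rng.integers(0, 2, size=100)
X[y == 1, 0] += 3.0
kept = anova_select(X, y)
```

A high F-statistic means between-class variance dominates within-class variance for that feature, which is the signal ANOVA-guided selection exploits to discard uninformative features.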
arXiv Detail & Related papers (2024-07-22T20:55:13Z)
- FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning [21.088762527081883]
Exemplar-free class-incremental learning (CIL) poses several challenges since it prohibits the rehearsal of data from previous tasks.
Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention.
We explore prototypical networks for CIL, which generate new class prototypes using the frozen feature extractor and classify the features based on the Euclidean distance to the prototypes.
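The prototypical classification scheme described above, class prototypes from a frozen feature extractor plus nearest-Euclidean-distance assignment, can be sketched in a few lines. This is a generic nearest-prototype illustration, not FeCAM's heterogeneity-aware variant; the function names and toy features are hypothetical.

```python
import numpy as np

def build_prototypes(features, labels):
    """One prototype per class: the mean of that class's frozen features.
    New classes can be added this way without rehearsing old data."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(feature, prototypes):
    """Assign the class whose prototype is nearest in Euclidean distance."""
    classes = list(prototypes)
    dists = [np.linalg.norm(feature - prototypes[c]) for c in classes]
    return classes[int(np.argmin(dists))]

# Toy frozen features for two classes.
feats = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
labels = np.array([0, 0, 1, 1])
protos = build_prototypes(feats, labels)
pred = classify(np.array([4.5, 5.2]), protos)
```

Because only class means are stored, this satisfies the exemplar-free constraint: no samples from previous tasks are retained.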
arXiv Detail & Related papers (2023-09-25T11:54:33Z)
- Teamwork Is Not Always Good: An Empirical Study of Classifier Drift in Class-incremental Information Extraction [12.4259256312658]
Class-incremental learning aims to develop a learning system that can continually learn new classes from a data stream without forgetting previously learned classes.
In this paper, we take a closer look at how drift in the classifier leads to forgetting and, accordingly, propose four simple yet (super-) effective solutions to alleviate the drift.
Our solutions consistently show significant improvement over the previous state-of-the-art approaches with up to 44.7% absolute F-score gain.
arXiv Detail & Related papers (2023-05-26T00:57:43Z)
- Few-shot Classification via Ensemble Learning with Multi-Order Statistics [9.145742362513932]
We show that leveraging ensemble learning on the base classes can correspondingly reduce the true error in the novel classes.
We propose a novel method named Ensemble Learning with Multi-Order Statistics (ELMOS).
We show that our method can produce a state-of-the-art performance on multiple few-shot classification benchmark datasets.
arXiv Detail & Related papers (2023-04-30T11:41:01Z)
- A Policy for Early Sequence Classification [86.80932013694684]
We introduce a novel method to classify a sequence as soon as possible without waiting for the last element.
Our method achieves an average AUC increase of 11.8% over multiple experiments.
arXiv Detail & Related papers (2023-04-07T03:38:54Z)
- Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios.
arXiv Detail & Related papers (2023-03-28T13:47:16Z)
- An Investigation of Replay-based Approaches for Continual Learning [79.0660895390689]
Continual learning (CL) is a major challenge of machine learning (ML) and describes the ability to learn several tasks sequentially without catastrophic forgetting (CF).
Several solution classes have been proposed, of which so-called replay-based approaches seem very promising due to their simplicity and robustness.
We empirically investigate replay-based approaches of continual learning and assess their potential for applications.
arXiv Detail & Related papers (2021-08-15T15:05:02Z)
- MCDAL: Maximum Classifier Discrepancy for Active Learning [74.73133545019877]
Recent state-of-the-art active learning methods have mostly leveraged Generative Adversarial Networks (GAN) for sample acquisition.
We propose in this paper a novel active learning framework that we call Maximum Classifier Discrepancy for Active Learning (MCDAL).
In particular, we utilize two auxiliary classification layers that learn tighter decision boundaries by maximizing the discrepancies among them.
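The acquisition idea described above, querying unlabeled samples on which two auxiliary classification heads disagree most, can be sketched with linear heads and an L1 discrepancy. This is a rough illustration of discrepancy-based sample selection, not MCDAL's training procedure; the head weights and function names are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def discrepancy_scores(feats, heads):
    """L1 distance between the two heads' predicted distributions;
    larger means the heads disagree more on that unlabeled sample."""
    (W1, b1), (W2, b2) = heads
    p1 = softmax(feats @ W1 + b1)
    p2 = softmax(feats @ W2 + b2)
    return np.abs(p1 - p2).sum(axis=1)

def select_for_labeling(feats, heads, budget=2):
    """Query the unlabeled samples with maximum classifier discrepancy."""
    return np.argsort(discrepancy_scores(feats, heads))[::-1][:budget]

# Two toy heads that carve different decision boundaries (hypothetical weights).
heads = ((np.eye(2), np.zeros(2)),
         (np.eye(2)[::-1], np.zeros(2)))
feats = np.array([[3.0, 0.0], [0.1, 0.1], [0.0, 3.0]])
picked = select_for_labeling(feats, heads, budget=2)
```

Samples near the region where the heads' decision boundaries diverge score highest, so labeling them tightens both boundaries at once.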
arXiv Detail & Related papers (2021-07-23T06:57:08Z) - Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest to pool feature maps by applying attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification.
arXiv Detail & Related papers (2021-03-30T00:48:28Z) - Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.