FLUID: A Unified Evaluation Framework for Flexible Sequential Data
- URL: http://arxiv.org/abs/2007.02519v6
- Date: Mon, 10 Apr 2023 23:13:56 GMT
- Title: FLUID: A Unified Evaluation Framework for Flexible Sequential Data
- Authors: Matthew Wallingford, Aditya Kusupati, Keivan Alizadeh-Vahid, Aaron
Walsman, Aniruddha Kembhavi, Ali Farhadi
- Abstract summary: We introduce a new unified evaluation framework, FLUID (Flexible Sequential Data).
FLUID integrates the objectives of few-shot, continual, transfer, and representation learning.
We conduct experiments on a broad set of methods that shed new light on the advantages and limitations of current solutions.
- Score: 42.44973069520298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern ML methods excel when training data is IID, large-scale, and well
labeled. Learning in less ideal conditions remains an open challenge. The
sub-fields of few-shot, continual, transfer, and representation learning have
made substantial strides in learning under adverse conditions; each affording
distinct advantages through methods and insights. These methods address
different challenges, such as data arriving sequentially or scarce training
examples; however, the difficult conditions an ML system will face over its
lifetime often cannot be anticipated prior to deployment. Therefore, general ML
systems which can handle the many challenges of learning in practical settings
are needed. To foster research towards the goal of general ML methods, we
introduce a new unified evaluation framework - FLUID (Flexible Sequential
Data). FLUID integrates the objectives of few-shot, continual, transfer, and
representation learning while enabling comparison and integration of techniques
across these subfields. In FLUID, a learner faces a stream of data and must
make sequential predictions while choosing how to update itself, adapt quickly
to novel classes, and deal with changing data distributions, all while accounting
for the total amount of compute. We conduct experiments on a broad set of
methods that shed new light on the advantages and limitations of current
solutions and indicate new research problems to solve. As a starting point
towards more general methods, we present two new baselines which outperform
other evaluated methods on FLUID. Project page:
https://raivn.cs.washington.edu/projects/FLUID/.
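The evaluation protocol described above can be pictured as a single streaming loop in which the learner predicts before each label is revealed and decides for itself when to spend compute on updates. Below is a minimal sketch of such a loop; the Learner interface, method names, and the compute accounting are illustrative assumptions, not the authors' actual API (see the project page for the real benchmark code).

```python
# Illustrative sketch of a FLUID-style sequential evaluation loop; the Learner
# interface and the stream format are assumptions, not the authors' API.
from typing import Any, Iterable, Tuple


class Learner:
    def predict(self, x: Any) -> int:
        """Predict a label for one incoming example (possibly from a novel class)."""
        raise NotImplementedError

    def should_update(self) -> bool:
        """Decide whether to spend compute on an update at this step."""
        raise NotImplementedError

    def update(self, x: Any, y: int) -> float:
        """Update on the revealed label; return an estimate of compute spent (e.g., FLOPs)."""
        raise NotImplementedError


def run_stream(learner: Learner, stream: Iterable[Tuple[Any, int]]) -> dict:
    correct, total, compute = 0, 0, 0.0
    for x, y in stream:              # data arrives sequentially and may be non-IID
        pred = learner.predict(x)    # predict before the label is revealed
        correct += int(pred == y)
        total += 1
        if learner.should_update():  # the learner chooses when to update itself
            compute += learner.update(x, y)
    return {"accuracy": correct / max(total, 1), "compute": compute}
```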
Related papers
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion: the label smoothing value used during training is set adaptively according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
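The adaptive label smoothing described in this summary can be sketched as a per-sample loss. The use of normalized predictive entropy as the uncertainty signal and the linear mapping to a smoothing coefficient are assumptions for illustration, not the paper's exact formulation.

```python
# Sketch: per-sample label smoothing driven by an uncertainty estimate.
# Normalized predictive entropy and the linear mapping to a smoothing
# coefficient are illustrative assumptions, not the paper's formulation.
import torch
import torch.nn.functional as F


def uncertainty_aware_loss(logits: torch.Tensor, targets: torch.Tensor,
                           max_smooth: float = 0.2) -> torch.Tensor:
    num_classes = logits.size(-1)
    probs = F.softmax(logits, dim=-1)
    # Per-sample uncertainty proxy: predictive entropy normalized to [0, 1].
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    uncertainty = entropy / torch.log(torch.tensor(float(num_classes)))
    smooth = max_smooth * uncertainty  # more uncertain -> stronger smoothing
    one_hot = F.one_hot(targets, num_classes).float()
    soft_targets = (1 - smooth).unsqueeze(-1) * one_hot + (smooth / num_classes).unsqueeze(-1)
    return -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```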
- Conditional Prototype Rectification Prompt Learning [32.533844163120875]
We propose a Conditional Prototype Rectification Prompt Learning (CPR) method to correct the bias of base examples and to augment limited data in an effective way.
CPR achieves state-of-the-art performance on both few-shot classification and base-to-new generalization tasks.
arXiv Detail & Related papers (2024-04-15T15:43:52Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes to replay data from previously experienced tasks when learning new ones.
However, storing such data is often impractical due to memory constraints or data privacy concerns.
As an alternative, data-free replay methods synthesize replay samples by inverting the classification model.
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
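A condensed sketch of the model-inversion idea behind data-free replay is given below: synthetic samples for a past class are obtained by optimizing random inputs so that the frozen classifier assigns them to that class. The plain cross-entropy objective and the hyperparameters are assumptions; published methods typically add further regularizers (e.g., matching batch-norm statistics).

```python
# Sketch: synthesize replay samples for a past class by inverting a trained
# classifier. The objective and hyperparameters are illustrative assumptions;
# real data-free replay methods add regularizers (e.g., BN-statistics matching).
import torch
import torch.nn.functional as F


def invert_class_samples(model: torch.nn.Module, target_class: int,
                         num_samples: int = 16, image_shape=(3, 32, 32),
                         steps: int = 200, lr: float = 0.1) -> torch.Tensor:
    model.eval()
    x = torch.randn(num_samples, *image_shape, requires_grad=True)
    labels = torch.full((num_samples,), target_class, dtype=torch.long)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x), labels)  # push inputs toward the target class
        loss.backward()
        optimizer.step()
    return x.detach()                             # synthetic samples usable for replay
```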
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data while approximating the generalization ability of big data is one of the ultimate goals of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Lifelong Intent Detection via Multi-Strategy Rebalancing [18.424132535727217]
In this paper, we propose Lifelong Intent Detection (LID), which continually trains an intent detection (ID) model on new data to learn newly emerging intents.
Existing lifelong learning methods usually suffer from a serious imbalance between old and new data in the LID task.
We propose a novel lifelong learning method, Multi-Strategy Rebalancing (MSR), which consists of cosine normalization, hierarchical knowledge distillation, and inter-class margin loss.
arXiv Detail & Related papers (2021-08-10T04:35:13Z)
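Of the three components listed in this summary, cosine normalization is the most self-contained: the classifier's dot product is replaced by a scaled cosine similarity so that old-class and new-class weight magnitudes cannot dominate the logits. A small sketch follows; the fixed scale value and module layout are assumptions.

```python
# Sketch of a cosine-normalized classifier head, one ingredient of rebalancing
# methods such as MSR. The fixed scale of 16.0 is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CosineClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.scale = scale

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized features and class weights keeps
        # logits bounded, so old and new classes stay on a comparable scale.
        f = F.normalize(features, dim=-1)
        w = F.normalize(self.weight, dim=-1)
        return self.scale * f @ w.t()
```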
- FedSemi: An Adaptive Federated Semi-Supervised Learning Framework [23.90642104477983]
Federated learning (FL) has emerged as an effective technique for collaboratively training machine learning models without sharing raw data or leaking privacy.
Most existing FL methods focus on the supervised setting and ignore the utilization of unlabeled data.
We propose FedSemi, a novel, adaptive, and general framework that first introduces consistency regularization into FL via a teacher-student model.
arXiv Detail & Related papers (2020-12-06T15:46:04Z)
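A bare-bones sketch of teacher-student consistency regularization on unlabeled data is shown below. The EMA teacher update, the MSE consistency loss, and the loss weighting are illustrative assumptions, and the federated client/server orchestration that FedSemi adds on top is omitted.

```python
# Sketch: teacher-student consistency regularization on unlabeled data. The EMA
# update, MSE consistency loss, and weighting are illustrative assumptions; the
# federated client/server orchestration used by FedSemi is omitted.
import torch
import torch.nn.functional as F


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, decay: float = 0.99):
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1 - decay)


def semi_supervised_step(student, teacher, labeled_x, labeled_y, unlabeled_x,
                         optimizer, consistency_weight: float = 1.0) -> float:
    sup_loss = F.cross_entropy(student(labeled_x), labeled_y)
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(unlabeled_x), dim=-1)  # pseudo-targets
    student_probs = F.softmax(student(unlabeled_x), dim=-1)
    cons_loss = F.mse_loss(student_probs, teacher_probs)         # consistency term
    loss = sup_loss + consistency_weight * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)                                 # teacher tracks the student
    return float(loss.item())
```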
- Bayesian Meta-Prior Learning Using Empirical Bayes [3.666114237131823]
We propose a hierarchical Empirical Bayes approach that addresses the absence of informative priors and the inability to control parameter learning rates.
Our method learns empirical meta-priors from the data itself and uses them to decouple the learning rates of first-order and second-order features.
Our findings are promising, as optimizing over sparse data is often a challenge.
arXiv Detail & Related papers (2020-02-04T05:08:17Z)
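The decoupled learning rates mentioned in this summary can be pictured with optimizer parameter groups: first-order (main-effect) and second-order (interaction) parameters are updated at different rates. In the sketch below, the toy model and the concrete rate values are assumptions standing in for quantities the paper derives from its learned meta-prior.

```python
# Sketch: decoupled learning rates for first-order and second-order feature
# parameters via optimizer parameter groups. The toy model and the concrete
# rate values are assumptions; the paper derives rates from an Empirical Bayes meta-prior.
import torch
import torch.nn as nn


class TwoPartModel(nn.Module):
    def __init__(self, num_features: int):
        super().__init__()
        self.first_order = nn.Linear(num_features, 1)                   # main effects
        self.second_order = nn.Linear(num_features * num_features, 1)   # pairwise interactions

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pairwise = (x.unsqueeze(2) * x.unsqueeze(1)).flatten(1)         # outer-product features
        return self.first_order(x) + self.second_order(pairwise)


model = TwoPartModel(num_features=20)
optimizer = torch.optim.SGD([
    {"params": model.first_order.parameters(), "lr": 1e-2},   # faster for dense main effects
    {"params": model.second_order.parameters(), "lr": 1e-3},  # slower for sparse interactions
])
```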
This list is automatically generated from the titles and abstracts of papers on this site. The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.