MetaMIML: Meta Multi-Instance Multi-Label Learning
- URL: http://arxiv.org/abs/2111.04112v1
- Date: Sun, 7 Nov 2021 15:54:52 GMT
- Title: MetaMIML: Meta Multi-Instance Multi-Label Learning
- Authors: Yuanlin Yang, Guoxian Yu, Jun Wang, Lei Liu, Carlotta Domeniconi,
Maozu Guo
- Abstract summary: We propose a network embedding and meta learning based approach to mine interdependent MIML objects of different types.
Experiments on benchmark datasets demonstrate that MetaMIML achieves a significantly better performance than state-of-the-art algorithms.
- Score: 27.32606468640938
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-Instance Multi-Label learning (MIML) models complex objects (bags),
each of which is associated with a set of interrelated labels and composed of
a set of instances. Current MIML solutions still focus on a single type of
objects and assume an IID distribution of training data. But these objects are
linked with objects of other types (e.g., pictures on Facebook are linked with
various users), which also encode the semantics of target objects. In addition,
they generally need abundant labeled data for training. To effectively mine
interdependent MIML objects of different types, we propose a network embedding
and meta learning based approach (MetaMIML). MetaMIML introduces the context
learner with network embedding to capture semantic information of objects of
different types, and the task learner to extract the meta knowledge for fast
adapting to new tasks. In this way, MetaMIML not only naturally handles MIML
objects at the data level, but also exploits the power of meta-learning for
model enhancement. Experiments on benchmark datasets demonstrate that
MetaMIML achieves a significantly better performance than state-of-the-art
algorithms.
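The abstract's two-learner design (a context learner that embeds objects together with their network neighbours, plus a task learner adapted quickly per task) can be sketched as a simple first-order meta-learning loop. This is a minimal, hypothetical illustration, not the authors' code: the function names, the linear task learner, the neighbour-averaging context learner, and the Reptile-style meta-update are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def context_embed(bag, neighbour_bags):
    """Stand-in context learner: average a bag's instance features and mix in
    the mean embedding of linked bags (the 'network' context)."""
    own = bag.mean(axis=0)
    if neighbour_bags:
        ctx = np.mean([b.mean(axis=0) for b in neighbour_bags], axis=0)
        return 0.7 * own + 0.3 * ctx
    return own

def task_loss(w, X, Y):
    """Multi-label logistic loss for a linear task learner."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return -np.mean(Y * np.log(p + 1e-9) + (1 - Y) * np.log(1 - p + 1e-9))

def inner_adapt(w, X, Y, lr=0.5, steps=5):
    """Task learner: a few gradient steps on a support set (fast adaptation)."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * (X.T @ (p - Y) / len(X))
    return w

# Toy meta-training: bags of 3 instances, 4 features, 2 labels per bag.
d, n_labels = 4, 2
w_meta = np.zeros((d, n_labels))
for _ in range(50):  # meta-iterations over sampled tasks
    bags = [rng.normal(size=(3, d)) for _ in range(8)]
    X = np.stack([context_embed(b, bags[:2]) for b in bags])
    Y = (X[:, :n_labels] > 0).astype(float)        # synthetic label rule
    w_task = inner_adapt(w_meta.copy(), X[:4], Y[:4])  # adapt on support split
    # First-order (Reptile-style) meta-update toward the adapted weights.
    w_meta += 0.1 * (w_task - w_meta)
```

After meta-training, `w_meta` serves as an initialization from which a few inner steps suffice on a new task; the real MetaMIML replaces these linear stand-ins with learned network embeddings and a neural task learner.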
Related papers
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms.
arXiv Detail & Related papers (2024-10-08T12:22:10Z)
- Matching Anything by Segmenting Anything [109.2507425045143]
We propose MASA, a novel method for robust instance association learning.
MASA learns instance-level correspondence through exhaustive data transformations.
We show that MASA achieves even better performance than state-of-the-art methods trained with fully annotated in-domain video sequences.
arXiv Detail & Related papers (2024-06-06T16:20:07Z)
- MetaIE: Distilling a Meta Model from LLM for All Kinds of Information Extraction Tasks [40.84745946091173]
We propose a novel framework MetaIE to build a small LM as meta-model by learning to extract "important information"
Specifically, MetaIE obtains the small LM via a symbolic distillation from an LLM following the label-to-span scheme.
We construct the distillation dataset via sampling sentences from language model pre-training datasets.
We evaluate the meta-model under the few-shot adaptation setting.
arXiv Detail & Related papers (2024-03-30T19:43:45Z)
- Many or Few Samples? Comparing Transfer, Contrastive and Meta-Learning in Encrypted Traffic Classification [68.19713459228369]
We compare transfer learning, meta-learning and contrastive learning against reference Machine Learning (ML) tree-based and monolithic DL models.
We show that (i) using large datasets we can obtain more general representations, (ii) contrastive learning is the best methodology.
While tree-based ML models cannot handle large tasks but fit small tasks well, DL methods, by reusing learned representations, approach tree-based performance even on small tasks.
arXiv Detail & Related papers (2023-05-21T11:20:49Z)
- Graph based Label Enhancement for Multi-instance Multi-label learning [20.178466198202376]
Multi-instance multi-label (MIML) learning is widely applied in numerous domains.
This paper proposes a novel MIML framework based on graph label enhancement, namely GLEMIML, to improve the classification performance of MIML.
arXiv Detail & Related papers (2023-04-21T02:24:49Z)
- Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning [56.17603785248675]
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the "episode" idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
arXiv Detail & Related papers (2021-06-09T08:47:58Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
By meta-learning with task interpolation (MLTI), our approach effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
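The task-interpolation idea summarized above can be sketched as a mixup-style convex combination of two sampled tasks' features and labels. The sketch below is an illustrative assumption of that mechanism, not the paper's implementation; the function name and the Beta-distributed mixing coefficient are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)

def interpolate_tasks(task_a, task_b, alpha=0.5):
    """Build a virtual task by convexly mixing two tasks' features and labels
    (mixup across tasks). Both tasks must share the same sample/feature shape."""
    Xa, ya = task_a
    Xb, yb = task_b
    lam = rng.beta(alpha, alpha)           # mixing coefficient in (0, 1)
    X_mix = lam * Xa + (1 - lam) * Xb      # interpolated features
    y_mix = lam * ya + (1 - lam) * yb      # interpolated (soft) labels
    return X_mix, y_mix

# Two toy tasks with matched shapes: 8 samples, 3 features, binary labels.
task_a = (rng.normal(size=(8, 3)), rng.integers(0, 2, size=(8, 1)).astype(float))
task_b = (rng.normal(size=(8, 3)), rng.integers(0, 2, size=(8, 1)).astype(float))
X_mix, y_mix = interpolate_tasks(task_a, task_b)
```

Each virtual task produced this way is then treated as one more meta-training task, densifying the task distribution when only a few real tasks are available.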
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
- A Nested Bi-level Optimization Framework for Robust Few Shot Learning [10.147225934340877]
NestedMAML learns to assign weights to training tasks or instances.
Experiments on synthetic and real-world datasets demonstrate that NestedMAML efficiently mitigates the effects of "unwanted" tasks or instances.
arXiv Detail & Related papers (2020-11-13T06:41:22Z)
- MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization [14.531741503372764]
We propose an approach called MetaMix to regularize backbone models.
It generates virtual feature-target pairs within each episode to regularize the backbone models.
It can be integrated with any of the MAML-based algorithms and learn the decision boundaries generalizing better to new tasks.
arXiv Detail & Related papers (2020-09-29T02:44:13Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
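The Meta-Baseline recipe summarized above (pre-train a whole-classification model, then classify few-shot queries by similarity to class centroids in its feature space) can be sketched as follows. The feature extractor here is a random-projection stand-in for the paper's pre-trained backbone, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(size=(16, 8))      # stand-in for "pre-trained" feature weights

def embed(x):
    """Frozen feature extractor (a ReLU random projection as a placeholder)."""
    return np.maximum(x @ W, 0.0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def predict(support, support_labels, query, n_way):
    """Nearest-centroid classification in feature space with a cosine metric."""
    feats = embed(support)
    centroids = [feats[support_labels == c].mean(axis=0) for c in range(n_way)]
    return int(np.argmax([cosine(embed(query), c) for c in centroids]))

# Toy 2-way 3-shot episode: the two classes are well separated in input space.
support = np.vstack([rng.normal(loc=+3, size=(3, 16)),
                     rng.normal(loc=-3, size=(3, 16))])
labels = np.array([0, 0, 0, 1, 1, 1])
query = rng.normal(loc=+3, size=16)     # drawn from class 0's distribution
pred = predict(support, labels, query, n_way=2)
```

The appeal of this baseline is that no episodic training is strictly required to get started: a strong whole-classification feature space plus a cosine nearest-centroid rule is already competitive.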
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.