Memory-Associated Differential Learning
- URL: http://arxiv.org/abs/2102.05246v1
- Date: Wed, 10 Feb 2021 03:48:12 GMT
- Title: Memory-Associated Differential Learning
- Authors: Yi Luo, Aiguo Chen, Bei Hui, Ke Yan
- Abstract summary: We propose a novel learning paradigm called Memory-Associated Differential (MAD) Learning.
We first introduce an additional component called Memory to memorize all the training data. Then we learn the differences between labels as well as the associations between features by combining a differential equation with sampling methods.
In the evaluating phase, we predict unknown labels by inferring from the memorized facts plus the learnt differences and associations in a geometrically meaningful manner.
- Score: 10.332918082271153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional Supervised Learning approaches focus on the mapping from input features to output labels. After training, the learnt models alone are applied to testing features to predict testing labels in isolation, leaving the training data unused and their associations ignored. To take full advantage of the vast amount of training data and the associations among them, we propose a novel learning paradigm called Memory-Associated Differential (MAD) Learning. We first introduce an additional component called Memory to memorize all the training data. Then we learn the differences between labels as well as the associations between features by combining a differential equation with sampling methods. Finally, in the evaluating phase, we predict unknown labels by inferring from the memorized facts plus the learnt differences and associations in a geometrically meaningful manner. We build this theory in the unary situation and apply it to Image Recognition, then extend it to Link Prediction as a binary situation, in which our method outperforms strong state-of-the-art baselines on three citation networks and the ogbl-ddi dataset.
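The inference rule described above can be read as a first-order expansion around memorized examples: an unknown label is estimated as a memorized label plus a learned difference, weighted by a learned association between features. Below is a minimal sketch of that reading on a toy regression task, assuming illustrative stand-ins (a fixed linear map for the difference function, a softmax over dot-product similarity for the associations, and uniform sampling of reference points); it is not the architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Memory": keep every training pair verbatim.
X_train = rng.normal(size=(200, 8))   # training features
w_true = rng.normal(size=8)
y_train = X_train @ w_true            # toy regression labels

def predict(x, X_mem, y_mem, n_refs=32, temperature=1.0):
    """Estimate y(x) as (memorized label) + (difference inferred from features),
    averaged over sampled references: y(x) ~ sum_i a_i * (y_i + d(x, x_i))."""
    idx = rng.choice(len(X_mem), size=n_refs, replace=False)
    refs_x, refs_y = X_mem[idx], y_mem[idx]
    # Difference term d(x, x_i): a fixed linear map stands in for the function
    # that MAD Learning would learn from data.
    diffs = (x - refs_x) @ w_true
    # Association weights a_i: softmax over feature similarity (illustrative choice).
    sims = refs_x @ x / temperature
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    return float(weights @ (refs_y + diffs))

x_test = rng.normal(size=8)
# Because the stand-in difference function is exact here, the two numbers coincide;
# in MAD Learning the difference and association functions are learned.
print(predict(x_test, X_train, y_train), float(x_test @ w_true))
```

In the paper, the difference and association functions are learned jointly with the Memory, and the construction extends to binary situations such as Link Prediction; the sketch only covers the unary case.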
Related papers
- Memory Consistency Guided Divide-and-Conquer Learning for Generalized Category Discovery [56.172872410834664]
Generalized category discovery (GCD) aims to address a more realistic and challenging semi-supervised learning setting.
We propose a Memory Consistency guided Divide-and-conquer Learning framework (MCDL).
Our method outperforms state-of-the-art models by a large margin on both seen and unseen classes in generic image recognition.
arXiv Detail & Related papers (2024-01-24T09:39:45Z)
- Benchmarking Hebbian learning rules for associative memory [0.0]
Associative memory is a key concept in cognitive and computational brain science.
We benchmark six different learning rules on storage capacity and prototype extraction.
arXiv Detail & Related papers (2023-12-30T21:49:47Z)
- Exploring Memorization in Fine-tuned Language Models [53.52403444655213]
We conduct the first comprehensive analysis to explore language models' memorization during fine-tuning across tasks.
Our studies with open-sourced and our own fine-tuned LMs across various tasks indicate that the degree of memorization differs strongly across fine-tuning tasks.
We provide an intuitive explanation of this task disparity via sparse coding theory and unveil a strong correlation between memorization and attention score distribution.
arXiv Detail & Related papers (2023-10-10T15:41:26Z)
- Black-box Unsupervised Domain Adaptation with Bi-directional Atkinson-Shiffrin Memory [59.51934126717572]
Black-box unsupervised domain adaptation (UDA) learns with source predictions of target data without accessing either source data or source models during training.
We propose BiMem, a bi-directional memorization mechanism that learns to remember useful and representative information to correct noisy pseudo labels on the fly.
BiMem achieves superior domain adaptation performance consistently across various visual recognition tasks such as image classification, semantic segmentation and object detection.
arXiv Detail & Related papers (2023-08-25T08:06:48Z)
- Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z)
- Counterfactual Memorization in Neural Language Models [91.8747020391287]
Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive information from their training data.
An open question in previous studies of language model memorization is how to filter out "common" memorization.
We formulate a notion of counterfactual memorization, which characterizes how a model's predictions change if a particular document is omitted during training (a toy illustration of this definition is sketched after this list).
arXiv Detail & Related papers (2021-12-24T04:20:57Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both the labeled and pseudo-labeled data to generate the final feature embeddings (a sketch of this self-training loop appears after this list).
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
- Noisy Concurrent Training for Efficient Learning under Label Noise [13.041607703862724]
Deep neural networks (DNNs) fail to learn effectively under label noise and have been shown to memorize random labels, which affects their performance.
We consider learning in isolation, using one-hot encoded labels as the sole source of supervision, and a lack of regularization to discourage memorization as the major shortcomings of the standard training procedure.
We propose Noisy Concurrent Training (NCT) which leverages collaborative learning to use the consensus between two models as an additional source of supervision.
arXiv Detail & Related papers (2020-09-17T14:22:17Z)
- Learning to Learn in a Semi-Supervised Fashion [41.38876517851431]
We present a novel meta-learning scheme to address semi-supervised learning from both labeled and unlabeled data.
Our strategy can be viewed as a self-supervised learning scheme, which can be applied to fully supervised learning tasks.
arXiv Detail & Related papers (2020-08-25T17:59:53Z)
- Understanding Unintended Memorization in Federated Learning [5.32880378510767]
We show that different components of Federated Learning play an important role in reducing unintended memorization.
We also show that training with a strong user-level differential privacy guarantee results in models that exhibit the least amount of unintended memorization.
arXiv Detail & Related papers (2020-06-12T22:10:16Z)
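For the Counterfactual Memorization entry above, the cited definition can be made concrete with a small simulation: a document's counterfactual memorization is the gap between a model's expected performance on that document when it is included in training and when it is omitted. The sketch below is a toy illustration with a hypothetical stand-in for the trained model, not the paper's experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
docs = [f"doc{i}" for i in range(10)]

def train_and_evaluate(train_docs, target):
    """Stand-in 'model': performs well on a document it was trained on,
    otherwise returns a noisy baseline score (purely illustrative)."""
    return 1.0 if target in train_docs else rng.uniform(0.0, 0.2)

def counterfactual_memorization(target, n_trials=200, subset_size=5):
    """mem(d) = E[score on d | d in training set] - E[score on d | d held out]."""
    with_doc, without_doc = [], []
    for _ in range(n_trials):
        subset = set(rng.choice(docs, size=subset_size, replace=False))
        with_doc.append(train_and_evaluate(subset | {target}, target))
        without_doc.append(train_and_evaluate(subset - {target}, target))
    return float(np.mean(with_doc) - np.mean(without_doc))

print(counterfactual_memorization("doc3"))
```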
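The SLADE entry above outlines a generic self-training loop: train a teacher on the labeled data, pseudo-label the unlabeled data, then train a student on both. SLADE itself learns deep feature embeddings for retrieval; in the sketch below, a linear classifier and a simple confidence threshold stand in for those components as illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: a small labeled set and a larger unlabeled set from the same distribution.
X_labeled = rng.normal(size=(100, 16))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_unlabeled = rng.normal(size=(1000, 16))

# 1) Train a teacher on the labeled data.
teacher = LogisticRegression().fit(X_labeled, y_labeled)

# 2) Generate pseudo labels for the unlabeled data, keeping only confident predictions.
confidence = teacher.predict_proba(X_unlabeled).max(axis=1)
confident = confidence > 0.8
pseudo_labels = teacher.predict(X_unlabeled[confident])

# 3) Train a student on labeled plus pseudo-labeled data; in SLADE the student
#    would be an embedding network rather than a classifier.
X_student = np.vstack([X_labeled, X_unlabeled[confident]])
y_student = np.concatenate([y_labeled, pseudo_labels])
student = LogisticRegression().fit(X_student, y_student)

print("student accuracy on the labeled set:", student.score(X_labeled, y_labeled))
```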