Compensation Learning
- URL: http://arxiv.org/abs/2107.11921v1
- Date: Mon, 26 Jul 2021 01:41:25 GMT
- Title: Compensation Learning
- Authors: Rujing Yao and Mengyang Li and Ou Wu
- Abstract summary: This study reveals another, previously overlooked strategy, namely compensating, which has also been widely used in machine learning.
Three concrete new learning algorithms are proposed for robust machine learning.
- Score: 2.3526458707956643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The weighting strategy prevails in machine learning. For example, a
common approach in robust machine learning is to assign lower weights to
samples that are likely to be noisy or hard. This study reveals another,
previously overlooked strategy, namely compensating, which has also been
widely used in machine learning. Learning with compensating is called
compensation learning, and a systematic taxonomy is constructed for it in this
study. In our taxonomy, compensation learning is divided on the basis of the
compensation targets, inference manners, and granularity levels. Many existing
learning algorithms, including some classical ones, can be seen as special
cases of compensation learning or as partially leveraging compensating.
Furthermore, a family of new learning algorithms can be obtained by plugging
compensation learning into existing learning algorithms. Specifically, three
concrete new learning algorithms are proposed for robust machine learning.
Extensive experiments on text sentiment analysis, image classification, and
graph classification verify the effectiveness of the three new algorithms.
Compensation learning can also be used in various other learning scenarios,
such as imbalanced learning, clustering, and regression.
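To make the contrast concrete, here is a minimal PyTorch sketch of the two strategies as the abstract describes them: weighting re-scales a sample's loss, while compensating perturbs the model output (here, the logits) before the loss is computed. All names, shapes, and the zero-initialized compensation term are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def weighted_loss(logits, targets, sample_weights):
    # Weighting strategy: down-weight samples suspected to be noisy or hard.
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (sample_weights * per_sample).mean()

def compensated_loss(logits, targets, compensation):
    # Compensating strategy: add a (possibly learnable) correction term to
    # the logits instead of re-scaling the loss. The term could be class-level
    # (one row broadcast over the batch) or sample-level, matching the
    # granularity levels in the taxonomy.
    return F.cross_entropy(logits + compensation, targets)

# Toy usage (shapes and values are illustrative only).
logits = torch.randn(8, 5)                    # batch of 8, 5 classes
targets = torch.randint(0, 5, (8,))
weights = torch.rand(8)                       # e.g. from a noise estimator
comp = torch.zeros(8, 5, requires_grad=True)  # learnable logit compensation

loss_w = weighted_loss(logits, targets, weights)
loss_c = compensated_loss(logits, targets, comp)
```

On this reading, re-weighting leaves the quantity being scored untouched and changes only how much each sample counts, whereas compensating directly corrects the quantity the loss is computed on.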
Related papers
- RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints.
Many machine unlearning methods have been proposed that aim to 'erase' these datapoints from trained models.
We propose the RESTOR framework for machine unlearning, based on several evaluation dimensions.
arXiv Detail & Related papers (2024-10-31T20:54:35Z)
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; and (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- A Unified Generalization Analysis of Re-Weighting and Logit-Adjustment for Imbalanced Learning [129.63326990812234]
We propose a technique named data-dependent contraction to capture how modified losses handle different classes.
On top of this technique, a fine-grained generalization bound is established for imbalanced learning, which helps reveal the mystery of re-weighting and logit-adjustment.
arXiv Detail & Related papers (2023-10-07T09:15:08Z)
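As a reference point, here is a toy sketch of the two loss modifications analyzed in the entry above. The inverse-frequency weights and the `tau`-scaled log-prior shift follow common practice in the imbalanced-learning literature and are assumptions for the sketch, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def reweighted_ce(logits, targets, class_counts):
    # Re-weighting: scale each sample's loss inversely to its class frequency.
    weights = class_counts.sum() / (len(class_counts) * class_counts)
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights[targets] * per_sample).mean()

def logit_adjusted_ce(logits, targets, class_counts, tau=1.0):
    # Logit adjustment: shift logits by the scaled log class prior,
    # which penalizes over-predicting frequent (head) classes.
    log_prior = torch.log(class_counts / class_counts.sum())
    return F.cross_entropy(logits + tau * log_prior, targets)

class_counts = torch.tensor([900.0, 90.0, 10.0])  # a toy long-tailed setup
logits = torch.randn(16, 3)
targets = torch.randint(0, 3, (16,))
print(reweighted_ce(logits, targets, class_counts).item())
print(logit_adjusted_ce(logits, targets, class_counts).item())
```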
- Ticketed Learning-Unlearning Schemes [57.89421552780526]
We propose a new ticketed model for learning--unlearning.
We provide space-efficient ticketed learning--unlearning schemes for a broad family of concept classes.
arXiv Detail & Related papers (2023-06-27T18:54:40Z)
- Impact Learning: A Learning Method from Features Impact and Competition [1.3569491184708429]
This paper introduces a new machine learning algorithm called impact learning.
Impact learning is a supervised learning algorithm that can be applied to both classification and regression problems.
It is trained on the impacts of the features, derived from the intrinsic rate of natural increase.
arXiv Detail & Related papers (2022-11-04T04:56:35Z)
- Tree-Based Adaptive Model Learning [62.997667081978825]
We extend the Kearns-Vazirani learning algorithm to handle systems that change over time.
We present a new learning algorithm that can reuse and update previously learned behavior, implement it in the LearnLib library, and evaluate it on large examples.
arXiv Detail & Related papers (2022-08-31T21:24:22Z)
- Learning by Examples Based on Multi-level Optimization [12.317568257671427]
We propose a novel learning approach called Learning By Examples (LBE).
Our approach automatically retrieves a set of training examples that are similar to query examples and predicts labels for query examples by using class labels of the retrieved examples.
We conduct extensive experiments on various benchmarks where the results demonstrate the effectiveness of our method on both supervised and few-shot learning.
arXiv Detail & Related papers (2021-09-22T16:33:06Z)
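The retrieve-and-predict step described above can be illustrated with a plain nearest-neighbour stand-in. LBE itself learns the retrieval through multi-level optimization, so everything below (the embeddings, the choice of k, the majority vote) is an assumption made for the sketch.

```python
import numpy as np

def predict_by_examples(query, train_x, train_y, k=5):
    # Retrieve the k training examples closest to the query embedding...
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # ...and predict by majority vote over their class labels.
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(0)
train_x = rng.normal(size=(100, 16))    # toy embeddings
train_y = rng.integers(0, 3, size=100)  # toy labels
query = rng.normal(size=16)
print(predict_by_examples(query, train_x, train_y))
```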
- Curriculum Learning: A Survey [65.31516318260759]
Curriculum learning strategies have been successfully employed in all areas of machine learning.
We construct a taxonomy of curriculum learning approaches by hand, considering various classification criteria.
We build a hierarchical tree of curriculum learning methods using an agglomerative clustering algorithm.
arXiv Detail & Related papers (2021-01-25T20:08:32Z)
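The tree construction mentioned above can be sketched with SciPy's agglomerative clustering routines; the feature vectors standing in for curriculum learning methods are entirely hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

# Hypothetical feature vectors describing curriculum learning methods
# (e.g. encodings of the survey's classification criteria).
rng = np.random.default_rng(0)
method_features = rng.random((10, 4))

# Agglomerative clustering: repeatedly merge the two closest clusters,
# yielding a hierarchical (binary) tree over the methods.
Z = linkage(method_features, method="average")
root = to_tree(Z)
print(root.get_count())  # number of methods (leaves) under the root: 10
```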
- Learning by Ignoring, with Application to Domain Adaptation [10.426533624387305]
We propose a novel machine learning framework referred to as learning by ignoring (LBI).
Our framework automatically identifies pretraining data examples that have large domain shift from the target distribution by learning an ignoring variable for each example and excludes them from the pretraining process.
A gradient-based algorithm is developed to efficiently solve the three-level optimization problem in LBI.
arXiv Detail & Related papers (2020-12-28T15:33:41Z)
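A heavily simplified sketch of the ignoring-variable idea described above: one learnable scalar per pretraining example gates its loss contribution. The actual LBI framework solves a three-level optimization; collapsing it to a single weighted loss here is purely for illustration, and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def lbi_pretrain_loss(logits, targets, ignore_logits):
    # sigmoid(ignore_logits) lies in (0, 1); values near 0 effectively
    # exclude an example from the pretraining loss.
    keep = torch.sigmoid(ignore_logits)
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (keep * per_sample).mean()

ignore_logits = torch.zeros(8, requires_grad=True)  # one per example
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
loss = lbi_pretrain_loss(logits, targets, ignore_logits)
loss.backward()  # gradients flow to the ignoring variables as well
```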
- A Theory of Universal Learning [26.51949485387526]
We show that there are only three possible rates of universal learning.
We show that the learning curve of any given concept class decays at either an exponential, a linear, or an arbitrarily slow rate.
arXiv Detail & Related papers (2020-11-09T15:10:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.