Impact Learning: A Learning Method from Features Impact and Competition
- URL: http://arxiv.org/abs/2211.02263v1
- Date: Fri, 4 Nov 2022 04:56:35 GMT
- Title: Impact Learning: A Learning Method from Features Impact and Competition
- Authors: Nusrat Jahan Prottasha, Saydul Akbar Murad, Abu Jafar Md Muzahid,
Masud Rana, Md Kowsher, Apurba Adhikary, Sujit Biswas, Anupam Kumar Bairagi
- Abstract summary: This paper introduces a new machine learning algorithm called impact learning.
Impact learning is a supervised learning algorithm that can be applied to both classification and regression problems.
It is trained from the impacts of the features, derived via the intrinsic rate of natural increase.
- Score: 1.3569491184708429
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning is the study of computer algorithms that can automatically
improve based on data and experience. Machine learning algorithms build a model
from sample data, called training data, to make predictions or judgments
without being explicitly programmed to do so. A variety of well-known machine
learning algorithms have been developed for use in the field of computer
science to analyze data. This paper introduces a new machine learning algorithm
called impact learning. Impact learning is a supervised learning algorithm that
can be applied to both classification and regression problems. It can
furthermore demonstrate its strength in analyzing competitive data. This
algorithm is distinctive in that it learns from the competitive situation, where
the competition arises from the effects of the independent features. It is
trained from the impacts of the features, derived via the intrinsic rate of
natural increase (RNI). We moreover demonstrate the advantage of impact learning
over conventional machine learning algorithms.
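The abstract describes the method only at a high level, so the following is a minimal sketch of the general idea rather than the authors' exact formulation: the target is modeled with a logistic-growth-style (RNI-inspired) response to the weighted feature impacts, and the parameters are fit by gradient descent. The model form, the learned carrying-capacity-like scale `K`, and all names are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: a logistic-growth-style regressor loosely inspired
# by the RNI/feature-impact description in the abstract, NOT the paper's exact model.
def fit_impact_style_regressor(X, y, lr=0.05, epochs=2000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)   # feature "impact" weights (assumed)
    b = 0.0                             # intercept
    K = float(y.max()) + 1e-6           # carrying-capacity-like scale (assumed)
    for _ in range(epochs):
        z = X @ w + b
        s = 1.0 / (1.0 + np.exp(-z))    # logistic growth response
        y_hat = K * s
        err = y_hat - y
        # Gradients of the mean squared error w.r.t. K, w, b
        dK = 2.0 * np.mean(err * s)
        dz = 2.0 * err * K * s * (1.0 - s) / n
        dw = X.T @ dz
        db = dz.sum()
        K -= lr * dK
        w -= lr * dw
        b -= lr * db
    return w, b, K

# Tiny usage example on synthetic data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 5.0 / (1.0 + np.exp(-(X @ np.array([1.0, -2.0, 0.5]) + 0.3)))
w, b, K = fit_impact_style_regressor(X, y)
print("learned K:", round(K, 3))
```

Classification could be handled analogously by thresholding the logistic output, but again this is only a sketch under the stated assumptions.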
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
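The summary is terse; as a generic illustration of point (ii), not the paper's Hamiltonian formulation, gradient-based learning can be viewed as the forward-Euler integration of the gradient flow d(theta)/dt = -grad L(theta), which needs no external ODE solver:

```python
import numpy as np

# Generic illustration: forward-Euler integration of the gradient flow
# d(theta)/dt = -grad L(theta) recovers plain gradient descent.
def grad_L(theta):
    # Gradient of a simple quadratic loss L(theta) = 0.5 * ||theta - target||^2
    target = np.array([1.0, -2.0])
    return theta - target

theta = np.zeros(2)
dt = 0.1                      # Euler step size plays the role of the learning rate
for _ in range(200):
    theta = theta - dt * grad_L(theta)
print(theta)                  # approaches [1.0, -2.0]
```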
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
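The chapter itself is a review; as one concrete, commonly used data-driven reduced-order modeling technique (a generic proper orthogonal decomposition sketch on synthetic snapshots, not an example taken from the chapter):

```python
import numpy as np

# Generic POD reduced-order modeling sketch on synthetic snapshot data
# (the state size and mode count below are arbitrary choices for illustration).
rng = np.random.default_rng(0)
n_state, n_snapshots, r = 500, 40, 5

# Synthetic snapshots that actually live near a low-dimensional subspace
basis = rng.normal(size=(n_state, r))
coeffs = rng.normal(size=(r, n_snapshots))
snapshots = basis @ coeffs + 0.01 * rng.normal(size=(n_state, n_snapshots))

mean = snapshots.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
modes = U[:, :r]                                  # dominant POD modes
reduced = modes.T @ (snapshots - mean)            # low-dimensional coordinates
reconstructed = mean + modes @ reduced            # lift back to the full state

rel_err = np.linalg.norm(reconstructed - snapshots) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {rel_err:.4f}")
```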
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- Compensation Learning [2.3526458707956643]
This study reveals another, previously overlooked strategy, namely compensating, which has also been widely used in machine learning.
Three concrete new learning algorithms are proposed for robust machine learning.
arXiv Detail & Related papers (2021-07-26T01:41:25Z)
- Learnability of Learning Performance and Its Application to Data Valuation [11.78594243870616]
In most machine learning (ML) tasks, evaluating learning performance on a given dataset requires intensive computation.
The ability to efficiently estimate learning performance may benefit a wide spectrum of applications, such as active learning, data quality management, and data valuation.
Recent empirical studies show that for many common ML models, one can accurately learn a parametric model that predicts learning performance for any given input dataset using a small number of samples.
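One common parametric form for such a performance predictor (an assumption here for illustration, not necessarily the model used in the paper) is a power-law learning curve, err(n) ≈ a·n^(−b) + c, fit on a few small training-set sizes and then extrapolated:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative power-law learning-curve fit; the measured errors below are
# synthetic stand-ins for validation errors obtained on small training subsets.
def power_law(n, a, b, c):
    return a * n ** (-b) + c

sizes = np.array([100, 200, 400, 800, 1600], dtype=float)
errors = 2.0 * sizes ** (-0.4) + 0.05          # pretend these were measured
errors += np.random.default_rng(0).normal(scale=0.002, size=sizes.size)

params, _ = curve_fit(power_law, sizes, errors, p0=(1.0, 0.5, 0.05))
a, b, c = params
print(f"predicted error at n=10000: {power_law(10000.0, a, b, c):.4f}")
```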
arXiv Detail & Related papers (2021-07-13T18:56:04Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) strategies are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
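As a minimal sketch of the uncertainty-based acquisition step that such studies build on (plain predictive-entropy sampling here; BALD additionally requires an ensemble or MC-dropout posterior, which is omitted):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Minimal uncertainty-based acquisition: train on a small labeled seed,
# then pick the unlabeled points with the highest predictive entropy.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
labeled, unlabeled = np.arange(50), np.arange(50, 1000)

model = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
probs = model.predict_proba(X[unlabeled])
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

k = 10
query = unlabeled[np.argsort(-entropy)[:k]]   # indices to send for labeling
print("next points to label:", query)
```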
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Intuitiveness in Active Teaching [7.8029610421817654]
We analyze the intuitiveness of certain algorithms when they are actively taught by users.
We offer a systematic method to judge the efficacy of human-machine interactions.
arXiv Detail & Related papers (2020-12-25T09:31:56Z)
- Ethical behavior in humans and machines -- Evaluating training data quality for beneficial machine learning [0.0]
This study describes new dimensions of data quality for supervised machine learning applications.
The specific objective of this study is to describe how training data can be selected according to ethical assessments of the behavior it originates from.
arXiv Detail & Related papers (2020-08-26T09:48:38Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
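The summary does not specify the attack; as a standard reference point (not the lower-complexity algorithm proposed in the paper), the fast gradient sign method (FGSM) perturbs an input along the sign of the loss gradient. For a logistic-regression model that gradient has a closed form, which keeps the sketch short:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# FGSM sketch on a linear model: the cross-entropy gradient w.r.t. the input x
# is (p - y) * w, so the attack is x + eps * sign((p - y) * w).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps=0.5):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))     # predicted P(class 1)
    grad_x = (p - label) * w                   # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

X_adv = np.array([fgsm(x, label) for x, label in zip(X, y)])
print("clean accuracy:", model.score(X, y))
print("adversarial accuracy:", model.score(X_adv, y))
```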
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Performance Analysis and Comparison of Machine and Deep Learning Algorithms for IoT Data Classification [0.0]
This paper evaluates the performance of 11 popular machine and deep learning algorithms on classification tasks using six IoT-related datasets.
Considering all performance metrics, Random Forests performed better than the other machine learning models, while among the deep learning models, ANN and CNN achieved the most notable results.
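The exact datasets and settings are in the paper; the following is a minimal sketch of this kind of comparison on a synthetic stand-in dataset (not one of the six IoT datasets), pitting a random forest against a small neural network:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Minimal comparison on a synthetic stand-in dataset (not the IoT data).
X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                               random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(model.score(X_te, y_te), 3))
```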
arXiv Detail & Related papers (2020-01-27T09:14:11Z)