"Forgetting" in Machine Learning and Beyond: A Survey
- URL: http://arxiv.org/abs/2405.20620v1
- Date: Fri, 31 May 2024 05:10:30 GMT
- Title: "Forgetting" in Machine Learning and Beyond: A Survey
- Authors: Alyssa Shuang Sha, Bernardo Pereira Nunes, Armin Haller
- Abstract summary: This survey focuses on the benefits of forgetting and its applications across various machine learning sub-fields.
The paper discusses current challenges, future directions, and ethical considerations regarding the integration of forgetting mechanisms into machine learning models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This survey investigates the multifaceted nature of forgetting in machine learning, drawing insights from neuroscientific research that posits forgetting as an adaptive function rather than a defect, enhancing the learning process and preventing overfitting. This survey focuses on the benefits of forgetting and its applications across various machine learning sub-fields that can help improve model performance and enhance data privacy. Moreover, the paper discusses current challenges, future directions, and ethical considerations regarding the integration of forgetting mechanisms into machine learning models.
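To make the idea of beneficial forgetting concrete, here is a minimal, illustrative sketch (not taken from the survey; the forgetting factor and synthetic data are hypothetical) of an estimator that exponentially discounts old observations, so it adapts to distribution shift and bounds the influence of any single past sample:

```python
import numpy as np

def forgetting_mean(stream, forgetting=0.99):
    """Exponentially weighted running mean.

    With forgetting < 1, old observations decay geometrically (they are
    gradually "forgotten"), so the estimate tracks the current distribution
    and no single past sample retains unbounded influence.
    With forgetting = 1, nothing is ever forgotten.
    """
    estimate = None
    for x in stream:
        x = np.asarray(x, dtype=float)
        estimate = x if estimate is None else forgetting * estimate + (1.0 - forgetting) * x
    return estimate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stale = rng.normal(loc=0.0, size=(5000, 2))   # old regime, mean near (0, 0)
    fresh = rng.normal(loc=3.0, size=(5000, 2))   # current regime, mean near (3, 3)
    print(forgetting_mean(np.vstack([stale, fresh]), forgetting=0.995))
    # Prints a vector close to (3, 3): the stale regime has been forgotten.
```

The same discounting idea underlies many of the forgetting mechanisms discussed in the survey, from regularization-style decay to continual-learning update rules.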
Related papers
- Machine Learning Innovations in CPR: A Comprehensive Survey on Enhanced Resuscitation Techniques [52.71395121577439]
This survey paper explores the transformative role of Machine Learning (ML) and Artificial Intelligence (AI) in Cardiopulmonary Resuscitation (CPR).
It highlights the impact of predictive modeling, AI-enhanced devices, and real-time data analysis in improving resuscitation outcomes.
The paper provides a comprehensive overview, classification, and critical analysis of current applications, challenges, and future directions in this emerging field.
arXiv Detail & Related papers (2024-11-03T18:01:50Z)
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism.
We have found that integrating forgetting mechanisms significantly enhances the models' performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- Application-Driven Innovation in Machine Learning [56.85396167616353]
We describe the paradigm of application-driven research in machine learning.
We show how this approach can productively synergize with methods-driven work.
Despite these benefits, we find that reviewing, hiring, and teaching practices in machine learning often hold back application-driven innovation.
arXiv Detail & Related papers (2024-03-26T04:59:27Z)
- Machine Unlearning: Solutions and Challenges [21.141664917477257]
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation.
To address these issues, machine unlearning has emerged as a critical technique for selectively removing the influence of specific training data points from trained models (a minimal retraining-based sketch appears after this list).
This paper provides a comprehensive taxonomy and analysis of the solutions in machine unlearning.
arXiv Detail & Related papers (2023-08-14T10:45:51Z)
- Machine Unlearning: A Survey [56.79152190680552]
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about specific samples must be removed from a trained model; this is known as machine unlearning.
This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality.
No study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
arXiv Detail & Related papers (2023-06-06T10:18:36Z)
- Machine Learning with a Reject Option: A survey [18.43771007525432]
This survey aims to provide an overview of machine learning with rejection.
We introduce the conditions leading to two types of rejection: ambiguity rejection and novelty rejection.
We review and categorize strategies to evaluate a model's predictive and rejective quality.
arXiv Detail & Related papers (2021-07-23T14:43:56Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Choice modelling in the age of machine learning -- discussion paper [0.27998963147546135]
Cross-pollination of machine learning models, techniques and practices could help overcome problems and limitations encountered in the current theory-driven paradigm.
Despite the potential benefits of using advances in machine learning to improve choice modelling practice, the field has been hesitant to embrace machine learning.
arXiv Detail & Related papers (2021-01-28T11:57:08Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
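As referenced in the machine unlearning entries above, the simplest exact way to remove specific training points' influence is to retrain on the retained data. Below is a minimal sketch of that retraining baseline, assuming scikit-learn is available; the synthetic dataset, model choice, and deletion request are hypothetical, and approximate unlearning methods aim to reach a comparable state more cheaply.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Original model trained on all data.
full_model = LogisticRegression().fit(X, y)

# "Forget" ten specific training points by retraining without them.
forget_idx = np.arange(10)                             # hypothetical deletion request
keep = np.setdiff1d(np.arange(len(X)), forget_idx)     # retained training rows
unlearned_model = LogisticRegression().fit(X[keep], y[keep])

# The retrained model contains no trace of the forgotten rows by construction.
print(full_model.predict_proba(X[:2]))
print(unlearned_model.predict_proba(X[:2]))
```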