Learnware: Small Models Do Big
- URL: http://arxiv.org/abs/2210.03647v3
- Date: Mon, 30 Oct 2023 14:20:47 GMT
- Title: Learnware: Small Models Do Big
- Authors: Zhi-Hua Zhou, Zhi-Hao Tan
- Abstract summary: The prevailing big model paradigm, which has achieved impressive results in natural language processing and computer vision applications, has not yet resolved these issues, while becoming a serious source of carbon emissions.
This article offers an overview of the learnware paradigm, which aims to free users from building machine learning models from scratch, in the hope of reusing small models to do things even beyond their original purposes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There are complaints about current machine learning techniques,
such as the requirement for huge amounts of training data and proficient
training skills, the difficulty of continual learning, the risk of
catastrophic forgetting, the leakage of private or proprietary data, etc.
Most research efforts have focused on each of these issues separately,
paying less attention to the fact that in practice most of them are
entangled. The prevailing big model paradigm, which has achieved impressive
results in natural language processing and computer vision applications,
has not yet resolved these issues, while becoming a serious source of
carbon emissions. This article offers an overview of the learnware
paradigm, which aims to free users from building machine learning models
from scratch, in the hope of reusing small models to do things even beyond
their original purposes. The key ingredient is the specification, which
enables a trained model to be adequately identified and reused according to
the requirements of future users who know nothing about the model in
advance.
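The abstract treats the specification abstractly; in the learnware literature it is often instantiated as a reduced kernel mean embedding (RKME) of a model's training data, so a market can match models to a new user task by comparing distributions without ever exchanging raw data. Below is a minimal sketch under that RKME assumption; the uniform reduced-set weights and the function names (`identify_learnware`, `mmd2`) are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, S, gamma=1.0):
    """Squared MMD between user data X and a specification's reduced set S."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2.0 * rbf_kernel(X, S, gamma).mean()
            + rbf_kernel(S, S, gamma).mean())

def identify_learnware(user_X, specs, gamma=1.0):
    """Pick the model whose specification best matches the user's task data.

    `specs` maps model ids to small "reduced sets" that stand in for each
    model's training distribution; raw training data is never exchanged.
    """
    return min(specs, key=lambda k: mmd2(user_X, specs[k], gamma))
```

With a sketch like this, a user holding only a few unlabeled task examples can rank candidate specifications locally and fetch just the best-matching model from the market.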
Related papers
- RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints.
Many machine unlearning methods have been proposed that aim to 'erase' these datapoints from trained models.
We propose the RESTOR framework for machine unlearning, which characterizes unlearning along several dimensions.
arXiv Detail & Related papers (2024-10-31T20:54:35Z) - Learning-based Models for Vulnerability Detection: An Extensive Study [3.1317409221921144]
We extensively investigate two types of state-of-the-art learning-based approaches.
We experimentally demonstrate the superiority of sequence-based models and the limited abilities of graph-based models.
arXiv Detail & Related papers (2024-08-14T13:01:30Z) - Beimingwu: A Learnware Dock System [42.54363998206648]
This paper describes Beimingwu, the first open-source learnware dock system providing foundational support for future research on the learnware paradigm.
The system significantly streamlines the model development for new user tasks, thanks to its integrated architecture and engine design.
Notably, this is possible even for users with limited data and minimal expertise in machine learning, without compromising the raw data's security.
arXiv Detail & Related papers (2024-01-24T09:27:51Z) - Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z) - Synthetic Model Combination: An Instance-wise Approach to Unsupervised
Ensemble Learning [92.89846887298852]
Consider making a prediction over new test data without any opportunity to learn from a training set of labelled data.
Suppose access is given to a set of expert models and their predictions, alongside some limited information about the datasets used to train them.
arXiv Detail & Related papers (2022-10-11T10:20:31Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z) - Knowledge Augmented Machine Learning with Applications in Autonomous
Driving: A Survey [37.84106999449108]
This work provides an overview of existing techniques and methods that combine data-driven models with existing knowledge.
The identified approaches are structured according to the categories knowledge integration, extraction and conformity.
In particular, we address the application of the presented methods in the field of autonomous driving.
arXiv Detail & Related papers (2022-05-10T07:25:32Z) - Machine Unlearning of Features and Labels [72.81914952849334]
We propose a first approach for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters (a minimal sketch follows this list).
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
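The closed-form unlearning update mentioned in the last entry can be made concrete for a simple model. The sketch below is a hedged illustration, not the paper's implementation: it uses the classical influence-function (Newton-style) update to approximately remove one training point from ridge regression; the setup and function names are assumptions for illustration.

```python
import numpy as np

def fit_ridge(X, y, lam=0.1):
    """Fit ridge regression; return parameters and the objective's Hessian."""
    n, d = X.shape
    H = X.T @ X / n + lam * np.eye(d)        # Hessian of (1/n)*sq. loss + ridge
    theta = np.linalg.solve(H, X.T @ y / n)  # closed-form minimizer
    return theta, H

def unlearn_point(theta, H, x_rm, y_rm, n):
    """Approximately remove one training point via an influence-function update."""
    grad = x_rm * (x_rm @ theta - y_rm)      # gradient of sq. loss at that point
    return theta + np.linalg.solve(H, grad) / n

# Toy check: the closed-form update should track full retraining.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
theta, H = fit_ridge(X, y)
theta_unlearned = unlearn_point(theta, H, X[0], y[0], len(X))
theta_retrained, _ = fit_ridge(X[1:], y[1:])
print(np.linalg.norm(theta_unlearned - theta_retrained))  # small residual
```

For a strongly convex objective such as ridge regression, the residual against full retraining shrinks as the training set grows, which is what makes such closed-form updates attractive for unlearning.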