Into the Unknown: Self-Learning Large Language Models
- URL: http://arxiv.org/abs/2402.09147v4
- Date: Tue, 12 Nov 2024 03:50:10 GMT
- Title: Into the Unknown: Self-Learning Large Language Models
- Authors: Teddy Ferdinan, Jan Kocoń, Przemysław Kazienko
- Abstract summary: We introduce a concept called Point in the Unknown (PiU) to identify atomic knowledge unknown to a model.
We develop evaluation metrics to gauge an LLM's self-learning capability.
- Abstract: We address the main problem of self-learning LLMs: the question of what to learn. We propose a self-learning LLM framework that enables an LLM to independently learn previously unknown knowledge through self-assessment of its own hallucinations. We introduce a concept called Point in the Unknown (PiU) to identify atomic knowledge unknown to a model, along with four methods for automatic PiU identification, facilitating the creation of a self-learning loop that focuses exclusively on the absorption of currently unknown knowledge into the model. Additionally, we develop evaluation metrics to gauge an LLM's self-learning capability. Our experiments revealed that LLMs with at least 3B parameters that have undergone some instruction training can perform self-learning well. We further demonstrated the effectiveness of self-learning by comparing the performance of a model that has undergone self-learning to a model that has not. Our self-learning concept allows more efficient LLM updates and opens new perspectives for LLM knowledge exchange.
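The loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes PiUs are flagged by disagreement among sampled answers (one plausible self-assessment signal for hallucination), and the function names (`detect_piu`, `self_learning_loop`, `sample_answers`, `acquire_knowledge`, `train`) are hypothetical stand-ins for the model and training machinery.

```python
from collections import Counter

def detect_piu(answers, agreement_threshold=0.5):
    """Flag a question as a Point in the Unknown (PiU) when the model's
    sampled answers disagree, i.e. it likely hallucinates on this point."""
    if not answers:
        return True
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / len(answers) < agreement_threshold

def self_learning_loop(questions, sample_answers, acquire_knowledge, train):
    """One pass of the loop: identify PiUs via answer disagreement,
    fetch external knowledge only for those, and fine-tune on it."""
    pius = [q for q in questions if detect_piu(sample_answers(q))]
    corpus = [acquire_knowledge(q) for q in pius]
    if corpus:
        train(corpus)  # absorb only currently unknown knowledge
    return pius
```

The key property the abstract emphasizes is visible here: training data is restricted to questions the model demonstrably cannot answer consistently, so updates target only unknown knowledge.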
Related papers
- Self-Cognition in Large Language Models: An Exploratory Study [77.47074736857726]
This paper performs a pioneering study to explore self-cognition in Large Language Models (LLMs).
We first construct a pool of self-cognition instruction prompts to evaluate where an LLM exhibits self-cognition.
We observe a positive correlation between model size, training data quality, and self-cognition level.
arXiv Detail & Related papers (2024-07-01T17:52:05Z)
- LLMs Could Autonomously Learn Without External Supervision [36.36147944680502]
Large Language Models (LLMs) have traditionally been tethered to human-annotated datasets and predefined training objectives.
This paper presents a transformative approach: Autonomous Learning for LLMs.
This method endows LLMs with the ability to self-educate through direct interaction with text, akin to a human reading and comprehending literature.
arXiv Detail & Related papers (2024-06-02T03:36:37Z)
- Rethinking Machine Unlearning for Large Language Models [85.92660644100582]
We explore machine unlearning in the domain of large language models (LLMs).
This initiative aims to eliminate undesirable data influence (e.g., sensitive or illegal information) and the associated model capabilities.
arXiv Detail & Related papers (2024-02-13T20:51:58Z)
- Democratizing Reasoning Ability: Tailored Learning from Large Language Model [97.4921006089966]
We propose a tailored learning approach to distill such reasoning ability to smaller LMs.
We exploit the potential of LLM as a reasoning teacher by building an interactive multi-round learning paradigm.
To exploit the reasoning potential of the smaller LM, we propose self-reflection learning to motivate the student to learn from self-made mistakes.
arXiv Detail & Related papers (2023-10-20T07:50:10Z)
- SELF: Self-Evolution with Language Feedback [68.6673019284853]
'SELF' (Self-Evolution with Language Feedback) is a novel approach to advance large language models.
It enables LLMs to self-improve through self-reflection, akin to human learning processes.
Our experiments in mathematics and general tasks demonstrate that SELF can enhance the capabilities of LLMs without human intervention.
arXiv Detail & Related papers (2023-10-01T00:52:24Z)
- Do Large Language Models Know What They Don't Know? [74.65014158544011]
Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.
Despite their vast knowledge, LLMs are still limited by the amount of information they can accommodate and comprehend.
This study aims to evaluate LLMs' self-knowledge by assessing their ability to identify unanswerable or unknowable questions.
arXiv Detail & Related papers (2023-05-29T15:30:13Z)
- Self-directed Machine Learning [86.3709575146414]
In education science, self-directed learning has been shown to be more effective than passive teacher-guided learning.
We introduce the principal concept of Self-directed Machine Learning (SDML) and propose a framework for SDML.
Our proposed SDML process benefits from self task selection, self data selection, self model selection, self optimization strategy selection, and self evaluation metric selection.
arXiv Detail & Related papers (2022-01-04T18:32:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.