Knowledge Discovery using Unsupervised Cognition
- URL: http://arxiv.org/abs/2409.20064v1
- Date: Mon, 30 Sep 2024 08:07:29 GMT
- Title: Knowledge Discovery using Unsupervised Cognition
- Authors: Alfredo Ibias, Hector Antona, Guillem Ramirez-Miranda, Enric Guinovart
- Abstract summary: Unsupervised Cognition is a novel unsupervised learning algorithm that focuses on modelling the learned data.
This paper presents three techniques to perform knowledge discovery over an already trained Unsupervised Cognition model.
- Score: 2.6563873893593826
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Knowledge discovery is key to understanding and interpreting a dataset, as well as to finding the underlying relationships between its components. Unsupervised Cognition is a novel unsupervised learning algorithm that focuses on modelling the learned data. This paper presents three techniques to perform knowledge discovery over an already trained Unsupervised Cognition model. Specifically, we present a technique for pattern mining, a technique for feature selection based on the pattern mining technique, and a technique for dimensionality reduction based on the feature selection technique. The final goal is to distinguish between relevant and irrelevant features and use them to build a model from which to extract meaningful patterns. We evaluated our proposals with empirical experiments and found that they outperform the state of the art in knowledge discovery.
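The abstract describes a dependency chain: patterns are mined from the trained model, feature selection is built on those patterns, and dimensionality reduction is built on the selected features. The paper does not include code, so the following is only a minimal sketch of how such a pipeline could be wired together, assuming a hypothetical trained model that exposes the representatives it has learned; every name below (model_factory, representatives_, fit) is an invented stand-in, not the authors' actual API.

# Minimal sketch of the pattern mining -> feature selection -> dimensionality
# reduction pipeline described in the abstract. All model names and attributes
# are hypothetical stand-ins, not the authors' actual API.
import numpy as np

def mine_patterns(model):
    # Pattern mining: read out the representatives the trained model has learned.
    return np.asarray(model.representatives_)          # shape: (n_patterns, n_features)

def select_features(patterns, top_k):
    # Feature selection: keep the features on which the mined patterns differ most.
    spread = patterns.std(axis=0)                       # per-feature spread across patterns
    return np.argsort(spread)[::-1][:top_k]             # indices of the most informative features

def reduce_dimensionality(X, model_factory, top_k):
    # Dimensionality reduction: drop the irrelevant features and retrain on the rest.
    base_model = model_factory()
    base_model.fit(X)                                   # assumes a scikit-learn-style fit()
    keep = select_features(mine_patterns(base_model), top_k)
    reduced_model = model_factory()
    reduced_model.fit(X[:, keep])
    return X[:, keep], keep, reduced_model

The point of the sketch is only the dependency structure: each step consumes nothing but the output of the step before it, mirroring how the three techniques are stated to build on one another.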
Related papers
- Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning [51.0864247376786]
We introduce a Knowledge Graph Enhanced Generative Multi-modal model (KG-GMM) that builds an evolving knowledge graph throughout the learning process.
During testing, we propose a Knowledge Graph Augmented Inference method that locates specific categories by analyzing relationships within the generated text.
arXiv Detail & Related papers (2025-03-24T07:20:43Z)
- Unsupervised Cognition [2.5069344340760713]
We propose a primitive-based, unsupervised learning approach for decision-making inspired by a novel cognition framework.
This representation-centric approach models the input space constructively as a distributed hierarchical structure in an input-agnostic way.
We show how our proposal outperforms previous state-of-the-art unsupervised learning approaches on classification tasks.
arXiv Detail & Related papers (2024-09-27T10:50:49Z)
- Challenges with unsupervised LLM knowledge discovery [15.816138136030705]
We show that existing unsupervised methods on large language model (LLM) activations do not discover knowledge.
The idea behind unsupervised knowledge elicitation is that knowledge satisfies a consistency structure, which can be used to discover it.
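For context, the best-known objective of this kind is a contrast-consistent probe trained on paired activations of a statement and its negation, so that the two predicted probabilities sum to one. The sketch below illustrates that generic idea only; the probe architecture, hidden size, and variable names are assumptions, not the cited paper's setup.

# Sketch of a consistency-based probe over LLM activations (contrast-consistent
# search style). phi_pos / phi_neg are assumed precomputed activations for a
# statement and its negation; the hidden size 4096 is an arbitrary assumption.
import torch

def consistency_loss(probe, phi_pos, phi_neg):
    p_pos = torch.sigmoid(probe(phi_pos)).squeeze(-1)   # P(statement is true)
    p_neg = torch.sigmoid(probe(phi_neg)).squeeze(-1)   # P(negation is true)
    consistency = (p_pos - (1.0 - p_neg)) ** 2          # the two probabilities should sum to ~1
    confidence = torch.minimum(p_pos, p_neg) ** 2       # penalise the degenerate p = 0.5 solution
    return (consistency + confidence).mean()

probe = torch.nn.Linear(4096, 1)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
# per batch: optimizer.zero_grad(); consistency_loss(probe, phi_pos, phi_neg).backward(); optimizer.step()

The cited paper's finding is that objectives of this form can be satisfied by prominent features other than knowledge, which is why such probes fail to discover it.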
arXiv Detail & Related papers (2023-12-15T18:49:43Z)
- Interpretable Machine Learning for Discovery: Statistical Challenges & Opportunities [1.2891210250935146]
We discuss and review the field of interpretable machine learning.
We outline the types of discoveries that can be made using Interpretable Machine Learning.
We focus on the grand challenge of how to validate these discoveries in a data-driven manner.
arXiv Detail & Related papers (2023-08-02T23:57:31Z)
- Subspace Distillation for Continual Learning [27.22147868163214]
We propose a knowledge distillation technique that takes into account the manifold structure of a neural network in learning novel tasks.
We demonstrate that modeling with subspaces provides several intriguing properties, including robustness to noise.
Empirically, we observe that our proposed method outperforms various continual learning methods on several challenging datasets.
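As a rough illustration of the subspace idea (not the paper's exact loss), one can span each model's intermediate features with their top singular directions and penalise the distance between the resulting subspaces; the basis size and the Frobenius distance between projectors below are assumptions made for illustration.

# Rough sketch of comparing two models' feature subspaces via their top
# singular directions; the paper's actual distillation loss may differ.
import torch

def feature_subspace(features, k):
    # features: (batch, dim); columns of U span the dominant feature directions
    U, _, _ = torch.linalg.svd(features.T, full_matrices=False)
    return U[:, :k]                                     # (dim, k) orthonormal basis

def subspace_distillation_loss(old_features, new_features, k=8):
    U_old = feature_subspace(old_features, k)
    U_new = feature_subspace(new_features, k)
    P_old = U_old @ U_old.T                             # projector onto the old subspace
    P_new = U_new @ U_new.T                             # projector onto the new subspace
    return (P_old - P_new).pow(2).sum()                 # keeps the new subspace close to the old one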
arXiv Detail & Related papers (2023-07-31T05:59:09Z)
- Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised Domain Adaptation [70.85686267987744]
Unsupervised domain adaptation transfers knowledge from a labeled source domain to an unlabeled target domain.
We propose an interpretive model of unsupervised domain adaptation, as the first attempt to visually unveil the mystery of transferred knowledge.
Our method provides an intuitive explanation for the base model's predictions and unveils transfer knowledge by matching the image patches with the same semantics across both source and target domains.
arXiv Detail & Related papers (2023-03-04T03:02:12Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- A Closer Look at Knowledge Distillation with Features, Logits, and Gradients [81.39206923719455]
Knowledge distillation (KD) is a widely used strategy for transferring learned knowledge from one neural network model to another.
This work provides a new perspective to motivate a set of knowledge distillation strategies by approximating the classical KL-divergence criteria with different knowledge sources.
Our analysis indicates that logits are generally a more efficient knowledge source and suggests that having sufficient feature dimensions is crucial for the model design.
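The logit-based criterion referred to here is, in its standard form, a temperature-softened KL divergence between teacher and student output distributions; the snippet below sketches that generic formulation, not this paper's specific variant.

# Generic sketch of logit-based knowledge distillation: KL divergence between
# temperature-softened teacher and student distributions (standard formulation,
# not this paper's specific variant).
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature=4.0):
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # batchmean KL, scaled by T^2 to keep gradient magnitudes comparable
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2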
arXiv Detail & Related papers (2022-03-18T21:26:55Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- Exploratory Machine Learning with Unknown Unknowns [60.78953456742171]
We study a new problem setting in which there are unknown classes in the training data misperceived as other labels.
We propose exploratory machine learning, which examines and investigates training data by actively augmenting the feature space to discover potentially hidden classes.
arXiv Detail & Related papers (2020-02-05T02:06:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.