Latent Cognizance: What Machine Really Learns
- URL: http://arxiv.org/abs/2110.15548v1
- Date: Fri, 29 Oct 2021 05:26:38 GMT
- Title: Latent Cognizance: What Machine Really Learns
- Authors: Pisit Nakjai and Jiradej Ponsawat and Tatpong Katanyukul
- Abstract summary: Recent research has discovered Latent Cognizance -- an insight into a recognition mechanism based on a new probabilistic interpretation.
This article investigates the new interpretation under a traceable context.
Our findings support the rationale on which LC is based and reveal a hidden mechanism underlying the learned classification inference.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite overwhelming achievements in recognition accuracy, extending an
open-set capability -- the ability to identify when a query is out of scope --
remains greatly challenging for scalable machine learning inference. Recent
research has discovered Latent Cognizance (LC) -- an insight into a recognition
mechanism based on a new probabilistic interpretation, Bayes' theorem, and an
analysis of the internal structure of a commonly used recognition inference
structure. The new interpretation emphasizes a latent assumption about an
overlooked probabilistic condition on a learned inference model. The viability of
LC has been shown on a sign language recognition task, but its potential and
implications reach far beyond that specific domain and could move object
recognition toward scalable open-set recognition. However, LC's new
probabilistic interpretation has not been directly investigated. This article
investigates the new interpretation under a traceable context. Our findings
support the rationale on which LC is based and reveal a hidden mechanism
underlying the learned classification inference. The ramifications of these
findings could lead to a simple yet effective solution to open-set
recognition.
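The abstract does not spell out LC's formulas, but its Bayes-theorem framing (together with the softmax-centred discussion in the Recognition Awareness entry below) suggests reading the softmax output as P(y | x, k), the class probability conditioned on the input x actually belonging to one of the known classes k, so that P(y, k | x) = P(y | x, k) P(k | x). The snippet below is a minimal, illustrative sketch of how such a reinterpretation could support open-set rejection; the use of the largest unnormalized exponentiated logit as an evidence score and the fixed threshold are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    # Standard closed-set posterior.  Under the LC framing this is read as
    # P(y | x, k): class probability conditioned on x belonging to a known class.
    z = np.exp(logits - logits.max())  # shift for numerical stability
    return z / z.sum()

def open_set_predict(logits, threshold):
    # Illustrative open-set decision rule (a sketch, not the paper's exact rule).
    # Assumption: the largest unnormalized exponentiated logit stands in for the
    # evidence P(y, k | x) that LC reasons about; softmax alone cannot reject,
    # because it always sums to 1 over the known labels.
    evidence = np.exp(logits).max()
    if evidence < threshold:
        return "unknown"                      # query judged out of scope
    return int(np.argmax(softmax(logits)))    # otherwise the usual closed-set label

# A confident in-set input versus a low-evidence (likely foreign) input.
print(open_set_predict(np.array([8.0, 1.0, 0.5]), threshold=5.0))   # -> 0
print(open_set_predict(np.array([0.2, 0.1, 0.1]), threshold=5.0))   # -> unknown
```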
Related papers
- Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning [54.69189620971405]
We provide a unified framework, termed Identifiable Exchangeable Mechanisms (IEM), for representation and structure learning.
IEM provides new insights that let us relax the necessary conditions for causal structure identification in exchangeable non-i.i.d. data.
We also demonstrate the existence of a duality condition in identifiable representation learning, leading to new identifiability results.
arXiv Detail & Related papers (2024-06-20T13:30:25Z)
- Translating Expert Intuition into Quantifiable Features: Encode Investigator Domain Knowledge via LLM for Enhanced Predictive Analytics [2.330270848695646]
This paper explores the potential of Large Language Models to bridge the gap by systematically converting investigator-derived insights into quantifiable, actionable features.
We present a framework that leverages LLMs' natural language understanding capabilities to encode these investigator-identified red flags into a structured feature set that can be readily integrated into existing predictive models.
The results indicate significant improvements in risk assessment and decision-making accuracy, highlighting the value of blending human experiential knowledge with advanced machine learning techniques.
arXiv Detail & Related papers (2024-05-11T13:23:43Z)
- Know Yourself Better: Diverse Discriminative Feature Learning Improves Open Set Recognition [1.386950208583845]
We conduct an analysis of open set recognition (OSR) methods, focusing on the aspect of feature diversity.
Our research reveals a significant correlation between learning diverse discriminative features and enhancing OSR performance.
We propose a novel OSR approach that leverages the advantages of feature diversity.
arXiv Detail & Related papers (2024-04-16T08:08:47Z)
- Is CLIP the main roadblock for fine-grained open-world perception? [7.190567053576658]
Recent studies have highlighted limitations in fine-grained recognition capabilities in open-vocabulary settings.
We show that the lack of fine-grained understanding is caused by the poor separability of object characteristics in the CLIP latent space.
Our experiments show that simple CLIP latent-space re-projections help separate fine-grained concepts.
arXiv Detail & Related papers (2024-04-04T15:47:30Z)
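The entry above attributes the fine-grained gap to poor separability in the CLIP latent space and reports that simple latent-space re-projections help. The sketch below only illustrates that general idea on synthetic stand-in embeddings; the least-squares linear re-projection and the separability measure are assumptions made for illustration, not the re-projection proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder for precomputed CLIP image embeddings (synthetic stand-in data):
# most dimensions carry large nuisance variation, while the fine-grained class
# difference lives in one low-variance direction, so the two classes are poorly
# separated in the raw latent space.
n, d = 200, 32
nuisance = rng.normal(scale=5.0, size=(2 * n, d - 1))
signal = np.concatenate([rng.normal(0.0, 0.1, n), rng.normal(0.5, 0.1, n)])
X = np.hstack([nuisance, signal[:, None]])
y = np.array([0] * n + [1] * n)

# One illustrative "re-projection": a linear map fit by least squares onto
# one-hot class targets (not the method used in the paper).
Y = np.eye(2)[y]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
Z = X @ W                                   # re-projected embeddings

def separability(feats, labels):
    # Ratio of between-class mean distance to the summed within-class spread.
    m0, m1 = feats[labels == 0].mean(0), feats[labels == 1].mean(0)
    return np.linalg.norm(m0 - m1) / (feats[labels == 0].std() + feats[labels == 1].std())

print("raw latent space   :", round(separability(X, y), 3))
print("re-projected space :", round(separability(Z, y), 3))  # clearly larger
```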
- Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs [58.620269228776294]
We propose a task-agnostic framework for resolving ambiguity by asking users clarifying questions.
We evaluate systems across three NLP applications: question answering, machine translation and natural language inference.
We find that intent-sim is robust, demonstrating improvements across a wide range of NLP tasks and LMs.
arXiv Detail & Related papers (2023-11-16T00:18:50Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
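The entry above builds a common-rationale detector from GradCAM induced by the SSL objective. The paper's exact construction is not reproduced here; as a reference point, the sketch below shows a minimal GradCAM computation for an arbitrary scalar objective on a toy convolutional backbone, where the network and the choice of objective are placeholders.

```python
import torch
import torch.nn as nn

# Toy convolutional feature extractor standing in for an SSL backbone (illustrative).
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
head = nn.Linear(16, 1)  # placeholder scalar objective (stand-in for an SSL score)

x = torch.randn(1, 3, 32, 32)
feats = backbone(x)                       # (1, 16, 32, 32) feature maps
score = head(feats.mean(dim=(2, 3)))      # scalar objective derived from the features

# GradCAM: channel weights are the spatially averaged gradients of the objective
# with respect to the feature maps; the map is the ReLU of the weighted sum.
grads, = torch.autograd.grad(score.sum(), feats)
weights = grads.mean(dim=(2, 3), keepdim=True)              # (1, 16, 1, 1)
cam = torch.relu((weights * feats).sum(dim=1)).squeeze(0)   # (32, 32) rationale map
cam = cam / (cam.max() + 1e-8)                              # normalize to [0, 1]
print(cam.shape)  # highlighted regions would serve as candidate "rationales"
```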
- The Familiarity Hypothesis: Explaining the Behavior of Deep Open Set Methods [86.39044549664189]
Anomaly detection algorithms for feature-vector data identify anomalies as outliers, but outlier detection has not worked well in deep learning.
This paper proposes the Familiarity Hypothesis that these methods succeed because they are detecting the absence of familiar learned features rather than the presence of novelty.
The paper concludes with a discussion of whether familiarity detection is an inevitable consequence of representation learning.
arXiv Detail & Related papers (2022-03-04T18:32:58Z)
- Properties from Mechanisms: An Equivariance Perspective on Identifiable Representation Learning [79.4957965474334]
A key goal of unsupervised representation learning is "inverting" a data-generating process to recover its latent properties.
This paper asks, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?"
We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms.
arXiv Detail & Related papers (2021-10-29T14:04:08Z)
- Recognition Awareness: An Application of Latent Cognizance to Open-Set Recognition [0.0]
The softmax mechanism forces a model to predict an object class from a set of pre-defined labels.
This characteristic contributes to its efficacy in classification, but poses a risk of nonsensical predictions in object recognition.
Open-Set Recognition is intended to address the issue of identifying a foreign object in object recognition.
arXiv Detail & Related papers (2021-08-27T04:41:41Z)
- Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z)
- Towards falsifiable interpretability research [7.360807642941714]
We argue that interpretability research suffers from an over-reliance on intuition-based approaches.
We examine two popular classes of interpretability methods: saliency and single-neuron-based approaches.
We propose a strategy to address these impediments in the form of a framework for strongly falsifiable interpretability research.
arXiv Detail & Related papers (2020-10-22T22:03:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.