What Machine Learning Tells Us About the Mathematical Structure of Concepts
- URL: http://arxiv.org/abs/2408.15507v1
- Date: Wed, 28 Aug 2024 03:30:22 GMT
- Title: What Machine Learning Tells Us About the Mathematical Structure of Concepts
- Authors: Jun Otsuka
- Abstract summary: The study highlights how each framework provides a distinct mathematical perspective for modeling concepts.
This work emphasizes the importance of interdisciplinary dialogue, aiming to enrich our understanding of the complex relationship between human cognition and artificial intelligence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper examines the connections among various approaches to understanding concepts in philosophy, cognitive science, and machine learning, with a particular focus on their mathematical nature. By categorizing these approaches into Abstractionism, the Similarity Approach, the Functional Approach, and the Invariance Approach, the study highlights how each framework provides a distinct mathematical perspective for modeling concepts. The synthesis of these approaches bridges philosophical theories and contemporary machine learning models, providing a comprehensive framework for future research. This work emphasizes the importance of interdisciplinary dialogue, aiming to enrich our understanding of the complex relationship between human cognition and artificial intelligence.
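To make the contrast between these mathematical perspectives concrete, here is a minimal toy sketch (our own illustration, not taken from the paper) of two of the four approaches: the Similarity Approach, which grades concept membership by distance to a prototype, and the Invariance Approach, which identifies a concept with a quantity preserved under a group of transformations.

```python
import numpy as np

# Toy illustration (not from the paper) of two of the four perspectives.

# Similarity Approach: a concept is a prototype plus a metric; membership
# is graded by distance to the prototype.
prototype = np.array([1.0, 0.0])  # hypothetical "bird" prototype in feature space

def similarity_membership(x, proto=prototype, scale=1.0):
    """Graded membership: closer to the prototype means more typical."""
    return np.exp(-np.linalg.norm(x - proto) / scale)

# Invariance Approach: a concept is what stays fixed under a group of
# transformations; here rotations preserve the vector norm, an invariant.
def rotate(x, theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ x

x = np.array([0.8, 0.2])
print(similarity_membership(x))                           # graded similarity score
print(np.linalg.norm(x), np.linalg.norm(rotate(x, 1.2)))  # equal: norm is invariant
```

The point of the contrast is that the same concept can be modeled either as a graded region of a metric space or as an equivalence class under a transformation group.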
Related papers
- Discovering Conceptual Knowledge with Analytic Ontology Templates for Articulated Objects [42.9186628100765]
We aim to endow machine intelligence with an analogous capability by operating at the conceptual level.
The AOT-driven approach yields benefits from three key perspectives.
arXiv Detail & Related papers (2024-09-18T04:53:38Z)
- A Survey on Compositional Learning of AI Models: Theoretical and Experimental Practices [15.92779896185647]
Compositional learning is crucial for human cognition, especially in human language comprehension and visual perception.
Despite its integral role in intelligence, there is a lack of systematic theoretical and experimental research methodologies.
This paper surveys the literature on compositional learning of AI models and the connections made to cognitive studies.
arXiv Detail & Related papers (2024-06-13T03:46:21Z)
- Categorical semiotics: Foundations for Knowledge Integration [0.0]
We tackle the challenging task of developing a comprehensive framework for defining and analyzing deep learning architectures.
Our methodology employs graphical structures that resemble Ehresmann's sketches, interpreted within a universe of fuzzy sets.
This approach offers a unified theory that elegantly encompasses both deterministic and non-deterministic neural network designs.
arXiv Detail & Related papers (2024-04-01T23:19:01Z)
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Rejecting Cognitivism: Computational Phenomenology for Deep Learning [5.070542698701158]
We propose a non-representationalist framework for deep learning relying on a novel method: computational phenomenology.
We reject the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities.
arXiv Detail & Related papers (2023-02-16T20:05:06Z)
- Foundations and Recent Trends in Multimodal Machine Learning: Principles, Challenges, and Open Questions [68.6358773622615]
This paper provides an overview of the computational and theoretical foundations of multimodal machine learning.
We propose a taxonomy of 6 core technical challenges: representation, alignment, reasoning, generation, transference, and quantification.
Recent technical achievements will be presented through the lens of this taxonomy, allowing researchers to understand the similarities and differences across new approaches.
arXiv Detail & Related papers (2022-09-07T19:21:19Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
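The lattice-theoretic formulation of conceptual spaces mentioned in the entry above can be made concrete with formal concept analysis, where a formal concept is a pair (extent, intent) closed under a Galois connection. The following is a minimal sketch under assumed toy data; the objects, attributes, and helper names are our own illustration, not the report's code.

```python
# Hypothetical sketch of a concept lattice in the spirit of formal concept
# analysis: a formal concept pairs a set of objects (extent) with the set of
# attributes they all share (intent), each determining the other.
context = {            # assumed toy object-attribute incidence relation
    "sparrow": {"flies", "feathered"},
    "penguin": {"feathered", "swims"},
    "bat":     {"flies"},
}

def intent(objs):
    """Attributes shared by all objects in objs."""
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set()

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

attrs = intent({"sparrow", "penguin"})  # -> {"feathered"}
print(attrs, extent(attrs))             # a formal concept: the pair is closed
```

Ordering such concepts by inclusion of extents yields the concept lattice; the report's contribution is to induce the underlying attributes from raw data via representation learning rather than hand-specifying them as done here.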
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
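A rough sketch of the general idea of classifying along human-interpretable concept dimensions, as in the entry above; the feature slices, prototypes, and scoring rule here are hypothetical illustrations, not COMET's actual implementation.

```python
import numpy as np

# Hypothetical illustration: score a query by summing per-concept distances
# to class prototypes, where each "concept" is a slice of the feature vector.
rng = np.random.default_rng(0)
concepts = {"wing": slice(0, 2), "beak": slice(2, 4)}   # assumed concept slices
prototypes = {"sparrow": rng.normal(size=4), "penguin": rng.normal(size=4)}

def score(query, label):
    """Lower summed per-concept distance means a better match."""
    proto = prototypes[label]
    return sum(np.linalg.norm(query[s] - proto[s]) for s in concepts.values())

query = rng.normal(size=4)
print(min(prototypes, key=lambda c: score(query, c)))   # predicted class
```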
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge bases to guide the learning process of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
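The knowledge-guided learning described in the last entry is often realized by adding a rule-violation penalty to the usual data-fit loss. Below is a minimal sketch of that general pattern; the rule, function names, and weighting are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical sketch of knowledge-guided learning: the total loss mixes a
# data-fit term with a penalty for violating a symbolic rule.
def data_loss(pred, target):
    """Standard data-driven term: mean squared error."""
    return float(np.mean((pred - target) ** 2))

def rule_penalty(p_cat, p_animal):
    """Assumed knowledge-base rule 'cat implies animal': penalize
    predictions where P(cat) exceeds P(animal)."""
    return float(np.maximum(p_cat - p_animal, 0.0).mean())

def total_loss(pred, target, p_cat, p_animal, lam=0.5):
    """lam trades off data fit against consistency with the knowledge base."""
    return data_loss(pred, target) + lam * rule_penalty(p_cat, p_animal)

pred, target = np.array([0.7, 0.2]), np.array([1.0, 0.0])
print(total_loss(pred, target, np.array([0.9]), np.array([0.6])))  # 0.215
```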