Feature, Alignment, and Supervision in Category Learning: A Comparative Approach with Children and Neural Networks
- URL: http://arxiv.org/abs/2602.03124v1
- Date: Tue, 03 Feb 2026 05:31:06 GMT
- Title: Feature, Alignment, and Supervision in Category Learning: A Comparative Approach with Children and Neural Networks
- Authors: Fanxiao Wani Qiu, Oscar Leong
- Abstract summary: We compare children and convolutional neural networks (CNNs) in a few-shot semi-supervised category learning task. Children generalize rapidly from minimal labels but show strong feature-specific biases and sensitivity to alignment. CNNs show a different interaction profile: added supervision improves performance, but both alignment and feature structure moderate the impact additional supervision has on learning.
- Score: 4.681760167323748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how humans and machines learn from sparse data is central to cognitive science and machine learning. Using a species-fair design, we compare children and convolutional neural networks (CNNs) in a few-shot semi-supervised category learning task. Both learners are exposed to novel object categories under identical conditions. Learners receive mixtures of labeled and unlabeled exemplars while we vary supervision (1/3/6 labels), target feature (size, shape, pattern), and perceptual alignment (high/low). We find that children generalize rapidly from minimal labels but show strong feature-specific biases and sensitivity to alignment. CNNs show a different interaction profile: added supervision improves performance, but both alignment and feature structure moderate the impact additional supervision has on learning. These results show that human-model comparisons must be drawn under the right conditions, emphasizing interactions among supervision, feature structure, and alignment rather than overall accuracy.
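The design described in the abstract fully crosses three factors: supervision (1/3/6 labels), target feature (size/shape/pattern), and perceptual alignment (high/low). A minimal sketch of that condition grid, with function and variable names that are illustrative rather than taken from the paper:

```python
from itertools import product

# Illustrative encoding of the paper's factorial design; the factor
# levels come from the abstract, the names are assumptions.
SUPERVISION = [1, 3, 6]                     # number of labeled exemplars
FEATURES = ["size", "shape", "pattern"]     # target feature
ALIGNMENT = ["high", "low"]                 # perceptual alignment

def condition_grid():
    """Enumerate every (labels, feature, alignment) cell of the design."""
    return [
        {"labels": n, "feature": f, "alignment": a}
        for n, f, a in product(SUPERVISION, FEATURES, ALIGNMENT)
    ]

grid = condition_grid()
print(len(grid))  # 3 supervision levels x 3 features x 2 alignments = 18 cells
```

Enumerating the cells this way makes the paper's central point concrete: the comparison is drawn per cell (interactions among supervision, feature, and alignment), not on a single overall accuracy number.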
Related papers
- A Brain-like Synergistic Core in LLMs Drives Behaviour and Learning [50.68188138112555]
We show that large language models spontaneously develop synergistic cores. We find that areas in middle layers exhibit synergistic processing while early and late layers rely on redundancy. This convergence suggests that synergistic information processing is a fundamental property of intelligence.
arXiv Detail & Related papers (2026-01-11T10:48:35Z) - Comparing supervised learning dynamics: Deep neural networks match human data efficiency but show a generalisation lag [3.0333265803394993]
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification.
Here we report a detailed investigation of the learning dynamics in human observers and various classic and state-of-the-art DNNs.
Across the whole learning process we evaluate and compare how well learned representations can be generalized to previously unseen test data.
arXiv Detail & Related papers (2024-02-14T16:47:20Z) - Hierarchical Contrastive Learning Enhanced Heterogeneous Graph Neural Network [59.860534520941485]
Heterogeneous graph neural networks (HGNNs) are an emerging technique that has shown superior capacity for dealing with heterogeneous information networks (HINs).
Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms, showing great potential when no labels are available.
In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo.
arXiv Detail & Related papers (2023-04-24T16:17:21Z) - Is Self-Supervised Learning More Robust Than Supervised Learning? [29.129681691651637]
Self-supervised contrastive learning is a powerful tool to learn visual representation without labels.
We conduct a series of robustness tests to quantify the behavioral differences between contrastive learning and supervised learning.
Under pre-training corruptions, we find contrastive learning vulnerable to patch shuffling and pixel intensity change, yet less sensitive to dataset-level distribution change.
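The "patch shuffling" corruption mentioned above can be sketched as splitting an image into a grid of patches and permuting them; the implementation below is an illustrative assumption, not the paper's code:

```python
import numpy as np

def patch_shuffle(img, patch, rng):
    """Split a 2-D image into patch x patch tiles and permute the tiles.

    Pixel values are preserved; only their spatial arrangement changes,
    which is exactly what makes this corruption diagnostic of a model's
    reliance on global structure.
    """
    h, w = img.shape
    tiles = [img[i:i + patch, j:j + patch]
             for i in range(0, h, patch)
             for j in range(0, w, patch)]
    order = rng.permutation(len(tiles))
    out = np.empty_like(img)
    k = 0
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = tiles[order[k]]
            k += 1
    return out

rng = np.random.default_rng(0)
img = np.arange(16, dtype=float).reshape(4, 4)
shuffled = patch_shuffle(img, 2, rng)
print(sorted(shuffled.ravel()) == sorted(img.ravel()))  # True: same pixels, rearranged
```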
arXiv Detail & Related papers (2022-06-10T17:58:00Z) - Encoding Hierarchical Information in Neural Networks helps in Subpopulation Shift [8.01009207457926]
Deep neural networks have proven to be adept in image classification tasks, often surpassing humans in terms of accuracy.
In this work, we study the aforementioned problems through the lens of a novel conditional supervised training framework.
We show that learning in this structured hierarchical manner results in networks that are more robust against subpopulation shifts.
arXiv Detail & Related papers (2021-12-20T20:26:26Z) - Symbiotic Adversarial Learning for Attribute-based Person Search [86.7506832053208]
We present a symbiotic adversarial learning framework, called SAL. Two GANs sit at the base of the framework in a symbiotic learning scheme.
Specifically, two different types of generative adversarial networks learn collaboratively throughout the training process.
arXiv Detail & Related papers (2020-07-19T07:24:45Z) - Seeing eye-to-eye? A comparison of object recognition performance in humans and deep convolutional neural networks under image manipulation [0.0]
This study presents a behavioral comparison of visual core object recognition performance between humans and feedforward neural networks.
Analyses of accuracy revealed that humans not only outperform DCNNs in all conditions, but also display significantly greater robustness to shape and, most notably, color alterations.
arXiv Detail & Related papers (2020-07-13T10:26:30Z) - Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
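One way to picture a gradient-supervision objective like the one summarized above: for a linear scorer s(x) = w·x, the input gradient is w itself, so the auxiliary term can encourage the gradient to align with the difference vector between a counterfactual pair. This is a hedged sketch under that linear assumption, not the paper's exact objective:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def counterfactual_aux_loss(w, x_pos, x_neg):
    """1 - cos(input gradient, x_pos - x_neg).

    For the linear scorer assumed here the input gradient is just w,
    so the loss is zero exactly when the gradient points along the
    direction that separates the counterfactual pair.
    """
    return 1.0 - cosine(w, x_pos - x_neg)

w = np.array([1.0, 0.0])        # scorer weights (= input gradient)
x_pos = np.array([2.0, 0.0])    # example with one label
x_neg = np.array([0.0, 0.0])    # minimally-different example, other label
print(counterfactual_aux_loss(w, x_pos, x_neg))  # 0.0: perfectly aligned
```

In practice this term would be added to the standard classification loss; the sketch only isolates the alignment component.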
arXiv Detail & Related papers (2020-04-20T02:47:49Z) - A Systematic Evaluation: Fine-Grained CNN vs. Traditional CNN Classifiers [54.996358399108566]
We investigate the performance of landmark general CNN classifiers, which have presented top-notch results on large-scale classification datasets.
We compare them against state-of-the-art fine-grained classifiers.
We present an extensive evaluation on six datasets to determine whether the fine-grained classifiers are able to elevate the baseline in their experiments.
arXiv Detail & Related papers (2020-03-24T23:49:14Z) - The large learning rate phase of deep learning: the catapult mechanism [50.23041928811575]
We present a class of neural networks with solvable training dynamics.
We find good agreement between our model's predictions and training dynamics in realistic deep learning settings.
We believe our results shed light on characteristics of models trained at different learning rates.
arXiv Detail & Related papers (2020-03-04T17:52:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.