Subjective Learning for Open-Ended Data
- URL: http://arxiv.org/abs/2108.12113v1
- Date: Fri, 27 Aug 2021 04:18:45 GMT
- Title: Subjective Learning for Open-Ended Data
- Authors: Tianren Zhang, Yizhou Jiang, Xin Su, Shangqi Guo, Feng Chen
- Abstract summary: We present a novel supervised learning paradigm for learning from open-ended data.
Open-ended data inherently requires multiple single-valued deterministic mapping functions to capture all of its input-output relations.
We show that Open-ended Supervised Learning achieves human-like task cognition without task-level supervision.
- Score: 12.363642151877688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional machine learning methods typically assume that data is split
according to tasks, and the data in each task can be modeled by a single target
function. However, this assumption is invalid in open-ended environments where
no manual task definition is available. In this paper, we present a novel
supervised learning paradigm of learning from open-ended data. Open-ended data
inherently requires multiple single-valued deterministic mapping functions to
capture all its input-output relations, exhibiting an essential structural
difference from conventional supervised data. We formally characterize this
structural property with a novel concept, termed mapping rank, and show that
open-ended data poses a fundamental difficulty for conventional supervised
learning, since different data samples may conflict with each other if the
mapping rank of the data is larger than one. To address this issue, we devise an
Open-ended Supervised Learning (OSL) framework, whose key innovation is a
subjective function that automatically allocates the data among multiple
candidate models to resolve such conflicts, thereby developing a natural cognition
hierarchy. We demonstrate the efficacy of OSL both theoretically and
empirically, and show that OSL achieves human-like task cognition without
task-level supervision.
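To make the mapping-rank notion concrete: if a dataset contains both (x, 2x) and (x, -x) pairs over the same inputs, no single function can fit every sample, so the mapping rank is at least two. The sketch below is a minimal, hypothetical illustration of the allocation idea, not the authors' OSL implementation: a hard-assignment loop (in the spirit of EM) that routes each sample to whichever candidate model currently explains it best, then refits each model on its own share. The linear least-squares setup and all names are assumptions made for illustration.

```python
# Minimal sketch of subjective allocation over conflicting data.
# Hypothetical code, not the paper's OSL implementation.
import numpy as np

rng = np.random.default_rng(0)

# Open-ended data: the same inputs carry labels from two different
# underlying functions (y = 2x and y = -x), so its mapping rank is 2.
x = rng.uniform(-1.0, 1.0, size=200)
y = np.where(rng.random(200) < 0.5, 2.0 * x, -1.0 * x)

# Two candidate linear models y = w * x, randomly initialized.
w = rng.normal(size=2)

for _ in range(50):
    # "Subjective" step: allocate each sample to the candidate model
    # that currently explains it best (lowest squared error).
    errors = (w[:, None] * x[None, :] - y[None, :]) ** 2  # shape (2, N)
    assign = errors.argmin(axis=0)
    # Refit each model on its allocated samples via least squares.
    for k in range(2):
        mask = assign == k
        if mask.any():
            w[k] = (x[mask] @ y[mask]) / (x[mask] @ x[mask])

print(w)  # expected to approach {2.0, -1.0} in some order
```

With the allocation step in place, the two models separate the conflicting relations; a single model trained on the pooled data would instead average them and fit neither.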
Related papers
- An Information Criterion for Controlled Disentanglement of Multimodal Data [39.601584166020274]
Multimodal representation learning seeks to relate and decompose information inherent in multiple modalities.
Disentangled Self-Supervised Learning (DisentangledSSL) is a novel self-supervised approach for learning disentangled representations.
arXiv Detail & Related papers (2024-10-31T14:57:31Z)
- Continual Learning for Multimodal Data Fusion of a Soft Gripper [1.0589208420411014]
A model trained on one data modality often fails when tested with a different modality.
We introduce a continual learning algorithm capable of incrementally learning different data modalities.
We evaluate the algorithm's effectiveness on a challenging custom multimodal dataset.
arXiv Detail & Related papers (2024-09-20T09:53:27Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can succeed on classification tasks with little or even non-overlapping annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Homological Convolutional Neural Networks [4.615338063719135]
We propose a novel deep learning architecture that exploits the data structural organization through topologically constrained network representations.
We test our model on 18 benchmark datasets against 5 classic machine learning and 3 deep learning models.
arXiv Detail & Related papers (2023-08-26T08:48:51Z)
- On the Joint Interaction of Models, Data, and Features [82.60073661644435]
We introduce a new tool, the interaction tensor, for empirically analyzing the interaction between data and model through features.
Based on these observations, we propose a conceptual framework for feature learning.
Under this framework, the expected accuracy for a single hypothesis and agreement for a pair of hypotheses can both be derived in closed-form.
arXiv Detail & Related papers (2023-06-07T21:35:26Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.