Active Learning of Ordinal Embeddings: A User Study on Football Data
- URL: http://arxiv.org/abs/2207.12710v1
- Date: Tue, 26 Jul 2022 07:55:23 GMT
- Title: Active Learning of Ordinal Embeddings: A User Study on Football Data
- Authors: Christoffer Loeffler, Kion Fallah, Stefano Fenu, Dario Zanca, Bjoern
Eskofier, Christopher John Rozell, Christopher Mutschler
- Abstract summary: Humans innately measure distance between instances in an unlabeled dataset using an unknown similarity function.
This work uses deep metric learning to learn these user-defined similarity functions from a few annotations for a large football trajectory dataset.
- Score: 4.856635699699126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans innately measure distance between instances in an unlabeled dataset
using an unknown similarity function. Distance metrics can only serve as a proxy
for similarity in information retrieval of similar instances. Learning a good
similarity function from human annotations improves the quality of retrievals.
This work uses deep metric learning to learn these user-defined similarity
functions from a few annotations for a large football trajectory dataset. We
adapt an entropy-based active learning method with recent work from triplet
mining to collect easy-to-answer but still informative annotations from human
participants and use them to train a deep convolutional network that
generalizes to unseen samples. Our user study shows that our approach improves
the quality of information retrieval compared to a previous deep metric
learning approach that relies on a Siamese network. Specifically, we shed light
on the strengths and weaknesses of passive sampling heuristics and active
learners alike by analyzing the participants' response efficacy. To this end,
we collect accuracy, algorithmic time complexity, the participants' fatigue and
time-to-response, qualitative self-assessment and statements, as well as the
effects of mixed-expertise annotators and their consistency on model
performance and transfer-learning.
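
The abstract combines triplet-based deep metric learning with entropy-based query selection. The following is a minimal sketch of that recipe, assuming a PyTorch setup; the encoder architecture, hyperparameters, and the probability model for triplet outcomes are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the training and query-selection recipe from the abstract:
# a convolutional encoder is fit to human triplet annotations ("the anchor is
# more similar to the positive than to the negative"), and candidate triplets
# are scored by predictive entropy for active learning. All names and
# hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class TrajectoryEncoder(nn.Module):
    """Hypothetical 1-D CNN mapping (x, y) trajectories to an embedding."""
    def __init__(self, in_channels: int = 2, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> (batch, 64, 1)
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, timesteps) -> (batch, embed_dim)
        return self.fc(self.conv(x).squeeze(-1))

encoder = TrajectoryEncoder()
triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(anchor, positive, negative):
    """One gradient step on a batch of annotated triplets, shape (B, 2, T)."""
    optimizer.zero_grad()
    loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

def triplet_entropy(anchor, cand_i, cand_j):
    """Entropy of the predicted triplet outcome; high entropy marks an
    informative query. The softmax-over-negative-distances model below is
    an assumption, not necessarily the paper's exact formulation."""
    with torch.no_grad():
        za, zi, zj = encoder(anchor), encoder(cand_i), encoder(cand_j)
        d_i = torch.norm(za - zi, dim=-1)
        d_j = torch.norm(za - zj, dim=-1)
        p = torch.softmax(torch.stack([-d_i, -d_j], dim=-1), dim=-1)
        return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)
```

An active-learning loop would score candidate triplets with `triplet_entropy`, present a batch of them (filtered, per the abstract, to stay easy to answer for humans) to annotators, and call `train_step` on the collected answers.
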
Related papers
- Reinforcement Learning from Passive Data via Latent Intentions [86.4969514480008]
We show that passive data can still be used to learn features that accelerate downstream RL.
Our approach learns from passive data by modeling intentions.
Our experiments demonstrate the ability to learn from many forms of passive data, including cross-embodiment video data and YouTube videos.
arXiv Detail & Related papers (2023-04-10T17:59:05Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs [59.96266198512243]
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
arXiv Detail & Related papers (2022-12-21T18:58:24Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift with accuracy boosts up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Combining Feature and Instance Attribution to Detect Artifacts [62.63504976810927]
We propose methods to facilitate identification of training data artifacts.
We show that this proposed training-feature attribution approach can be used to uncover artifacts in training data.
We execute a small user study to evaluate whether these methods are useful to NLP researchers in practice.
arXiv Detail & Related papers (2021-07-01T09:26:13Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Recognizing More Emotions with Less Data Using Self-supervised Transfer Learning [0.0]
We propose a novel transfer learning method for speech emotion recognition.
With as few as 125 examples per emotion class, we were able to reach a higher accuracy than a strong baseline trained on 8 times more data.
arXiv Detail & Related papers (2020-11-11T06:18:31Z)
- Human Trajectory Forecasting in Crowds: A Deep Learning Perspective [89.4600982169]
We present an in-depth analysis of existing deep learning-based methods for modelling social interactions.
We propose two knowledge-based data-driven methods to effectively capture these social interactions.
We develop TrajNet++, a large-scale interaction-centric benchmark and a significant yet missing component in the field of human trajectory forecasting.
arXiv Detail & Related papers (2020-07-07T17:19:56Z)
- On the Robustness of Active Learning [0.7340017786387767]
Active Learning is concerned with how to identify the most useful samples for a Machine Learning algorithm to be trained with.
We find that it is often applied without sufficient care or domain knowledge.
We propose the new "Sum of Squared Logits" method based on the Simpson diversity index and investigate the effect of using the confusion matrix for balancing in sample selection (see the sketch after this list).
arXiv Detail & Related papers (2020-06-18T09:07:23Z)
- Knowledge Guided Metric Learning for Few-Shot Text Classification [22.832467388279873]
Inspired by human intelligence, we propose to introduce external knowledge into few-shot learning to imitate human knowledge.
We demonstrate that our method outperforms the state-of-the-art few-shot text classification models.
arXiv Detail & Related papers (2020-04-04T10:56:26Z)
- Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep Character Recognition [2.320417845168326]
Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models.
The technique of pre-training on one task and then retraining on a new one is called transfer learning.
In this paper we analyse the effectiveness of using deep transfer learning for character recognition tasks.
arXiv Detail & Related papers (2020-01-02T14:18:25Z)
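
The "Sum of Squared Logits" entry above names a Simpson-index-based uncertainty score that is easy to state concretely. Since the summary does not spell out the formula, the scoring rule below is an assumption: sum the squared softmax outputs (the Simpson diversity index), which a uniform, i.e. maximally uncertain, prediction minimizes.

```python
# Hedged sketch of a Simpson-index acquisition score, loosely following the
# "Sum of Squared Logits" idea named in the active-learning entry above.
# The scoring rule is an assumption, not the paper's confirmed formulation.
import numpy as np

def simpson_score(logits: np.ndarray) -> np.ndarray:
    """logits: (n_samples, n_classes) -> one diversity score per sample."""
    z = logits - logits.max(axis=1, keepdims=True)        # stable softmax
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return (p ** 2).sum(axis=1)                           # Simpson index

def select_queries(logits: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k most uncertain samples (lowest Simpson index)."""
    return np.argsort(simpson_score(logits))[:k]
```

Under this reading, an active learner would query the `k` samples whose predicted class distribution is closest to uniform.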