Few-shot Learning for Spatial Regression
- URL: http://arxiv.org/abs/2010.04360v1
- Date: Fri, 9 Oct 2020 04:05:01 GMT
- Title: Few-shot Learning for Spatial Regression
- Authors: Tomoharu Iwata, Yusuke Tanaka
- Abstract summary: We propose a few-shot learning method for spatial regression.
Our model is trained using spatial datasets on various attributes in various regions.
In our experiments, we demonstrate that the proposed method achieves better predictive performance than existing meta-learning methods.
- Score: 31.022722103424684
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a few-shot learning method for spatial regression. Although
Gaussian processes (GPs) have been successfully used for spatial regression,
they require many observations in the target task to achieve a high predictive
performance. Our model is trained using spatial datasets on various attributes
in various regions, and predicts values on unseen attributes in unseen regions
given a few observed data. With our model, a task representation is inferred
from given small data using a neural network. Then, spatial values are
predicted by neural networks with a GP framework, in which task-specific
properties are controlled by the task representations. The GP framework allows
us to analytically obtain predictions that are adapted to small data. By using
the adapted predictions in the objective function, we can train our model
efficiently and effectively so that the test predictive performance improves
when adapted to newly given small data. In our experiments, we demonstrate that
the proposed method achieves better predictive performance than existing
meta-learning methods using spatial datasets.
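The core mechanism described above, a GP framework whose closed-form posterior yields predictions "analytically adapted" to a few observations, can be sketched minimally. This is not the authors' implementation: the kernel, lengthscale, and noise values are illustrative assumptions, and the neural task-representation network that conditions the kernel in the paper is omitted for brevity.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.5):
    # Squared-exponential kernel between two sets of spatial locations.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_adapt(X_ctx, y_ctx, X_query, noise=1e-2, lengthscale=0.5):
    # Closed-form GP posterior mean: the analytic adaptation to a few
    # observed (location, value) pairs, with no gradient steps at test time.
    K = rbf_kernel(X_ctx, X_ctx, lengthscale) + noise * np.eye(len(X_ctx))
    Ks = rbf_kernel(X_query, X_ctx, lengthscale)
    return Ks @ np.linalg.solve(K, y_ctx)

rng = np.random.default_rng(0)
X_ctx = rng.uniform(size=(5, 2))      # a few observed locations in a new region
y_ctx = np.sin(X_ctx.sum(axis=1))     # their attribute values (toy ground truth)
X_query = rng.uniform(size=(3, 2))    # locations to predict
mu = gp_adapt(X_ctx, y_ctx, X_query)  # predictions adapted to the small dataset
```

Because the adaptation is a linear solve rather than an inner optimization loop, it can sit inside a meta-training objective and be differentiated through, which is what makes the training "efficient and effective" in the paper's terms.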
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find solutions reachable by our training procedure, including gradient descent and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
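The BMU-assignment step can be sketched as follows. This is a hedged illustration, not the paper's code: the codebook is a stand-in for a pre-trained SOM, and the labels, grid size, and nearest-labeled-unit rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
codebook = rng.normal(size=(4, 4, 3))  # stand-in for a pre-trained 4x4 SOM over 3-D inputs

def bmu(codebook, x):
    # Best matching unit: grid coordinates of the weight vector closest to x.
    d = np.linalg.norm(codebook - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Assign the few labeled points to their BMUs on the grid.
labeled = {bmu(codebook, x): y for x, y in [(rng.normal(size=3), v) for v in (0.2, 0.8)]}

def predict(x):
    # An unlabeled point inherits the label of the labeled unit
    # nearest (in grid distance) to its own BMU.
    i, j = bmu(codebook, x)
    _, y = min(labeled.items(),
               key=lambda kv: (kv[0][0] - i) ** 2 + (kv[0][1] - j) ** 2)
    return y
```

The key property is that labels propagate through the map's topology, so only a handful of labeled points are needed.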
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z) - Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model takes features as input and outputs predicted labels; 2) a graph neural network as an upper model learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
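One round of the upper model's message passing can be sketched in simplified form. This is an assumption-laden toy, not the paper's GNN: instance embeddings, the value-weighted mean aggregation, and all shapes are hypothetical stand-ins for the learned components.

```python
import numpy as np

rng = np.random.default_rng(2)
inst_emb = rng.normal(size=(6, 8))   # hypothetical embeddings of 6 data instances
                                     # (in the paper, produced by learned modules)

# A previously unseen feature is a new node in the feature-data graph,
# connected to the instances where it is observed. One mean-aggregation
# message-passing step extrapolates its embedding from its neighbors.
new_feat_vals = rng.normal(size=6)   # observed values of the unseen feature
weights = new_feat_vals / (np.abs(new_feat_vals).sum() + 1e-8)
new_feat_emb = weights @ inst_emb    # extrapolated feature embedding, shape (8,)
```

The point of the construction is that new feature columns never require retraining the backbone; they only add nodes and edges to the graph.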
arXiv Detail & Related papers (2021-10-09T09:02:45Z) - Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference speed while retaining comparable performance.
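The nearest-neighbor LM scheme being accelerated works roughly as sketched below. This is a generic kNN-LM illustration under stated assumptions (random datastore, Euclidean distance, fixed interpolation weight), not the paper's optimized implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
V, D, N = 5, 16, 100                 # vocab size, key dim, datastore size (toy values)
keys = rng.normal(size=(N, D))       # stored context representations
vals = rng.integers(0, V, size=N)    # next token observed after each stored context

def knn_lm(query, p_lm, k=8, lam=0.25, temp=1.0):
    # Retrieve the k nearest stored contexts, build a distribution over
    # their recorded next tokens, and interpolate with the base LM.
    d = np.linalg.norm(keys - query, axis=1)
    nn = np.argsort(d)[:k]
    w = np.exp(-d[nn] / temp)            # closer neighbors weigh more
    p_knn = np.zeros(V)
    np.add.at(p_knn, vals[nn], w)        # accumulate weight per token
    p_knn /= p_knn.sum()
    return lam * p_knn + (1 - lam) * p_lm

p_lm = np.full(V, 1 / V)                 # stand-in base LM distribution
p = knn_lm(rng.normal(size=D), p_lm)
```

The inference cost is dominated by the datastore search, which is why the paper's speed-ups target the retrieval step.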
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
- Neural Task Success Classifiers for Robotic Manipulation from Few Real Demonstrations [1.7205106391379026]
This paper presents a novel classifier that learns to classify task completion only from a few demonstrations.
We compare different neural classifiers: fully connected, fully convolutional, sequence-to-sequence, and domain-adaptation-based.
Our model, i.e. fully convolutional neural network with domain adaptation and timing features, achieves an average classification accuracy of 97.3% and 95.5% across tasks.
arXiv Detail & Related papers (2021-07-01T19:58:16Z)
- Locally Sparse Networks for Interpretable Predictions [7.362415721170984]
We propose a framework for training locally sparse neural networks where the local sparsity is learned via a sample-specific gating mechanism.
The sample-specific sparsity is predicted via a gating network, which is trained in tandem with the prediction network.
We demonstrate that our method outperforms state-of-the-art models when predicting the target function with far fewer features per instance.
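The sample-specific gating mechanism can be sketched as follows. This is a minimal illustration, not the paper's architecture: both networks are reduced to single random linear layers, and the hard threshold is an assumed stand-in for the learned sparsification.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 10                                        # number of input features
Wg = rng.normal(size=(d, d))                  # toy gating-network weights
Wp = rng.normal(size=(d, 1))                  # toy prediction-network weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def forward(x, threshold=0.5):
    gates = sigmoid(x @ Wg)                   # per-feature gate, computed from x itself
    mask = gates > threshold                  # hard sparsity: features this sample uses
    y = (x * gates * mask) @ Wp               # prediction from the gated features
    return y, mask

x = rng.normal(size=d)
y, mask = forward(x)
```

Because the mask is a function of the input, each sample gets its own small set of active features, which is what makes the per-instance predictions interpretable.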
arXiv Detail & Related papers (2021-06-11T15:46:50Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
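The BNN-based metric estimation can be sketched in spirit as follows. This is a loose illustration under stated assumptions, not the ALT-MAS algorithm: posterior samples are faked with random logits, and expected accuracy is estimated as the posterior predictive confidence in its own predicted class.

```python
import numpy as np

rng = np.random.default_rng(5)
M, N, C = 10, 50, 3                      # posterior samples, test points, classes (toy)
logits = rng.normal(size=(M, N, C))      # hypothetical BNN output samples on unlabeled data

# Softmax each sample, then average over samples: posterior predictive distribution.
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
p_mean = probs.mean(axis=0)              # shape (N, C)

# Label-free estimate of expected accuracy: the probability mass the
# posterior predictive assigns to its own predicted class, averaged over points.
est_accuracy = p_mean.max(axis=1).mean()
```

The appeal is that only a small labeled set is needed to calibrate the BNN, after which metrics are estimated from its predictive distribution on unlabeled test data.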
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.