TIML: Task-Informed Meta-Learning for Agriculture
- URL: http://arxiv.org/abs/2202.02124v1
- Date: Fri, 4 Feb 2022 13:27:55 GMT
- Authors: Gabriel Tseng and Hannah Kerner and David Rolnick
- Abstract summary: We build on previous work exploring the use of meta-learning for agricultural contexts in data-sparse regions.
We introduce task-informed meta-learning (TIML), an augmentation to model-agnostic meta-learning which takes advantage of task-specific metadata.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Labeled datasets for agriculture are extremely spatially imbalanced. When
developing algorithms for data-sparse regions, a natural approach is to use
transfer learning from data-rich regions. While standard transfer learning
approaches typically leverage only direct inputs and outputs, geospatial
imagery and agricultural data are rich in metadata that can inform transfer
learning algorithms, such as the spatial coordinates of data-points or the
class of task being learned. We build on previous work exploring the use of
meta-learning for agricultural contexts in data-sparse regions and introduce
task-informed meta-learning (TIML), an augmentation to model-agnostic
meta-learning which takes advantage of task-specific metadata. We apply TIML to
crop type classification and yield estimation, and find that TIML significantly
improves performance compared to a range of benchmarks in both contexts, across
a diversity of model architectures. While we focus on tasks from agriculture,
TIML could offer benefits to any meta-learning setup with task-specific
metadata, such as classification of geo-tagged images and species distribution
modelling.
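The abstract does not specify how TIML injects task metadata into the model, so the following is only a minimal sketch of the general idea: encode task metadata (spatial coordinates and task class) into a vector and use it to modulate the model's features, here via an assumed FiLM-style scale-and-shift. All names (`encode_metadata`, `TaskInformedModel`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_metadata(lat, lon, task_class, n_classes=3):
    """Encode task metadata (spatial coordinates + task class) as a vector.
    Coordinates are scaled to roughly [-1, 1]; the class is one-hot encoded."""
    onehot = np.zeros(n_classes)
    onehot[task_class] = 1.0
    return np.concatenate([[lat / 90.0, lon / 180.0], onehot])

class TaskInformedModel:
    """Linear feature extractor whose features are modulated by the task
    metadata embedding (an assumed FiLM-style element-wise scale and shift)."""
    def __init__(self, in_dim, feat_dim, meta_dim):
        self.W = rng.normal(0, 0.1, (feat_dim, in_dim))    # feature weights
        self.G = rng.normal(0, 0.1, (feat_dim, meta_dim))  # metadata -> scale
        self.B = rng.normal(0, 0.1, (feat_dim, meta_dim))  # metadata -> shift
        self.w_out = rng.normal(0, 0.1, feat_dim)

    def forward(self, x, meta):
        feats = self.W @ x
        scale = 1.0 + self.G @ meta  # task-specific feature scaling
        shift = self.B @ meta        # task-specific feature shift
        return self.w_out @ (scale * feats + shift)

# Hypothetical crop-type task with illustrative coordinates.
meta = encode_metadata(lat=0.34, lon=32.58, task_class=1)
model = TaskInformedModel(in_dim=4, feat_dim=8, meta_dim=meta.size)
x = rng.normal(size=4)
y_hat = model.forward(x, meta)
```

In a full MAML-style setup, the metadata encoder would be meta-learned alongside the base model, so that tasks with similar metadata (e.g. nearby coordinates) start adaptation from similar conditioned features.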
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- Fields of The World: A Machine Learning Benchmark Dataset For Global Agricultural Field Boundary Segmentation [12.039406240082515]
Fields of The World (FTW) is a novel benchmark dataset for agricultural field instance segmentation.
With 70,462 samples, FTW is an order of magnitude larger than previous datasets.
We show that models trained on FTW have better zero-shot and fine-tuning performance in held-out countries.
arXiv Detail & Related papers (2024-09-24T17:20:58Z)
- Fine-tuning Large Enterprise Language Models via Ontological Reasoning [5.12835891233968]
Large Language Models (LLMs) exploit fine-tuning as a technique to adapt to diverse goals, thanks to task-specific training data.
We propose a novel neurosymbolic architecture that leverages the power of ontological reasoning to build task- and domain-specific corpora for LLM fine-tuning.
arXiv Detail & Related papers (2023-06-19T06:48:45Z)
- Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning across various domains.
arXiv Detail & Related papers (2022-05-20T06:53:03Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our proposed method, meta-learning with task interpolation (MLTI), effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
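The core MLTI operation described above, interpolating the features and labels of a sampled task pair, can be sketched as follows. The abstract does not give the mixing distribution; this sketch assumes a mixup-style Beta draw, and the function name `interpolate_tasks` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_tasks(task_a, task_b, alpha=0.5):
    """Create a synthetic meta-training task by interpolating the features
    and labels of two sampled tasks. The mixing weight lam is drawn from
    Beta(alpha, alpha), as in standard mixup (an assumption here)."""
    lam = rng.beta(alpha, alpha)
    Xa, ya = task_a
    Xb, yb = task_b
    X_new = lam * Xa + (1 - lam) * Xb
    y_new = lam * ya + (1 - lam) * yb
    return X_new, y_new, lam

# Two toy regression tasks, each with 5 examples of 3 features.
task_a = (rng.normal(size=(5, 3)), rng.normal(size=5))
task_b = (rng.normal(size=(5, 3)), rng.normal(size=5))
X_mix, y_mix, lam = interpolate_tasks(task_a, task_b)
```

Each call yields a new synthetic task, which densifies the meta-training task distribution when real tasks are scarce.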
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
- Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation [78.28390172958643]
We identify two key aspects that can help to alleviate multiple domain shifts in multi-target domain adaptation (MTDA).
We propose Curriculum Graph Co-Teaching (CGCT) that uses a dual classifier head, with one of them being a graph convolutional network (GCN) which aggregates features from similar samples across the domains.
When the domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones.
arXiv Detail & Related papers (2021-04-01T23:41:41Z)
- Simple multi-dataset detection [83.9604523643406]
We present a simple method for training a unified detector on multiple large-scale datasets.
We show how to automatically integrate dataset-specific outputs into a common semantic taxonomy.
Our approach does not require manual taxonomy reconciliation.
arXiv Detail & Related papers (2021-02-25T18:55:58Z)
- Multi-source Pseudo-label Learning of Semantic Segmentation for the Scene Recognition of Agricultural Mobile Robots [0.6445605125467573]
This paper describes a novel method of training a semantic segmentation model for environment recognition of agricultural mobile robots by unsupervised domain adaptation.
We propose to use multiple publicly available datasets of outdoor images as source datasets.
We demonstrate in experiments that combining our proposed pseudo-label generation method with the existing training method improves performance by up to 14.3% mIoU.
arXiv Detail & Related papers (2021-02-12T08:17:10Z)
- DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks [88.62288327934499]
We propose a novel augmentation method with language models trained on the linearized labeled sentences.
Our method is applicable to both supervised and semi-supervised settings.
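The "linearized labeled sentences" this entry refers to can be sketched as follows: each labelled sentence is flattened into a single token stream so that a plain language model can be trained on it and later sample new tagged sentences. This sketch assumes one common scheme in which each non-O tag is inserted as a token before the word it labels; the function name is hypothetical.

```python
def linearize(tokens, tags, outside_tag="O"):
    """Flatten a labelled sentence into one token stream by inserting each
    non-O tag before the word it labels, so a language model can be trained
    on ordinary sequences of tokens."""
    out = []
    for tok, tag in zip(tokens, tags):
        if tag != outside_tag:
            out.append(tag)
        out.append(tok)
    return " ".join(out)

linearized = linearize(["John", "lives", "in", "Paris"],
                       ["B-PER", "O", "O", "B-LOC"])
print(linearized)  # -> "B-PER John lives in B-LOC Paris"
```

Sentences sampled from a language model trained on such streams can then be de-linearized back into synthetic (token, tag) pairs for augmentation.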
arXiv Detail & Related papers (2020-11-03T07:49:15Z)
- Meta-Learning for Few-Shot Land Cover Classification [3.8529010979482123]
We evaluate the model-agnostic meta-learning (MAML) algorithm on classification and segmentation tasks.
We find that few-shot model adaptation outperforms pre-training with regular gradient descent.
This indicates that model optimization with meta-learning may benefit tasks in the Earth sciences.
arXiv Detail & Related papers (2020-04-28T09:42:41Z)
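The few-shot model adaptation this entry evaluates is MAML's inner loop: starting from meta-learned initial weights, take a few gradient steps on a small labelled support set from the new region. The following is a toy sketch on a linear regression task; the meta-learned initialization is stood in by zeros, since meta-training itself is out of scope here.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse_grad(w, X, y):
    """Gradient of mean-squared error for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def maml_adapt(w_meta, support_X, support_y, lr=0.05, steps=10):
    """MAML-style few-shot adaptation: a handful of gradient steps on the
    support set, starting from the meta-learned initial weights."""
    w = w_meta.copy()
    for _ in range(steps):
        w -= lr * mse_grad(w, support_X, support_y)
    return w

# Toy task: 8 labelled samples ("shots"), each with 4 spectral bands.
w_true = np.array([1.0, -2.0, 0.5, 3.0])
X = rng.normal(size=(8, 4))
y = X @ w_true
w0 = np.zeros(4)  # stand-in for a meta-learned initialization
w_adapted = maml_adapt(w0, X, y)
loss_before = np.mean((X @ w0 - y) ** 2)
loss_after = np.mean((X @ w_adapted - y) ** 2)  # adaptation reduces support loss
```

In full MAML, the initialization `w_meta` is itself optimized so that this inner loop converges quickly across many tasks, which is what gives the few-shot advantage over plain pre-training.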
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.