MS-LaTTE: A Dataset of Where and When To-do Tasks are Completed
- URL: http://arxiv.org/abs/2111.06902v1
- Date: Fri, 12 Nov 2021 19:01:06 GMT
- Title: MS-LaTTE: A Dataset of Where and When To-do Tasks are Completed
- Authors: Sujay Kumar Jauhar, Nirupama Chandrasekaran, Michael Gamon and Ryen W. White
- Abstract summary: We release a novel, real-life, large-scale dataset called MS-LaTTE.
It captures two core aspects of the context surrounding task completion: location and time.
We test the dataset on the two problems of predicting spatial and temporal task co-occurrence.
- Score: 14.009925631455092
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Tasks are a fundamental unit of work in the daily lives of people, who are
increasingly using digital means to keep track of, organize, triage and act on
them. These digital tools -- such as task management applications -- provide a
unique opportunity to study and understand tasks and their connection to the
real world, and through intelligent assistance, help people be more productive.
By logging signals such as text, timestamp information, and social connectivity
graphs, an increasingly rich and detailed picture of how tasks are created and
organized, what makes them important, and who acts on them, can be
progressively developed. Yet the context around actual task completion remains
fuzzy, due to the basic disconnect between actions taken in the real world and
telemetry recorded in the digital world. Thus, in this paper we compile and
release a novel, real-life, large-scale dataset called MS-LaTTE that captures
two core aspects of the context surrounding task completion: location and time.
We describe our annotation framework and conduct a number of analyses on the
data that were collected, demonstrating that it captures intuitive contextual
properties for common tasks. Finally, we test the dataset on the two problems
of predicting spatial and temporal task co-occurrence, concluding that
predictors for co-location and co-time are both learnable, with a fine-tuned
BERT model outperforming several other baselines. The MS-LaTTE dataset
provides an opportunity to tackle many new modeling challenges in contextual
task understanding and we hope that its release will spur future research in
task intelligence more broadly.
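The co-occurrence experiments frame each problem as pairwise classification over the text of two to-dos. As a minimal sketch of that setup, assuming the standard Hugging Face transformers API (the example tasks and label convention are illustrative, not the authors' released pipeline):

```python
# Hypothetical sketch: task co-occurrence as BERT sentence-pair classification.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # co-occur vs. not
)

# Encode the two to-dos as one [CLS] A [SEP] B [SEP] sequence so BERT
# attends across the pair when predicting co-location (or co-time).
inputs = tokenizer("buy milk", "pick up eggs", return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
label = torch.tensor([1])  # illustrative: this pair is co-located
loss = model(**inputs, labels=label).loss
loss.backward()
optimizer.step()
```

The same pair encoder serves both problems; only the labels (co-location vs. co-time) change.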
Related papers
- Get Rid of Task Isolation: A Continuous Multi-task Spatio-Temporal Learning Framework [10.33844348594636]
We argue that it is essential to propose a Continuous Multi-task Spatio-Temporal learning framework (CMuST) to empower collective urban intelligence.
CMuST reforms urban spatio-temporal learning from single-domain to cooperative multi-task learning.
We establish a benchmark of three cities for multi-task spatio-temporal learning, and empirically demonstrate the superiority of CMuST.
arXiv Detail & Related papers (2024-10-14T14:04:36Z)
- Prompt-Based Spatio-Temporal Graph Transfer Learning [22.855189872649376]
We propose STGP, a prompt-based framework capable of adapting to multiple diverse tasks in a data-scarce domain.
We employ learnable prompts to achieve domain and task transfer in a two-stage pipeline; a general sketch of the idea follows below.
Our experiments demonstrate that STGP outperforms state-of-the-art baselines on three tasks (forecasting, kriging, and extrapolation), achieving an improvement of up to 10.7%.
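A hedged sketch of the learnable-prompt idea in its generic form: trainable prompt vectors are prepended to the inputs of a frozen pre-trained backbone, so only the prompts (and any light head) adapt to the target domain and task. STGP's actual two-stage pipeline and spatio-temporal backbone differ; all names below are illustrative.

```python
# Hypothetical sketch: prepending learnable prompts to a frozen backbone.
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, d_model: int, n_prompts: int = 8):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # backbone stays frozen
        # The only new trainable parameters are the prompt tokens.
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); share the prompts across the batch.
        prompts = self.prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        return self.backbone(torch.cat([prompts, x], dim=1))

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = PromptedEncoder(nn.TransformerEncoder(layer, num_layers=2), d_model=64)
out = encoder(torch.randn(2, 12, 64))  # -> (2, 8 + 12, 64)
```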
arXiv Detail & Related papers (2024-05-21T02:06:40Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even for classification tasks with little or entirely non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
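As a generic sketch of such a distribution-matching term (assuming two task heads that produce logits over a shared label space for the same inputs; the paper's exact objective and head structure differ):

```python
# Hypothetical sketch: symmetric KL between two heads' softened predictions.
import torch
import torch.nn.functional as F

def distribution_matching_loss(logits_a: torch.Tensor,
                               logits_b: torch.Tensor,
                               temperature: float = 2.0) -> torch.Tensor:
    """Encourage two task heads to agree on shared (possibly unlabeled) inputs."""
    p = F.log_softmax(logits_a / temperature, dim=-1)
    q = F.log_softmax(logits_b / temperature, dim=-1)
    kl_pq = F.kl_div(p, q, reduction="batchmean", log_target=True)
    kl_qp = F.kl_div(q, p, reduction="batchmean", log_target=True)
    return 0.5 * (kl_pq + kl_qp)

# Usage: add this term to each task's own supervised loss where labels exist.
loss_match = distribution_matching_loss(torch.randn(4, 8), torch.randn(4, 8))
```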
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks [65.23947618404046]
We introduce a framework that acquires goal-conditioned policies for unseen temporally extended tasks via offline reinforcement learning on broad data.
When faced with a novel task goal, the framework uses an affordance model to plan a sequence of lossy representations as subgoals that decomposes the original task into easier problems.
We show that our framework can be pre-trained on large-scale datasets of robot experiences from prior work and efficiently fine-tuned for novel tasks, entirely from visual inputs without any manual reward engineering.
arXiv Detail & Related papers (2022-10-12T21:46:38Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
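A minimal sketch of the task-prefix idea in general form: a short task tag is prepended to every example so one shared model can condition on which task it is solving. The tag format and task mix below are illustrative, not the paper's prefix set.

```python
# Hypothetical sketch: routing a mixed multi-task batch via text prefixes.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def with_task_prefix(task: str, text: str) -> str:
    """Prepend a task tag, e.g. '[sentiment] This movie was wonderful.'"""
    return f"[{task}] {text}"

# The prefix is the only task-routing signal the shared encoder sees.
batch = [
    with_task_prefix("sentiment", "This movie was wonderful."),
    with_task_prefix("paraphrase", "He left early. [SEP] He departed soon."),
]
encoded = tokenizer(batch, padding=True, truncation=True, return_tensors="pt")
```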
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a reinforcement learning problem where each task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- FETA: A Benchmark for Few-Sample Task Transfer in Open-Domain Dialogue [70.65782786401257]
This work explores conversational task transfer by introducing FETA: a benchmark for few-sample task transfer in open-domain dialogue.
FETA contains two underlying sets of conversations upon which there are 10 and 7 tasks annotated, enabling the study of intra-dataset task transfer.
We utilize three popular language models and three learning algorithms to analyze the transferability between 132 source-target task pairs.
arXiv Detail & Related papers (2022-05-12T17:59:00Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.