Optimal to-do list gamification
- URL: http://arxiv.org/abs/2008.05228v2
- Date: Tue, 3 Aug 2021 20:44:51 GMT
- Title: Optimal to-do list gamification
- Authors: Jugoslav Stojcheski, Valkyrie Felso, Falk Lieder
- Abstract summary: We introduce and evaluate a scalable method for identifying which tasks are most important in the long run and incentivizing each task according to its long-term value.
Our method makes it possible to create to-do list gamification apps that can handle the size and complexity of people's to-do lists in the real world.
- Score: 4.8986598953553555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What should I work on first? What can wait until later? Which projects should
I prioritize and which tasks are not worth my time? These are challenging
questions that many people face every day. People's intuitive strategy is to
prioritize their immediate experience over the long-term consequences. This
leads to procrastination and the neglect of important long-term projects in
favor of seemingly urgent tasks that are less important. Optimal gamification
strives to help people overcome these problems by incentivizing each task by a
number of points that communicates how valuable it is in the long run.
Unfortunately, computing the optimal number of points with standard dynamic
programming methods quickly becomes intractable as the number of a person's
projects and the number of tasks required by each project increase. Here, we
introduce and evaluate a scalable method for identifying which tasks are most
important in the long run and incentivizing each task according to its
long-term value. Our method makes it possible to create to-do list gamification
apps that can handle the size and complexity of people's to-do lists in the
real world.
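The abstract notes that computing optimal point values with standard dynamic programming becomes intractable as lists grow, because the state space is the set of all subsets of remaining tasks. The following sketch illustrates that idea on a toy to-do list; the task names, costs, project rewards, discount factor, and the simple cost/payout reward model are all invented for illustration and are only a simplification of the paper's method, not its actual algorithm or API:

```python
from itertools import combinations

GAMMA = 0.95  # hypothetical discount factor; makes task ordering matter

# Hypothetical toy to-do list: each task has an immediate cost, and
# completing every task of a project pays out that project's reward.
TASK_COST = {"write_report": 4.0, "email_client": 1.0, "plan_project": 2.0}
PROJECTS = {
    "consulting": ({"write_report", "email_client"}, 20.0),
    "startup": ({"plan_project"}, 5.0),
}

def reward(done_before, task):
    """Immediate reward: -cost, plus any project payout this task completes."""
    r = -TASK_COST[task]
    done_after = done_before | {task}
    for needed, payout in PROJECTS.values():
        if needed <= done_after and not needed <= done_before:
            r += payout
    return r

def value_iteration():
    """Exact V(s) by backward induction over subsets of completed tasks."""
    tasks = list(TASK_COST)
    # The state space is every subset of completed tasks: 2^n states,
    # which is exactly why exact dynamic programming stops scaling.
    states = [frozenset(c) for n in range(len(tasks) + 1)
              for c in combinations(tasks, n)]
    V = {s: 0.0 for s in states}
    # Doing a task strictly grows the completed set, so the MDP is acyclic
    # and one sweep from "most done" to "least done" converges exactly.
    for s in sorted(states, key=len, reverse=True):
        remaining = [t for t in tasks if t not in s]
        if remaining:
            V[s] = max(reward(s, t) + GAMMA * V[s | {t}] for t in remaining)
    return V

V = value_iteration()
start = frozenset()
# Each task's points = its Q-value from the empty list: immediate reward
# plus discounted long-term value of the state it leads to.
points = {t: reward(start, t) + GAMMA * V[start | {t}] for t in TASK_COST}
```

In this toy instance the cheap `email_client` task earns the most points, because it unlocks the high-reward consulting project at low cost; with n tasks the table has 2^n entries, which motivates the scalable approximation the paper proposes.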
Related papers
- Adaptive Manipulation using Behavior Trees [12.061325774210392]
We present the adaptive behavior tree, which enables a robot to quickly adapt to both visual and non-visual observations during task execution.
We test our approach on a number of tasks commonly found in industrial settings.
arXiv Detail & Related papers (2024-06-20T18:01:36Z)
- Can Foundation Models Watch, Talk and Guide You Step by Step to Make a Cake? [62.59699229202307]
Despite advances in AI, it remains a significant challenge to develop interactive task guidance systems.
We created a new multimodal benchmark dataset, Watch, Talk and Guide (WTaG), based on natural interaction between a human user and a human instructor.
We leveraged several foundation models to study to what extent these models can be quickly adapted to perceptually enabled task guidance.
arXiv Detail & Related papers (2023-11-01T15:13:49Z)
- Object-Centric Multi-Task Learning for Human Instances [8.035105819936808]
We explore a compact multi-task network architecture that maximally shares the parameters of the multiple tasks via object-centric learning.
We propose a novel query design, called the human-centric query (HCQ), to encode human instance information effectively.
Experimental results show that the proposed multi-task network achieves comparable accuracy to state-of-the-art task-specific models.
arXiv Detail & Related papers (2023-03-13T01:10:50Z)
- Reinforcement Learning with Success Induced Task Prioritization [68.8204255655161]
We introduce Success Induced Task Prioritization (SITP), a framework for automatic curriculum learning.
The algorithm selects the order of tasks that provide the fastest learning for agents.
We demonstrate that SITP matches or surpasses the results of other curriculum design methods.
arXiv Detail & Related papers (2022-12-30T12:32:43Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Optimal To-Do List Gamification for Long Term Planning [0.6882042556551609]
We extend the previous version of our optimal gamification method with added services for helping people decide which tasks should and should not be done when there is not enough time to do everything.
We test the accuracy of the incentivized to-do list by comparing the points our strategy assigns with the points computed exactly using Value Iteration for a variety of case studies.
To demonstrate its functionality, we released an API that makes it easy to deploy our method in Web and app services.
arXiv Detail & Related papers (2021-09-14T08:06:01Z)
- Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient affects another task's loss.
arXiv Detail & Related papers (2021-09-10T02:01:43Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
- Hierarchical Reinforcement Learning as a Model of Human Task Interleaving [60.95424607008241]
We develop a hierarchical model of supervisory control driven by reinforcement learning.
The model reproduces known empirical effects of task interleaving.
The results support hierarchical RL as a plausible model of task interleaving.
arXiv Detail & Related papers (2020-01-04T17:53:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.